[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_172) - Build # 7404 - Still Unstable!

2018-07-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7404/
Java: 64bit/jdk1.8.0_172 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.handler.dataimport.TestSolrEntityProcessorEndToEnd.testFullImport

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_FCAD03B9A1D96F9-001\tempDir-006\collection1:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_FCAD03B9A1D96F9-001\tempDir-006\collection1

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_FCAD03B9A1D96F9-001\tempDir-006:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_FCAD03B9A1D96F9-001\tempDir-006
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_FCAD03B9A1D96F9-001\tempDir-006\collection1:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_FCAD03B9A1D96F9-001\tempDir-006\collection1
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_FCAD03B9A1D96F9-001\tempDir-006:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_FCAD03B9A1D96F9-001\tempDir-006

at 
__randomizedtesting.SeedInfo.seed([FCAD03B9A1D96F9:8A66ADA0221208D9]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:318)
at 
org.apache.solr.handler.dataimport.TestSolrEntityProcessorEndToEnd$SolrInstance.tearDown(TestSolrEntityProcessorEndToEnd.java:360)
at 
org.apache.solr.handler.dataimport.TestSolrEntityProcessorEndToEnd.tearDown(TestSolrEntityProcessorEndToEnd.java:142)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:992)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 

[jira] [Commented] (SOLR-12516) JSON "range" facets can incorrectly refine subfacets for buckets

2018-07-06 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535456#comment-16535456
 ] 

Lucene/Solr QA commented on SOLR-12516:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} SOLR-12516 does not apply to master. Rebase required? Wrong 
Branch? See 
https://wiki.apache.org/solr/HowToContribute#Creating_the_patch_file for help. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-12516 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12930455/SOLR-12516.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/139/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> JSON "range" facets can incorrectly refine subfacets for buckets
> 
>
> Key: SOLR-12516
> URL: https://issues.apache.org/jira/browse/SOLR-12516
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12516.patch, SOLR-12516.patch, SOLR-12516.patch, 
> SOLR-12516.patch, SOLR-12516.patch, SOLR-12516.patch, SOLR-12516.patch
>
>
> While simple {{type:range}} facets don't benefit from refinement, because 
> every shard returns the same set of buckets, some bugs currently exist when a 
> range facet contains sub-facets that use refinement:
> # the optional {{other}} buckets (before/after/between) are not considered 
> during refinement
> # when using the {{include}} option: if {{edge}} is specified, then the 
> refinement of all range buckets mistakenly includes the lower bound of the 
> range, regardless of whether {{lower}} was specified.
> 
> #1 occurs because {{FacetRangeMerger extends 
> FacetRequestSortedMerger}} ... however {{FacetRangeMerger}} does 
> not override {{getRefinement(...)}}, which means only 
> {{FacetRequestSortedMerger.buckets}} is evaluated and considered for 
> refinement. The additional, special-purpose {{FacetBucket}} instances 
> tracked in {{FacetRangeMerger}} are never considered for refinement.
> #2 exists because of a mistake in the implementation of {{refineBucket}} and 
> how it computes the {{start}} value.
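
As an illustration only (not from the issue; collection, field, and ZooKeeper 
address are hypothetical), a minimal SolrJ sketch of the kind of request this 
affects: a {{type:range}} facet with {{other}} buckets and {{include:edge}}, 
whose sub-facet asks for refinement on a multi-shard collection.

{code:java}
// Hypothetical sketch: a JSON range facet whose nested terms facet uses
// refinement. All names here are made up for illustration.
import java.util.Collections;
import java.util.Optional;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class RangeFacetRefinementExample {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client =
        new CloudSolrClient.Builder(Collections.singletonList("localhost:9983"),
                                    Optional.empty()).build()) {
      SolrQuery q = new SolrQuery("*:*");
      q.setRows(0);
      // The outer range facet returns identical buckets from every shard, but the
      // nested terms facet needs refinement; "other" adds before/after/between buckets.
      q.add("json.facet",
          "{ prices : { type:range, field:price, start:0, end:100, gap:10, "
        + "             other:all, include:edge, "
        + "             facet : { top_cats : { type:terms, field:cat, refine:true } } } }");
      QueryResponse rsp = client.query("techproducts", q);
      System.out.println(rsp.getResponse().get("facets"));
    }
  }
}
{code}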



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11694) Remove extremely outdated UIMA contrib module

2018-07-06 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535421#comment-16535421
 ] 

Lucene/Solr QA commented on SOLR-11694:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 17 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
44s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check configsets' lucene version 
{color} | {color:green}  0m 48s{color} | {color:green} the patch passed {color} 
|
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  1m  6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  0m 48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} Check licenses {color} | {color:red}  0m 
48s{color} | {color:red} Check licenses check-licenses failed {color} |
| {color:red}-1{color} | {color:red} Check licenses {color} | {color:red}  0m 
19s{color} | {color:red} Check licenses check-licenses failed {color} |
| {color:red}-1{color} | {color:red} Validate source patterns {color} | 
{color:red}  0m 48s{color} | {color:red} Check licenses check-licenses failed 
{color} |
| {color:red}-1{color} | {color:red} Validate ref guide {color} | {color:red}  
0m 48s{color} | {color:red} Check licenses check-licenses failed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
31s{color} | {color:green} analysis in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 23m  
5s{color} | {color:green} core in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m  6s{color} 
| {color:red} tools in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-11694 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12930389/SOLR-11694.patch |
| Optional Tests |  compile  javac  unit  ratsources  validatesourcepatterns  
checkforbiddenapis  checklicenses  checkluceneversion  validaterefguide  |
| uname | Linux lucene1-us-west 3.13.0-88-generic #135-Ubuntu SMP Wed Jun 8 
21:10:42 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 6d6e671 |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on April 8 2014 |
| Default Java | 1.8.0_172 |
| Check licenses | 
https://builds.apache.org/job/PreCommit-SOLR-Build/138/artifact/out/patch-check-licenses-lucene.txt
 |
| Check licenses | 
https://builds.apache.org/job/PreCommit-SOLR-Build/138/artifact/out/patch-check-licenses-solr.txt
 |
| Validate source patterns | 
https://builds.apache.org/job/PreCommit-SOLR-Build/138/artifact/out/patch-check-licenses-lucene.txt
 |
| Validate ref guide | 
https://builds.apache.org/job/PreCommit-SOLR-Build/138/artifact/out/patch-check-licenses-lucene.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/138/artifact/out/patch-unit-lucene_tools.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/138/testReport/ |
| modules | C: . lucene lucene/analysis lucene/core lucene/tools solr 
solr/server solr/solr-ref-guide U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/138/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Remove extremely outdated UIMA contrib module
> -
>
> Key: SOLR-11694
> URL: https://issues.apache.org/jira/browse/SOLR-11694
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - UIMA
>Reporter: Cassandra Targett
>Assignee: Alexandre Rafalovitch
>Priority: Major
> Attachments: SOLR-11694.patch
>
>
> A user on the 

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-11-ea+21) - Build # 22410 - Unstable!

2018-07-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22410/
Java: 64bit/jdk-11-ea+21 -XX:+UseCompressedOops -XX:+UseG1GC

187 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.DocValuesNotIndexedTest

Error Message:
Collection not found: dv_coll

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: dv_coll
at __randomizedtesting.SeedInfo.seed([11273FDA77E92051]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:853)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.cloud.DocValuesNotIndexedTest.createCluster(DocValuesNotIndexedTest.java:154)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.DocValuesNotIndexedTest

Error Message:
Collection not found: dv_coll

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: dv_coll
at __randomizedtesting.SeedInfo.seed([11273FDA77E92051]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:853)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.cloud.DocValuesNotIndexedTest.createCluster(DocValuesNotIndexedTest.java:154)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 

Re: [JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 255 - Still Failing

2018-07-06 Thread Chris Hostetter


   [smoker] Releases that don't seem to be tested:
   [smoker]   6.6.5



On Fri, 6 Jul 2018, Apache Jenkins Server wrote:

: Date: Fri, 6 Jul 2018 19:17:14 + (UTC)
: From: Apache Jenkins Server 
: Reply-To: dev@lucene.apache.org
: To: dev@lucene.apache.org
: Subject: [JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 255 - Still Failing
: 
: Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/255/
: 
: No tests ran.
: 
: Build Log:
: [...truncated 24201 lines...]
: [asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
: [asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
:  [java] Processed 2227 links (1778 relative) to 3000 anchors in 230 files
:  [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr-ref-guide/bare-bones-html/
: 
: -dist-changes:
:  [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/changes
: 
: -dist-keys:
:   [get] Getting: http://home.apache.org/keys/group/lucene.asc
:   [get] To: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/KEYS
: 
: package:
: 
: -unpack-solr-tgz:
: 
: -ensure-solr-tgz-exists:
: [mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked
: [untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/solr-7.5.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked
: 
: generate-maven-artifacts:
: 
: resolve:
: 
: resolve:
: 
: ivy-availability-check:
: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 
0.
: 
: -ivy-fail-disallowed-ivy-version:
: 
: ivy-fail:
: 
: ivy-configure:
: [ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml
: 
: resolve:
: 
: ivy-availability-check:
: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 
0.
: 
: -ivy-fail-disallowed-ivy-version:
: 
: ivy-fail:
: 
: ivy-configure:
: [ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml
: 
: resolve:
: 
: resolve:
: 
: ivy-availability-check:
: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 
0.
: 
: -ivy-fail-disallowed-ivy-version:
: 
: ivy-fail:
: 
: ivy-configure:
: [ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml
: 
: resolve:
: 
: ivy-availability-check:
: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 
0.
: 
: -ivy-fail-disallowed-ivy-version:
: 
: ivy-fail:
: 
: ivy-configure:
: [ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml
: 
: resolve:
: 
: ivy-availability-check:
: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 
0.
: 
: -ivy-fail-disallowed-ivy-version:
: 
: ivy-fail:
: 
: ivy-configure:
: [ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml
: 
: resolve:
: 
: ivy-availability-check:
: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 
0.
: 
: -ivy-fail-disallowed-ivy-version:
: 
: ivy-fail:
: 
: ivy-configure:
: [ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml
: 
: resolve:
: 
: ivy-availability-check:
: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 
0.
: 
: -ivy-fail-disallowed-ivy-version:
: 
: ivy-fail:
: 
: ivy-configure:
: [ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml
: 
: resolve:
: 
: ivy-availability-check:
: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 
0.
: 
: -ivy-fail-disallowed-ivy-version:
: 
: ivy-fail:
: 
: ivy-configure:
: [ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml
: 
: resolve:
: 
: ivy-availability-check:
: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 
0.
: 
: -ivy-fail-disallowed-ivy-version:
: 
: ivy-fail:
: 
: ivy-configure:
: [ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml
: 
: resolve:
: 
: ivy-availability-check:
: 

[jira] [Updated] (SOLR-12458) ADLS support for SOLR

2018-07-06 Thread Mike Wingert (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Wingert updated SOLR-12458:

Attachment: SOLR-12458.patch

> ADLS support for SOLR
> -
>
> Key: SOLR-12458
> URL: https://issues.apache.org/jira/browse/SOLR-12458
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: master (8.0)
>Reporter: Mike Wingert
>Priority: Minor
>  Labels: features
> Fix For: master (8.0)
>
> Attachments: SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch, 
> SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch, 
> SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch
>
>
> This is to track ADLS support for SOLR.
> ADLS is an HDFS-like API available in Microsoft Azure.   
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12008) Settle a location for the "correct" log4j2.xml file.

2018-07-06 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535346#comment-16535346
 ] 

Shawn Heisey edited comment on SOLR-12008 at 7/6/18 8:56 PM:
-

The patch applies to branch_7x successfully.

With the patch applied, starting Solr works without the error messages.  But 
when I tried to create a core, I got an exception:

ERROR StatusLogger Unable to access file://resources/log4j2-console.xml

After the stacktrace, an additional message was logged:

ERROR StatusLogger Reconfiguration failed: No configuration found for 
'58372a00' at 'null' in 'null'

It did create the core, despite the error messages.



was (Author: elyograg):
The patch applies to branch_7x successfully.

With the patch applied, starting Solr works.  But when I tried to create a 
core, I got an exception:

ERROR StatusLogger Unable to access file://resources/log4j2-console.xml

After the stacktrace, an additional message was logged:

ERROR StatusLogger Reconfiguration failed: No configuration found for 
'58372a00' at 'null' in 'null'

It did create the core, despite the error messages.


> Settle a location for the "correct" log4j2.xml file.
> 
>
> Key: SOLR-12008
> URL: https://issues.apache.org/jira/browse/SOLR-12008
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-12008.patch, SOLR-12008.patch, SOLR-12008.patch
>
>
> As part of SOLR-11934 I started looking at log4j.properties files. Waaay back 
> in 2015, the %C in "/solr/server/resources/log4j.properties" was changed to 
> use %c, but the file in "solr/example/resources/log4j.properties" was not 
> changed. That got me to looking around and there are a bunch of 
> log4j.properties files:
> ./solr/core/src/test-files/log4j.properties
> ./solr/example/resources/log4j.properties
> ./solr/solrj/src/test-files/log4j.properties
> ./solr/server/resources/log4j.properties
> ./solr/server/scripts/cloud-scripts/log4j.properties
> ./solr/contrib/dataimporthandler/src/test-files/log4j.properties
> ./solr/contrib/clustering/src/test-files/log4j.properties
> ./solr/contrib/ltr/src/test-files/log4j.properties
> ./solr/test-framework/src/test-files/log4j.properties
> Why do we have so many? After the log4j2 ticket gets checked in (SOLR-7887) I 
> propose the logging configuration files get consolidated. The question is 
> "how far"? 
> I at least want to get rid of the one in solr/example; users should use the 
> one in server/resources. Having to maintain these two separately is asking 
> for trouble.
> [~markrmil...@gmail.com] Do you have any wisdom on the properties file in 
> server/scripts/cloud-scripts?
> Anyone else who has a clue about why the other properties files were created, 
> especially the ones in contrib?
> And what about all the ones in various test-files directories? People didn't 
> create them for no reason, and I don't want to rediscover that it's a real 
> pain to try to re-use the one in server/resources for instance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12008) Settle a location for the "correct" log4j2.xml file.

2018-07-06 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535346#comment-16535346
 ] 

Shawn Heisey commented on SOLR-12008:
-

The patch applies to branch_7x successfully.

With the patch applied, starting Solr works.  But when I tried to create a 
core, I got an exception:

ERROR StatusLogger Unable to access file://resources/log4j2-console.xml

After the stacktrace, an additional message was logged:

ERROR StatusLogger Reconfiguration failed: No configuration found for 
'58372a00' at 'null' in 'null'

It did create the core, despite the error messages.
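
An aside, not from the comment: the double slash in that URL is suggestive. With 
a {{file://}} prefix, a {{java.net.URI}} parser treats the next segment as the 
authority rather than part of the path, so "resources" stops being a directory. 
A small standalone sketch (plain JDK, not Solr or log4j code) showing the parse:

{code:java}
import java.net.URI;

public class FileUriAuthorityDemo {
  public static void main(String[] args) {
    // With two slashes, "resources" is parsed as the URI authority (host),
    // and the path no longer names the intended file under ./resources.
    URI doubleSlash = URI.create("file://resources/log4j2-console.xml");
    System.out.println(doubleSlash.getAuthority()); // resources
    System.out.println(doubleSlash.getPath());      // /log4j2-console.xml

    // With a single slash (or file:///) everything stays in the path component.
    URI singleSlash = URI.create("file:/resources/log4j2-console.xml");
    System.out.println(singleSlash.getAuthority()); // null
    System.out.println(singleSlash.getPath());      // /resources/log4j2-console.xml
  }
}
{code}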


> Settle a location for the "correct" log4j2.xml file.
> 
>
> Key: SOLR-12008
> URL: https://issues.apache.org/jira/browse/SOLR-12008
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-12008.patch, SOLR-12008.patch, SOLR-12008.patch
>
>
> As part of SOLR-11934 I started looking at log4j.properties files. Waaay back 
> in 2015, the %C in "/solr/server/resources/log4j.properties" was changed to 
> use %c, but the file in "solr/example/resources/log4j.properties" was not 
> changed. That got me to looking around and there are a bunch of 
> log4j.properties files:
> ./solr/core/src/test-files/log4j.properties
> ./solr/example/resources/log4j.properties
> ./solr/solrj/src/test-files/log4j.properties
> ./solr/server/resources/log4j.properties
> ./solr/server/scripts/cloud-scripts/log4j.properties
> ./solr/contrib/dataimporthandler/src/test-files/log4j.properties
> ./solr/contrib/clustering/src/test-files/log4j.properties
> ./solr/contrib/ltr/src/test-files/log4j.properties
> ./solr/test-framework/src/test-files/log4j.properties
> Why do we have so many? After the log4j2 ticket gets checked in (SOLR-7887) I 
> propose the logging configuration files get consolidated. The question is 
> "how far"? 
> I at least want to get rid of the one in solr/example; users should use the 
> one in server/resources. Having to maintain these two separately is asking 
> for trouble.
> [~markrmil...@gmail.com] Do you have any wisdom on the properties file in 
> server/scripts/cloud-scripts?
> Anyone else who has a clue about why the other properties files were created, 
> especially the ones in contrib?
> And what about all the ones in various test-files directories? People didn't 
> create them for no reason, and I don't want to rediscover that it's a real 
> pain to try to re-use the one in server/resources for instance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8383) Fix computation of mergingBytes in TieredMergePolicy

2018-07-06 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535333#comment-16535333
 ] 

Erick Erickson commented on LUCENE-8383:


[~jpountz] Any objection if I combine this JIRA and LUCENE-8385 in the same 
commit?

> Fix computation of mergingBytes in TieredMergePolicy
> 
>
> Key: LUCENE-8383
> URL: https://issues.apache.org/jira/browse/LUCENE-8383
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: LUCENE-8383.patch
>
>
> It looks like LUCENE-7976 changed mergingBytes to be computed as the sum of 
> the sizes of eligible segments, rather than the sum of the sizes of segments 
> that are currently merging, which feels wrong.
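
A rough, hypothetical sketch (generic types and helper names made up, not the 
actual {{TieredMergePolicy}} code) of the distinction being described: 
{{mergingBytes}} should sum only the segments that are already part of a running 
merge, not every segment that is merely eligible for merging.

{code:java}
import java.util.Collection;
import java.util.Set;
import java.util.function.ToLongFunction;

public final class MergingBytes {
  // Hypothetical helper: sums sizes of the eligible segments that are currently
  // merging, rather than summing the sizes of all eligible segments.
  static <S> long mergingBytes(Collection<S> eligibleSegments,
                               Set<S> currentlyMerging,
                               ToLongFunction<S> sizeInBytes) {
    long total = 0;
    for (S segment : eligibleSegments) {
      if (currentlyMerging.contains(segment)) {
        total += sizeInBytes.applyAsLong(segment);
      }
    }
    return total;
  }
}
{code}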



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_172) - Build # 2273 - Unstable!

2018-07-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2273/
Java: 32bit/jdk1.8.0_172 -client -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.TestDistributedSearch.test

Error Message:
Expected to find shardAddress in the up shard info: 
{error=org.apache.solr.client.solrj.SolrServerException: Time allowed to handle 
this request exceeded,trace=org.apache.solr.client.solrj.SolrServerException: 
Time allowed to handle this request exceeded  at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:460)
  at 
org.apache.solr.handler.component.HttpShardHandlerFactory.makeLoadBalancedRequest(HttpShardHandlerFactory.java:279)
  at 
org.apache.solr.handler.component.HttpShardHandler.lambda$submit$0(HttpShardHandler.java:175)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)  at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748) ,time=12}

Stack Trace:
java.lang.AssertionError: Expected to find shardAddress in the up shard info: 
{error=org.apache.solr.client.solrj.SolrServerException: Time allowed to handle 
this request exceeded,trace=org.apache.solr.client.solrj.SolrServerException: 
Time allowed to handle this request exceeded
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:460)
at 
org.apache.solr.handler.component.HttpShardHandlerFactory.makeLoadBalancedRequest(HttpShardHandlerFactory.java:279)
at 
org.apache.solr.handler.component.HttpShardHandler.lambda$submit$0(HttpShardHandler.java:175)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
,time=12}
at 
__randomizedtesting.SeedInfo.seed([CE529A2E7A9C3BDA:4606A5F4D4605622]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.TestDistributedSearch.comparePartialResponses(TestDistributedSearch.java:1192)
at 
org.apache.solr.TestDistributedSearch.queryPartialResults(TestDistributedSearch.java:1133)
at 
org.apache.solr.TestDistributedSearch.test(TestDistributedSearch.java:993)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsRepeatStatement.callStatement(BaseDistributedSearchTestCase.java:1034)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

Re: [jira] [Comment Edited] (SOLR-12008) Settle a location for the "correct" log4j2.xml file.

2018-07-06 Thread Erick Erickson
Cool! Thanks!


On Fri, Jul 6, 2018 at 12:59 PM, Shawn Heisey (JIRA)  wrote:
>
> [ 
> https://issues.apache.org/jira/browse/SOLR-12008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535270#comment-16535270
>  ]
>
> Shawn Heisey edited comment on SOLR-12008 at 7/6/18 7:58 PM:
> -
>
> I will fiddle with it momentarily.  Windows 7 enterprise.
>
> I will update SOLR-12538 with some detail gleaned from discussion on the 
> log4j list.
>
>
>
> was (Author: elyograg):
> I will fiddle with it momentarily.  Windows 7 enterprise.
>
> I will update SOLR-12358 with some detail gleaned from discussion on the 
> log4j list.
>
>
>> Settle a location for the "correct" log4j2.xml file.
>> 
>>
>> Key: SOLR-12008
>> URL: https://issues.apache.org/jira/browse/SOLR-12008
>> Project: Solr
>>  Issue Type: Improvement
>>  Security Level: Public(Default Security Level. Issues are Public)
>>  Components: logging
>>Reporter: Erick Erickson
>>Assignee: Erick Erickson
>>Priority: Major
>> Attachments: SOLR-12008.patch, SOLR-12008.patch, SOLR-12008.patch
>>
>>
>> As part of SOLR-11934 I started looking at log4j.properties files. Waaay 
>> back in 2015, the %C in "/solr/server/resources/log4j.properties" was 
>> changed to use %c, but the file in "solr/example/resources/log4j.properties" 
>> was not changed. That got me to looking around and there are a bunch of 
>> log4j.properties files:
>> ./solr/core/src/test-files/log4j.properties
>> ./solr/example/resources/log4j.properties
>> ./solr/solrj/src/test-files/log4j.properties
>> ./solr/server/resources/log4j.properties
>> ./solr/server/scripts/cloud-scripts/log4j.properties
>> ./solr/contrib/dataimporthandler/src/test-files/log4j.properties
>> ./solr/contrib/clustering/src/test-files/log4j.properties
>> ./solr/contrib/ltr/src/test-files/log4j.properties
>> ./solr/test-framework/src/test-files/log4j.properties
>> Why do we have so many? After the log4j2 ticket gets checked in (SOLR-7887) 
>> I propose the logging configuration files get consolidated. The question is 
>> "how far"?
>> I at least want to get rid of the one in solr/example; users should use the 
>> one in server/resources. Having to maintain these two separately is asking 
>> for trouble.
>> [~markrmil...@gmail.com] Do you have any wisdom on the properties file in 
>> server/scripts/cloud-scripts?
>> Anyone else who has a clue about why the other properties files were 
>> created, especially the ones in contrib?
>> And what about all the ones in various test-files directories? People didn't 
>> create them for no reason, and I don't want to rediscover that it's a real 
>> pain to try to re-use the one in server/resources for instance.
>
>
>
> --
> This message was sent by Atlassian JIRA
> (v7.6.3#76005)
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12538) log4j exceptions during startup on Windows

2018-07-06 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535292#comment-16535292
 ] 

Shawn Heisey edited comment on SOLR-12538 at 7/6/18 8:12 PM:
-

Transferring info from discussion on the log4j list:

I think there are two possible solutions for the Windows command script:

 * Remove the file: prefix entirely.  Because log4j2 uses a system property 
named "log4j.configurationFile" whereas log4j 1.x used a property named 
"log4j.configuration", this seems like it might be a safe option.  Side note: 
if the "file:" prefix is removed from the sysprop in log4j 1.x, it doesn't 
work.  I have first-hand experience on Linux that confirms this.  I haven't 
tried it on Windows.
 * Change the prefix to "file:///", which seems to be Java's preferred notation 
for absolute paths on Windows.  It doesn't seem to be possible to support a 
relative path on Windows when using the file: URI syntax.  I would need to do 
testing to see whether it's possible without the URI syntax.  Based on how this 
problem behaves, I think it would be possible to use a relative path as long as 
it's a bare path and not a URI.

For the bash script, we don't need to make any changes related to the file: 
prefix at this time, but I can't guarantee that changes won't be required in 
the future.  Currently, a relative path can be supported with the file: URI 
syntax on POSIX platforms, at least as long as it doesn't include the double or 
triple slash.  If we choose to remove the file: prefix for Windows, we should 
probably test the same change for the shell script.



was (Author: elyograg):
Transferring info from discussion on the log4j list:

I think there are two possible solutions for the Windows command script:

 * Remove the file: prefix entirely.  Because log4j2 uses a system property 
named "log4j.configurationFile" whereas log4j 1.x used a property named 
"log4j.configuration", this seems like it might be a safe option.  Side note: 
if the "file:" prefix is removed from the sysprop in log4j 1.x, it doesn't 
work.  I have first-hand experience on Linux that confirms this.  I haven't 
tried it on Windows.
 * Change the prefix to "file:///", which seems to be Java's preferred notation 
for absolute paths on Windows.  It doesn't seem to be possible to support a 
relative path on Windows when using the file: URI syntax.  I would need to do 
testing to see whether it's possible without the URI syntax.  Based on how this 
problem behaves, I think it would be possible to use a relative path as long as 
it's a bare path and not a URI.

For the bash script, we don't need to make any changes at this time, but I 
can't guarantee that changes won't be required in the future.  Currently, a 
relative path can be supported with the file: URI syntax on POSIX platforms, at 
least as long as it doesn't include the double or triple slash.  If we choose 
to remove the file: prefix for Windows, we should probably test the same change 
for the shell script.


> log4j exceptions during startup on Windows
> --
>
> Key: SOLR-12538
> URL: https://issues.apache.org/jira/browse/SOLR-12538
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 7.4
>Reporter: Jakob Furrer
>Priority: Minor
>
> Note that there has been some input regarding this issue on the Solr 
> mailing list:
>  
> [http://lucene.472066.n3.nabble.com/Logging-fails-when-starting-Solr-in-Windows-using-solr-cmd-td4396671.html]
> Problem description 
>  ==
> System: Microsoft Windows 10 Enterprise Version 10.0.16299 Build 16299
> Steps to reproduce the problem: 
>  1) Download solr-7.4.0.zip
> 2) Unzip to C:\solr-7.4.0
> 3) No changes (configuration or otherwise) whatsoever
> 4) Open cmd.exe
> 5) Execute the following command: *cd c:\solr-7.4.0\bin*
> 6) Execute the following command: *solr.cmd start -p 8983*
> 7) The following console output appears:
> {code:java}
> c:\solr-7.4.0\bin>solr.cmd start -p 8983 
> ERROR StatusLogger Unable to access 
> file:/c:/solr-7.4.0/server/file:c:/solr-7.4.0/server/scripts/cloud-scripts/log4j2.xml
>  
>   java.io.FileNotFoundException: 
> c:\solr-7.4.0\server\file:c:\solr-7.4.0\server\scripts\cloud-scripts\log4j2.xml
>  
> (Die Syntax für den Dateinamen, Verzeichnisnamen oder die 
> Datenträgerbezeichnung ist falsch) 
>          at java.io.FileInputStream.open0(Native Method) 
>          at java.io.FileInputStream.open(FileInputStream.java:195) 
>          at java.io.FileInputStream.<init>(FileInputStream.java:138) 
>          at java.io.FileInputStream.<init>(FileInputStream.java:93) 
>          at 
> sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)
>  
>          at 

[jira] [Commented] (SOLR-12538) log4j exceptions during startup on Windows

2018-07-06 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535292#comment-16535292
 ] 

Shawn Heisey commented on SOLR-12538:
-

Transferring info from discussion on the log4j list:

I think there are two possible solutions for the Windows command script:

 * Remove the file: prefix entirely.  Because log4j2 uses a system property 
named "log4j.configurationFile" whereas log4j 1.x used a property named 
"log4j.configuration", this seems like it might be a safe option.  Side note: 
if the "file:" prefix is removed from the sysprop in log4j 1.x, it doesn't 
work.  I have first-hand experience on Linux that confirms this.  I haven't 
tried it on Windows.
 * Change the prefix to "file:///", which seems to be Java's preferred notation 
for absolute paths on Windows.  It doesn't seem to be possible to support a 
relative path on Windows when using the file: URI syntax.  I would need to do 
testing to see whether it's possible without the URI syntax.  Based on how this 
problem behaves, I think it would be possible to use a relative path as long as 
it's a bare path and not a URI.

For the bash script, we don't need to make any changes at this time, but I 
can't guarantee that changes won't be required in the future.  Currently, a 
relative path can be supported with the file: URI syntax on POSIX platforms, at 
least as long as it doesn't include the double or triple slash.  If we choose 
to remove the file: prefix for Windows, we should probably test the same change 
for the shell script.
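
As a standalone illustration of the two notations above (plain JDK classes, not 
the Solr scripts; the Windows path is taken from the issue but otherwise 
hypothetical):

{code:java}
import java.net.URI;
import java.nio.file.Path;
import java.nio.file.Paths;

public class Log4jConfigPathForms {
  public static void main(String[] args) {
    // Bare path, no scheme: resolved as an ordinary (possibly relative) file path.
    Path bare = Paths.get("server/resources/log4j2.xml");
    System.out.println(bare.toAbsolutePath());

    // file:/// form: an absolute hierarchical URI; the drive letter lives in the path.
    URI triple = URI.create("file:///C:/solr-7.4.0/server/resources/log4j2.xml");
    System.out.println(triple.getPath()); // /C:/solr-7.4.0/server/resources/log4j2.xml

    // file:C:/... (no slashes) parses as an *opaque* URI with no path component at all,
    // which is one reason a bare "file:" prefix plus a drive letter is fragile.
    URI opaque = URI.create("file:C:/solr-7.4.0/server/resources/log4j2.xml");
    System.out.println(opaque.isOpaque()); // true
    System.out.println(opaque.getPath());  // null
  }
}
{code}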


> log4j exceptions during startup on Windows
> --
>
> Key: SOLR-12538
> URL: https://issues.apache.org/jira/browse/SOLR-12538
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 7.4
>Reporter: Jakob Furrer
>Priority: Minor
>
> Note that there has been some input regarding this issue on the Solr 
> mailing list:
>  
> [http://lucene.472066.n3.nabble.com/Logging-fails-when-starting-Solr-in-Windows-using-solr-cmd-td4396671.html]
> Problem description 
>  ==
> System: Microsoft Windows 10 Enterprise Version 10.0.16299 Build 16299
> Steps to reproduce the problem: 
>  1) Download solr-7.4.0.zip
> 2) Unzip to C:\solr-7.4.0
> 3) No changes (configuration or otherwise) whatsoever
> 4) Open cmd.exe
> 5) Execute the following command: *cd c:\solr-7.4.0\bin*
> 6) Execute the following command: *solr.cmd start -p 8983*
> 7) The following console output appears:
> {code:java}
> c:\solr-7.4.0\bin>solr.cmd start -p 8983 
> ERROR StatusLogger Unable to access 
> file:/c:/solr-7.4.0/server/file:c:/solr-7.4.0/server/scripts/cloud-scripts/log4j2.xml
>  
>   java.io.FileNotFoundException: 
> c:\solr-7.4.0\server\file:c:\solr-7.4.0\server\scripts\cloud-scripts\log4j2.xml
>  
> (Die Syntax für den Dateinamen, Verzeichnisnamen oder die 
> Datenträgerbezeichnung ist falsch) 
>          at java.io.FileInputStream.open0(Native Method) 
>          at java.io.FileInputStream.open(FileInputStream.java:195) 
>          at java.io.FileInputStream.<init>(FileInputStream.java:138) 
>          at java.io.FileInputStream.<init>(FileInputStream.java:93) 
>          at 
> sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)
>  
>          at 
> sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)
>  
>          at java.net.URL.openStream(URL.java:1045) 
>          at 
> org.apache.logging.log4j.core.config.ConfigurationSource.fromUri(ConfigurationSource.java:247)
>  
>          at 
> org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:404)
>  
>          at 
> org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:346)
>  
>          at 
> org.apache.logging.log4j.core.config.ConfigurationFactory.getConfiguration(ConfigurationFactory.java:260)
>  
>          at 
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:615)
>  
>          at 
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:636)
>  
>          at 
> org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:231) 
>          at 
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:153)
>  
>          at 
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:45)
>  
>          at 
> org.apache.logging.log4j.LogManager.getContext(LogManager.java:194) 
>          at 
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:121)
>  
>          at 
> 

[jira] [Comment Edited] (SOLR-12008) Settle a location for the "correct" log4j2.xml file.

2018-07-06 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535270#comment-16535270
 ] 

Shawn Heisey edited comment on SOLR-12008 at 7/6/18 7:58 PM:
-

I will fiddle with it momentarily.  Windows 7 enterprise.

I will update SOLR-12538 with some detail gleaned from discussion on the log4j 
list.



was (Author: elyograg):
I will fiddle with it momentarily.  Windows 7 enterprise.

I will update SOLR-12358 with some detail gleaned from discussion on the log4j 
list.


> Settle a location for the "correct" log4j2.xml file.
> 
>
> Key: SOLR-12008
> URL: https://issues.apache.org/jira/browse/SOLR-12008
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-12008.patch, SOLR-12008.patch, SOLR-12008.patch
>
>
> As part of SOLR-11934 I started looking at log4j.properties files. Waaay back 
> in 2015, the %C in "/solr/server/resources/log4j.properties" was changed to 
> use %c, but the file in "solr/example/resources/log4j.properties" was not 
> changed. That got me to looking around and there are a bunch of 
> log4j.properties files:
> ./solr/core/src/test-files/log4j.properties
> ./solr/example/resources/log4j.properties
> ./solr/solrj/src/test-files/log4j.properties
> ./solr/server/resources/log4j.properties
> ./solr/server/scripts/cloud-scripts/log4j.properties
> ./solr/contrib/dataimporthandler/src/test-files/log4j.properties
> ./solr/contrib/clustering/src/test-files/log4j.properties
> ./solr/contrib/ltr/src/test-files/log4j.properties
> ./solr/test-framework/src/test-files/log4j.properties
> Why do we have so many? After the log4j2 ticket gets checked in (SOLR-7887) I 
> propose the logging configuration files get consolidated. The question is 
> "how far"? 
> I at least want to get rid of the one in solr/example; users should use the 
> one in server/resources. Having to maintain these two separately is asking 
> for trouble.
> [~markrmil...@gmail.com] Do you have any wisdom on the properties file in 
> server/scripts/cloud-scripts?
> Anyone else who has a clue about why the other properties files were created, 
> especially the ones in contrib?
> And what about all the ones in various test-files directories? People didn't 
> create them for no reason, and I don't want to rediscover that it's a real 
> pain to try to re-use the one in server/resources for instance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12008) Settle a location for the "correct" log4j2.xml file.

2018-07-06 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535270#comment-16535270
 ] 

Shawn Heisey commented on SOLR-12008:
-

I will fiddle with it momentarily.  Windows 7 enterprise.

I will update SOLR-12358 with some detail gleaned from discussion on the log4j 
list.


> Settle a location for the "correct" log4j2.xml file.
> 
>
> Key: SOLR-12008
> URL: https://issues.apache.org/jira/browse/SOLR-12008
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-12008.patch, SOLR-12008.patch, SOLR-12008.patch
>
>
> As part of SOLR-11934 I started looking at log4j.properties files. Waaay back 
> in 2015, the %C in "/solr/server/resources/log4j.properties" was changed to 
> use %c, but the file in "solr/example/resources/log4j.properties" was not 
> changed. That got me to looking around and there are a bunch of 
> log4j.properties files:
> ./solr/core/src/test-files/log4j.properties
> ./solr/example/resources/log4j.properties
> ./solr/solrj/src/test-files/log4j.properties
> ./solr/server/resources/log4j.properties
> ./solr/server/scripts/cloud-scripts/log4j.properties
> ./solr/contrib/dataimporthandler/src/test-files/log4j.properties
> ./solr/contrib/clustering/src/test-files/log4j.properties
> ./solr/contrib/ltr/src/test-files/log4j.properties
> ./solr/test-framework/src/test-files/log4j.properties
> Why do we have so many? After the log4j2 ticket gets checked in (SOLR-7887) I 
> propose the logging configuration files get consolidated. The question is 
> "how far"? 
> I at least want to get rid of the one in solr/example; users should use the 
> one in server/resources. Having to maintain these two separately is asking 
> for trouble.
> [~markrmil...@gmail.com] Do you have any wisdom on the properties file in 
> server/scripts/cloud-scripts?
> Anyone else who has a clue about why the other properties files were created, 
> especially the ones in contrib?
> And what about all the ones in various test-files directories? People didn't 
> create them for no reason, and I don't want to rediscover that it's a real 
> pain to try to re-use the one in server/resources for instance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12538) log4j exceptions during startup on Windows

2018-07-06 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535251#comment-16535251
 ] 

Erick Erickson commented on SOLR-12538:
---

OK, let me take a whack at fixing this along with SOLR-12008. IIUC, the fix (and 
I don't have a Windows machine to test with, help!) is to substitute 
"file:///" for all the occurrences of "file:" in the Windows command files, 
correct?

[~shawn.mccorkell] It'd be great if you could pull down the patch on SOLR-12008 
and try it. NOTE: you can't just extract the solr.cmd file and use that since 
the log4j files have moved around and the ones referenced in the current 
solr.cmd are removed. You'll have to apply the patch and compile.

> log4j exceptions during startup on Windows
> --
>
> Key: SOLR-12538
> URL: https://issues.apache.org/jira/browse/SOLR-12538
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 7.4
>Reporter: Jakob Furrer
>Priority: Minor
>
> Note that there has been some input regarding this issue on the Solr 
> mailing list:
>  
> [http://lucene.472066.n3.nabble.com/Logging-fails-when-starting-Solr-in-Windows-using-solr-cmd-td4396671.html]
> Problem description 
>  ==
> System: Microsoft Windows 10 Enterprise Version 10.0.16299 Build 16299
> Steps to reproduce the problem: 
>  1) Download solr-7.4.0.zip
> 2) Unzip to C:\solr-7.4.0
> 3) No changes (configuration or otherwise) whatsoever
> 4) Open cmd.exe
> 5) Execute the following command: *cd c:\solr-7.4.0\bin*
> 6) Execute the following command: *solr.cmd start -p 8983*
> 7) The following console output appears:
> {code:java}
> c:\solr-7.4.0\bin>solr.cmd start -p 8983 
> ERROR StatusLogger Unable to access 
> file:/c:/solr-7.4.0/server/file:c:/solr-7.4.0/server/scripts/cloud-scripts/log4j2.xml
>  
>   java.io.FileNotFoundException: 
> c:\solr-7.4.0\server\file:c:\solr-7.4.0\server\scripts\cloud-scripts\log4j2.xml
>  
> (Die Syntax für den Dateinamen, Verzeichnisnamen oder die 
> Datenträgerbezeichnung ist falsch) 
>          at java.io.FileInputStream.open0(Native Method) 
>          at java.io.FileInputStream.open(FileInputStream.java:195) 
>          at java.io.FileInputStream.<init>(FileInputStream.java:138) 
>          at java.io.FileInputStream.<init>(FileInputStream.java:93) 
>          at 
> sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)
>  
>          at 
> sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)
>  
>          at java.net.URL.openStream(URL.java:1045) 
>          at 
> org.apache.logging.log4j.core.config.ConfigurationSource.fromUri(ConfigurationSource.java:247)
>  
>          at 
> org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:404)
>  
>          at 
> org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:346)
>  
>          at 
> org.apache.logging.log4j.core.config.ConfigurationFactory.getConfiguration(ConfigurationFactory.java:260)
>  
>          at 
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:615)
>  
>          at 
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:636)
>  
>          at 
> org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:231) 
>          at 
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:153)
>  
>          at 
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:45)
>  
>          at 
> org.apache.logging.log4j.LogManager.getContext(LogManager.java:194) 
>          at 
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:121)
>  
>          at 
> org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:43)
>  
>          at 
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:46)
>  
>          at 
> org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:29)
>  
>          at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:358) 
>          at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383) 
>          at org.apache.solr.util.SolrCLI.(SolrCLI.java:228) 
> ERROR StatusLogger Unable to access 
> file:/c:/solr-7.4.0/server/file:c:/solr-7.4.0/server/resources/log4j2.xml 
>   java.io.FileNotFoundException: 
> c:\solr-7.4.0\server\file:c:\solr-7.4.0\server\resources\log4j2.xml (The 
> filename, directory name, or volume label syntax is incorrect) 
>          at java.io.FileInputStream.open0(Native Method) 
>          at 

[jira] [Commented] (SOLR-12008) Settle a location for the "correct" log4j2.xml file.

2018-07-06 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535250#comment-16535250
 ] 

Erick Erickson commented on SOLR-12008:
---

I think this is ready, unless I made some silly error. Places that I could use 
some help testing:

- Windows solr.bat files.
- snapshotscli.sh
- solr-exporter
- DIH

The thing to watch out for is whether any log file gets created somewhere very 
funky, like a literal {{ ${sys:solr.log.dir}/solr.log }} path (i.e., the property was never substituted).
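
(As a concrete check, a purely illustrative sketch that is not part of the patch: if 
the start script forgets to pass -Dsolr.log.dir, a {{${sys:solr.log.dir}}} lookup in 
log4j2.xml has nothing to resolve, and the literal text can end up as an on-disk 
directory name.)

{code:java}
// Illustrative only: confirm the property the log4j2 config expects is actually set.
String logDir = System.getProperty("solr.log.dir");
if (logDir == null) {
  System.err.println("solr.log.dir not set -- look for a literal '${sys:solr.log.dir}' directory");
} else {
  System.out.println("solr.log should land under: " + logDir);
}
{code}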

The problem here is what I described above; it just means that the script 
should point to server/resources/log4j2-console.xml.

I messed up the Windows commands earlier and they all pointed to log4j2.xml, 
which would have put the log files somewhere wrong.

Since this isn't pushed yet, it is not responsible for SOLR-12538. However, 
since I'm in here anyway, I'll incorporate the "file:///" form in the log4j config 
references in the Windows command files and maybe kill two birds with one stone.

The first patch today is without any attempt to fix SOLR-12538, the second 
_does_ try to fix it.

[~varunthacker] I couldn't reproduce your issue with IntelliJ.

I'd really like to push this early next week, so if someone who has Windows 
readily available could give it a whirl I'd appreciate it.

I haven't run precommit or the tests since these latest changes; I'll be doing 
that momentarily.

> Settle a location for the "correct" log4j2.xml file.
> 
>
> Key: SOLR-12008
> URL: https://issues.apache.org/jira/browse/SOLR-12008
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-12008.patch, SOLR-12008.patch, SOLR-12008.patch
>
>
> As part of SOLR-11934 I started looking at log4j.properties files. Waaay back 
> in 2015, the %C in "/solr/server/resources/log4j.properties" was changed to 
> use %c, but the file in "solr/example/resources/log4j.properties" was not 
> changed. That got me to looking around and there are a bunch of 
> log4j.properties files:
> ./solr/core/src/test-files/log4j.properties
> ./solr/example/resources/log4j.properties
> ./solr/solrj/src/test-files/log4j.properties
> ./solr/server/resources/log4j.properties
> ./solr/server/scripts/cloud-scripts/log4j.properties
> ./solr/contrib/dataimporthandler/src/test-files/log4j.properties
> ./solr/contrib/clustering/src/test-files/log4j.properties
> ./solr/contrib/ltr/src/test-files/log4j.properties
> ./solr/test-framework/src/test-files/log4j.properties
> Why do we have so many? After the log4j2 ticket gets checked in (SOLR-7887) I 
> propose the logging configuration files get consolidated. The question is 
> "how far"? 
> I at least want to get rid of the one in solr/example, users should use the 
> one in server/resources. Having to maintain these two separately is asking 
> for trouble.
> [~markrmil...@gmail.com] Do you have any wisdom on the properties file in 
> server/scripts/cloud-scripts?
> Anyone else who has a clue about why the other properties files were created, 
> especially the ones in contrib?
> And what about all the ones in various test-files directories? People didn't 
> create them for no reason, and I don't want to rediscover that it's a real 
> pain to try to re-use the one in server/resources for instance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12008) Settle a location for the "correct" log4j2.xml file.

2018-07-06 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-12008:
--
Attachment: SOLR-12008.patch

> Settle a location for the "correct" log4j2.xml file.
> 
>
> Key: SOLR-12008
> URL: https://issues.apache.org/jira/browse/SOLR-12008
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-12008.patch, SOLR-12008.patch, SOLR-12008.patch
>
>
> As part of SOLR-11934 I started looking at log4j.properties files. Waaay back 
> in 2015, the %C in "/solr/server/resources/log4j.properties" was changed to 
> use %c, but the file in "solr/example/resources/log4j.properties" was not 
> changed. That got me to looking around and there are a bunch of 
> log4j.properties files:
> ./solr/core/src/test-files/log4j.properties
> ./solr/example/resources/log4j.properties
> ./solr/solrj/src/test-files/log4j.properties
> ./solr/server/resources/log4j.properties
> ./solr/server/scripts/cloud-scripts/log4j.properties
> ./solr/contrib/dataimporthandler/src/test-files/log4j.properties
> ./solr/contrib/clustering/src/test-files/log4j.properties
> ./solr/contrib/ltr/src/test-files/log4j.properties
> ./solr/test-framework/src/test-files/log4j.properties
> Why do we have so many? After the log4j2 ticket gets checked in (SOLR-7887) I 
> propose the logging configuration files get consolidated. The question is 
> "how far"? 
> I at least want to get rid of the one in solr/example, users should use the 
> one in server/resources. Having to maintain these two separately is asking 
> for trouble.
> [~markrmil...@gmail.com] Do you have any wisdom on the properties file in 
> server/scripts/cloud-scripts?
> Anyone else who has a clue about why the other properties files were created, 
> especially the ones in contrib?
> And what about all the ones in various test-files directories? People didn't 
> create them for no reason, and I don't want to rediscover that it's a real 
> pain to try to re-use the one in server/resources for instance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 255 - Still Failing

2018-07-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/255/

No tests ran.

Build Log:
[...truncated 24201 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2227 links (1778 relative) to 3000 anchors in 230 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/changes

-dist-keys:
  [get] Getting: http://home.apache.org/keys/group/lucene.asc
  [get] To: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/KEYS

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/solr-7.5.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:

[jira] [Updated] (SOLR-12008) Settle a location for the "correct" log4j2.xml file.

2018-07-06 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-12008:
--
Attachment: SOLR-12008.patch

> Settle a location for the "correct" log4j2.xml file.
> 
>
> Key: SOLR-12008
> URL: https://issues.apache.org/jira/browse/SOLR-12008
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-12008.patch, SOLR-12008.patch
>
>
> As part of SOLR-11934 I started looking at log4j.properties files. Waaay back 
> in 2015, the %C in "/solr/server/resources/log4j.properties" was changed to 
> use %c, but the file in "solr/example/resources/log4j.properties" was not 
> changed. That got me to looking around and there are a bunch of 
> log4j.properties files:
> ./solr/core/src/test-files/log4j.properties
> ./solr/example/resources/log4j.properties
> ./solr/solrj/src/test-files/log4j.properties
> ./solr/server/resources/log4j.properties
> ./solr/server/scripts/cloud-scripts/log4j.properties
> ./solr/contrib/dataimporthandler/src/test-files/log4j.properties
> ./solr/contrib/clustering/src/test-files/log4j.properties
> ./solr/contrib/ltr/src/test-files/log4j.properties
> ./solr/test-framework/src/test-files/log4j.properties
> Why do we have so many? After the log4j2 ticket gets checked in (SOLR-7887) I 
> propose the logging configuration files get consolidated. The question is 
> "how far"? 
> I at least want to get rid of the one in solr/example, users should use the 
> one in server/resources. Having to maintain these two separately is asking 
> for trouble.
> [~markrmil...@gmail.com] Do you have any wisdom on the properties file in 
> server/scripts/cloud-scripts?
> Anyone else who has a clue about why the other properties files were created, 
> especially the ones in contrib?
> And what about all the ones in various test-files directories? People didn't 
> create them for no reason, and I don't want to rediscover that it's a real 
> pain to try to re-use the one in server/resources for instance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 256 - Unstable

2018-07-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/256/

1 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove

Error Message:
No live SolrServers available to handle this 
request:[https://127.0.0.1:52485/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:37237/solr/MoveReplicaHDFSTest_failed_coll_true]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[https://127.0.0.1:52485/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:37237/solr/MoveReplicaHDFSTest_failed_coll_true]
at 
__randomizedtesting.SeedInfo.seed([99FDE7D02B778ECA:333034229CA45B1A]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:993)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:288)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Resolved] (LUCENE-8388) Deprecate and remove PostingsEnum#attributes()

2018-07-06 Thread Alan Woodward (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved LUCENE-8388.
---
   Resolution: Fixed
Fix Version/s: 7.5
   master (8.0)

> Deprecate and remove PostingsEnum#attributes()
> --
>
> Key: LUCENE-8388
> URL: https://issues.apache.org/jira/browse/LUCENE-8388
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: LUCENE-8388-7x.patch, LUCENE-8388.patch
>
>
> This method isn't used anywhere in the codebase, and seems to be entirely 
> useless.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8388) Deprecate and remove PostingsEnum#attributes()

2018-07-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535138#comment-16535138
 ] 

ASF subversion and git services commented on LUCENE-8388:
-

Commit 9a6706ed32646e74fb64a8b2caa05fd6bc7e8a35 in lucene-solr's branch 
refs/heads/branch_7x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9a6706e ]

LUCENE-8388: Deprecate PostingsEnum#attributes()


> Deprecate and remove PostingsEnum#attributes()
> --
>
> Key: LUCENE-8388
> URL: https://issues.apache.org/jira/browse/LUCENE-8388
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8388-7x.patch, LUCENE-8388.patch
>
>
> This method isn't used anywhere in the codebase, and seems to be entirely 
> useless.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8388) Deprecate and remove PostingsEnum#attributes()

2018-07-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535139#comment-16535139
 ] 

ASF subversion and git services commented on LUCENE-8388:
-

Commit 6d6e67140b44dfb45bd8aadc58e3b8bfb79f5016 in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6d6e671 ]

LUCENE-8388: Remove PostingsEnum#attributes()


> Deprecate and remove PostingsEnum#attributes()
> --
>
> Key: LUCENE-8388
> URL: https://issues.apache.org/jira/browse/LUCENE-8388
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8388-7x.patch, LUCENE-8388.patch
>
>
> This method isn't used anywhere in the codebase, and seems to be entirely 
> useless.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_172) - Build # 7403 - Still Unstable!

2018-07-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7403/
Java: 64bit/jdk1.8.0_172 -XX:+UseCompressedOops -XX:+UseParallelGC

8 tests failed.
FAILED:  org.apache.solr.cloud.cdcr.CdcrBidirectionalTest.testBiDir

Error Message:
Captured an uncaught exception in thread: Thread[id=27482, 
name=cdcr-replicator-11152-thread-1, state=RUNNABLE, 
group=TGRP-CdcrBidirectionalTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=27482, name=cdcr-replicator-11152-thread-1, 
state=RUNNABLE, group=TGRP-CdcrBidirectionalTest]
Caused by: java.lang.AssertionError: 1605258586947059712 != 1605258586607321088
at __randomizedtesting.SeedInfo.seed([EA490BEBDFE12ECC]:0)
at 
org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.forwardSeek(CdcrUpdateLog.java:611)
at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:125)
at 
org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.handler.TestSQLHandler.doTest

Error Message:
--> http://127.0.0.1:56436/collection1_shard2_replica_n41:Failed to execute 
sqlQuery 'select id, field_i, str_s from collection1 where (text='()' OR 
text='') AND text='' order by field_i desc' against JDBC connection 
'jdbc:calcitesolr:'. Error while executing SQL "select id, field_i, str_s from 
collection1 where (text='()' OR text='') AND text='' order by 
field_i desc": java.io.IOException: java.util.concurrent.ExecutionException: 
java.io.IOException: --> 
http://127.0.0.1:56436/collection1_shard2_replica_n41/:id must have DocValues 
to use this feature.

Stack Trace:
java.io.IOException: --> 
http://127.0.0.1:56436/collection1_shard2_replica_n41:Failed to execute 
sqlQuery 'select id, field_i, str_s from collection1 where (text='()' OR 
text='') AND text='' order by field_i desc' against JDBC connection 
'jdbc:calcitesolr:'.
Error while executing SQL "select id, field_i, str_s from collection1 where 
(text='()' OR text='') AND text='' order by field_i desc": 
java.io.IOException: java.util.concurrent.ExecutionException: 
java.io.IOException: --> 
http://127.0.0.1:56436/collection1_shard2_replica_n41/:id must have DocValues 
to use this feature.
at 
__randomizedtesting.SeedInfo.seed([EA490BEBDFE12ECC:4D0DB34FB25A3D75]:0)
at 
org.apache.solr.client.solrj.io.stream.SolrStream.read(SolrStream.java:217)
at 
org.apache.solr.handler.TestSQLHandler.getTuples(TestSQLHandler.java:2523)
at 
org.apache.solr.handler.TestSQLHandler.testBasicSelect(TestSQLHandler.java:125)
at org.apache.solr.handler.TestSQLHandler.doTest(TestSQLHandler.java:83)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1008)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

Re: lucene-solr:master: SOLR-12427: Correct status for invalid 'start', 'rows'

2018-07-06 Thread Chris Hostetter


these tests should really be using...

  SolrException e = expectThrows(SolrException.class, () -> {...});

...and ideally we should be making assertions about the exception message 
as well (ie: does it say what we expect it to say? does it give the user 
the context of the failure -- ie: does it contain the "non_numeric_value" so 
they know what they did wrong?)


:private void validateCommonQueryParameters() throws Exception {
:  ignoreException("parameter cannot be negative");
: +
: +try {
: +  SolrQuery query = new SolrQuery();
: +  query.setParam("start", "non_numeric_value").setQuery("*");
: +  QueryResponse resp = query(query);
: +  fail("Expected the last query to fail, but got response: " + resp);
: +} catch (SolrException e) {
: +  assertEquals(ErrorCode.BAD_REQUEST.code, e.code());
: +}
: +
:  try {
:SolrQuery query = new SolrQuery();
:query.setStart(-1).setQuery("*");
: @@ -1228,6 +1238,15 @@ public class TestDistributedSearch extends 
BaseDistributedSearchTestCase {
:  } catch (SolrException e) {
:assertEquals(ErrorCode.BAD_REQUEST.code, e.code());
:  }
: +
: +try {
: +  SolrQuery query = new SolrQuery();
: +  query.setParam("rows", "non_numeric_value").setQuery("*");
: +  QueryResponse resp = query(query);
: +  fail("Expected the last query to fail, but got response: " + resp);
: +} catch (SolrException e) {
: +  assertEquals(ErrorCode.BAD_REQUEST.code, e.code());
: +}
:  resetExceptionIgnores();
:}
:  }
: 
: 
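
For illustration, the first quoted block above could collapse to something like 
this (a sketch only; it assumes the surrounding TestDistributedSearch helpers 
such as query(...), and the message assertion is the part being asked for -- 
the exact wording of the error isn't guaranteed here):

  SolrQuery query = new SolrQuery();
  query.setParam("start", "non_numeric_value").setQuery("*");
  // expectThrows fails the test itself if nothing is thrown, and returns the exception
  SolrException e = expectThrows(SolrException.class, () -> query(query));
  assertEquals(ErrorCode.BAD_REQUEST.code, e.code());
  // give the user context about what was wrong
  assertTrue(e.getMessage(), e.getMessage().contains("non_numeric_value"));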

-Hoss
http://www.lucidworks.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12516) JSON "range" facets can incorrectly refine subfacets for buckets

2018-07-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535103#comment-16535103
 ] 

ASF subversion and git services commented on SOLR-12516:


Commit b3896b4eba8bb820f265205ddd05bccb98cdd801 in lucene-solr's branch 
refs/heads/branch_7x from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b3896b4 ]

SOLR-12516: Fix some bugs in 'type:range' Facet refinement when sub-facets are 
combined with non default values for the 'other' and 'include' options.

1) the optional other buckets (before/after/between) are not considered during 
refinement

2) when using the include option: if edge is specified, then the refinement of 
all range buckets mistakenly includes the lower bound of the range, regardless 
of whether lower was specified.

(cherry picked from commit 7d8ef9e39d3321a5366fcfe1a358ec015fb7b8b1)
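
For context, the request shape affected is roughly the following (a hypothetical 
example; the field and facet names are invented): a {{type:range}} facet using 
non-default {{other}} and {{include}} values, with a sub-facet that needs refinement.

{code:java}
// Hypothetical JSON Facet request of the affected shape (names made up for illustration).
SolrQuery q = new SolrQuery("*:*");
q.set("json.facet",
    "{ prices : { type:\"range\", field:\"price\", start:0, end:100, gap:20,"
  + "             other:\"all\", include:\"edge\","
  + "             facet : { top_cats : { type:\"terms\", field:\"cat\", refine:true } } } }");
{code}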


> JSON "range" facets can incorrectly refine subfacets for buckets
> 
>
> Key: SOLR-12516
> URL: https://issues.apache.org/jira/browse/SOLR-12516
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-12516.patch, SOLR-12516.patch, SOLR-12516.patch, 
> SOLR-12516.patch, SOLR-12516.patch, SOLR-12516.patch, SOLR-12516.patch
>
>
> while simple {{type:range}} facets don't benefit from refinement, because 
> every shard returns the same set of buckets, some bugs currently exist when a 
> range facet contains sub facets that use refinement:
> # the optional {{other}} buckets (before/after/between) are not considered 
> during refinement
> # when using the {{include}} option: if {{edge}} is specified, then the 
> refinement of all range buckets mistakenly includes the lower bound of the 
> range, regardless of whether {{lower}} was specified.
> 
> #1 occurs because {{FacetRangeMerger extends 
> FacetRequestSortedMerger}} ... however {{FacetRangeMerger}} does 
> not override {{getRefinement(...)}} which means only 
> {{FacetRequestSortedMerger.buckets}} is evaluated and considered for 
> refinement. The additional, special purpose, {{FacetBucket}} instances 
> tracked in {{FacetRangeMerger}} are never considered for refinement.
> #2 exists because of a mistake in the implementation of {{refineBucket}} and 
> how it computes the {{start}} value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-6010) Wrong highlighting while querying by date range with wild card in the end range

2018-07-06 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-6010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley closed SOLR-6010.
--

> Wrong highlighting while querying by date range with wild card in the end 
> range
> ---
>
> Key: SOLR-6010
> URL: https://issues.apache.org/jira/browse/SOLR-6010
> Project: Solr
>  Issue Type: Bug
>  Components: highlighter, query parsers
>Affects Versions: 4.0
> Environment: java version "1.7.0_45"
> Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
> Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)
> Linux 3.2.0-23-generic #36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012 x86_64 
> x86_64 x86_64 GNU/Linux
>Reporter: Mohammad Abul Khaer
>Priority: Major
>  Labels: date, highlighting, range, solr
>
> Solr is returning wrong highlights when I have a date range query with wild 
> card *in the end range*. For example my query *q* is
> {noformat}
> (porta)+activatedate:[* TO 
> 2014-04-24T09:55:00Z]+expiredate:[2014-04-24T09:55:00Z TO *]
> {noformat}
> In the above query activatedate, expiredate are date fields. Their definition 
> in schema file is as follows
> {code}
> omitNorms="true"/>
> omitNorms="true"/>
> {code}
> In the query result I am getting wrong highlighting information. Only the 
> highlighting result is shown below
> {code}
>  "highlighting": {
> "article:3605": {
>   "title": [
> "The creative headline of this story 
> really says it all"
>   ],
>   "summary": [
> "Etiam porta sem malesuada 
> magna mollis euismod aenean eu 
> leo quam. Pellentesque ornare 
> sem lacinia quam."
>   ]
> },
> "article:3604": {
>   "title": [
> "The creative headline of this story 
> really says it all"
>   ],
>   "summary": [
> "Etiam porta sem malesuada 
> magna mollis euismod aenean eu 
> leo quam. Pellentesque ornare 
> sem lacinia quam.."
>   ]
> }
> }
> {code}
> It should highlight only the word *story*, but it is highlighting a lot of other 
> words too. What I noticed is that this happens only if I have a wildcard * in the 
> end range. If I change the above query and set a fixed date in the end range 
> instead of *, then Solr returns correct highlights. The modified query is shown 
> below - 
> {noformat}
> (porta)+activatedate:[* TO 
> 2014-04-24T09:55:00Z]+expiredate:[2014-04-24T09:55:00Z TO 
> 3014-04-24T09:55:00Z]
> {noformat}
> I guess it's a bug in Solr. If I use the filter query *fq* instead of the normal 
> query *q*, then the highlighting result is OK for both queries.
> *Update*
> If I use a specific date instead of *, it still returns wrong highlights. This 
> time it highlights numbers as well. Say I am searching for the word *math*; then 
> it also highlights numbers along with *math*. For example, if the title of my 
> article is *Mathematics 1234*, then it highlights *1234* as well as *math*.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6010) Wrong highlighting while querying by date range with wild card in the end range

2018-07-06 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-6010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-6010.

Resolution: Not A Problem

While I didn't try to reproduce this, I think the problem won't occur in Solr 7 
with DatePointField (which replaces the older TrieDateField).  Even with 
TrieDateField, if you use hl.method=unified (the UnifiedHighlighter) in 6.x this 
problem would not occur.  It wouldn't happen with the fast vector highlighter 
either.  I could imagine the original highlighter still exhibits, or used to 
exhibit, this problem.  Varun's right about hl.requireFieldMatch, which probably 
should have defaulted to true but alas defaults to false.  You probably got no 
results because of a mismatch between the field you are highlighting and the 
field referenced in your query.

I'm going to mark this closed.
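
For anyone hitting this on an older setup, a minimal SolrJ sketch of the settings 
mentioned above (illustrative only; {{solrClient}} and the collection name are 
placeholders, and the field names are taken from the report):

{code:java}
SolrQuery q = new SolrQuery("(porta) +activatedate:[* TO 2014-04-24T09:55:00Z] +expiredate:[2014-04-24T09:55:00Z TO *]");
q.setHighlight(true);
q.set("hl.fl", "title,summary");
q.set("hl.method", "unified");          // use the UnifiedHighlighter (6.x and later)
q.set("hl.requireFieldMatch", "true");  // only highlight terms from the field actually queried
QueryResponse rsp = solrClient.query("collection1", q);
{code}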

> Wrong highlighting while querying by date range with wild card in the end 
> range
> ---
>
> Key: SOLR-6010
> URL: https://issues.apache.org/jira/browse/SOLR-6010
> Project: Solr
>  Issue Type: Bug
>  Components: highlighter, query parsers
>Affects Versions: 4.0
> Environment: java version "1.7.0_45"
> Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
> Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)
> Linux 3.2.0-23-generic #36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012 x86_64 
> x86_64 x86_64 GNU/Linux
>Reporter: Mohammad Abul Khaer
>Priority: Major
>  Labels: date, highlighting, range, solr
>
> Solr is returning wrong highlights when I have a date range query with wild 
> card *in the end range*. For example my query *q* is
> {noformat}
> (porta)+activatedate:[* TO 
> 2014-04-24T09:55:00Z]+expiredate:[2014-04-24T09:55:00Z TO *]
> {noformat}
> In the above query activatedate, expiredate are date fields. Their definition 
> in schema file is as follows
> {code}
> omitNorms="true"/>
> omitNorms="true"/>
> {code}
> In the query result I am getting wrong highlighting information. Only the 
> highlighting result is shown below
> {code}
>  "highlighting": {
> "article:3605": {
>   "title": [
> "The creative headline of this story 
> really says it all"
>   ],
>   "summary": [
> "Etiam porta sem malesuada 
> magna mollis euismod aenean eu 
> leo quam. Pellentesque ornare 
> sem lacinia quam."
>   ]
> },
> "article:3604": {
>   "title": [
> "The creative headline of this story 
> really says it all"
>   ],
>   "summary": [
> "Etiam porta sem malesuada 
> magna mollis euismod aenean eu 
> leo quam. Pellentesque ornare 
> sem lacinia quam.."
>   ]
> }
> }
> {code}
> It should highlight only the word *story*, but it is highlighting a lot of other 
> words too. What I noticed is that this happens only if I have a wildcard * in the 
> end range. If I change the above query and set a fixed date in the end range 
> instead of *, then Solr returns correct highlights. The modified query is shown 
> below - 
> {noformat}
> (porta)+activatedate:[* TO 
> 2014-04-24T09:55:00Z]+expiredate:[2014-04-24T09:55:00Z TO 
> 3014-04-24T09:55:00Z]
> {noformat}
> I guess it's a bug in Solr. If I use the filter query *fq* instead of the normal 
> query *q*, then the highlighting result is OK for both queries.
> *Update*
> If I use a specific date instead of *, it still returns wrong highlights. This 
> time it highlights numbers as well. Say I am searching for the word *math*; then 
> it also highlights numbers along with *math*. For example, if the title of my 
> article is *Mathematics 1234*, then it highlights *1234* as well as *math*.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12516) JSON "range" facets can incorrectly refine subfacets for buckets

2018-07-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535086#comment-16535086
 ] 

ASF subversion and git services commented on SOLR-12516:


Commit 7d8ef9e39d3321a5366fcfe1a358ec015fb7b8b1 in lucene-solr's branch 
refs/heads/master from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7d8ef9e ]

SOLR-12516: Fix some bugs in 'type:range' Facet refinement when sub-facets are 
combined with non default values for the 'other' and 'include' options.

1) the optional other buckets (before/after/between) are not considered during 
refinement

2) when using the include option: if edge is specified, then the refinement of 
all range buckets mistakenly includes the lower bound of the range, regardless 
of whether lower was specified.


> JSON "range" facets can incorrectly refine subfacets for buckets
> 
>
> Key: SOLR-12516
> URL: https://issues.apache.org/jira/browse/SOLR-12516
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-12516.patch, SOLR-12516.patch, SOLR-12516.patch, 
> SOLR-12516.patch, SOLR-12516.patch, SOLR-12516.patch, SOLR-12516.patch
>
>
> while simple {{type:range}} facets don't benefit from refinement, because 
> every shard returns the same set of buckets, some bugs currently exist when a 
> range facet contains sub facets that use refinement:
> # the optional {{other}} buckets (before/after/between) are not considered 
> during refinement
> # when using the {{include}} option: if {{edge}} is specified, then the 
> refinement of all range buckets mistakenly includes the lower bound of the 
> range, regardless of whether {{lower}} was specified.
> 
> #1 occurs because {{FacetRangeMerger extends 
> FacetRequestSortedMerger}} ... however {{FacetRangeMerger}} does 
> not override {{getRefinement(...)}} which means only 
> {{FacetRequestSortedMerger.buckets}} is evaluated and considered for 
> refinement. The additional, special purpose, {{FacetBucket}} instances 
> tracked in {{FacetRangeMerger}} are never considered for refinement.
> #2 exists because of a mistake in the implementation of {{refineBucket}} and 
> how it computes the {{start}} value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-11-ea+21) - Build # 22408 - Unstable!

2018-07-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22408/
Java: 64bit/jdk-11-ea+21 -XX:+UseCompressedOops -XX:+UseSerialGC

73 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestConfigSetsAPI

Error Message:
4 threads leaked from SUITE scope at org.apache.solr.cloud.TestConfigSetsAPI:   
  1) Thread[id=462, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-TestConfigSetsAPI] at 
java.base@11-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@11-ea/java.lang.Thread.run(Thread.java:834)2) 
Thread[id=465, 
name=TEST-TestConfigSetsAPI.testUploadErrors-seed#[B0877905C6CE4FEB]-EventThread,
 state=WAITING, group=TGRP-TestConfigSetsAPI] at 
java.base@11-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@11-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
 at 
java.base@11-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
 at 
java.base@11-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
 at 
app//org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)3) 
Thread[id=466, name=zkConnectionManagerCallback-193-thread-1, state=WAITING, 
group=TGRP-TestConfigSetsAPI] at 
java.base@11-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@11-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
 at 
java.base@11-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
 at 
java.base@11-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
 at 
java.base@11-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
 at 
java.base@11-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 at 
java.base@11-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 at java.base@11-ea/java.lang.Thread.run(Thread.java:834)4) 
Thread[id=464, 
name=TEST-TestConfigSetsAPI.testUploadErrors-seed#[B0877905C6CE4FEB]-SendThread(127.0.0.1:36567),
 state=TIMED_WAITING, group=TGRP-TestConfigSetsAPI] at 
java.base@11-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:105)
 at 
app//org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1000)
 at 
app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1063)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 4 threads leaked from SUITE 
scope at org.apache.solr.cloud.TestConfigSetsAPI: 
   1) Thread[id=462, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-TestConfigSetsAPI]
at java.base@11-ea/java.lang.Thread.sleep(Native Method)
at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
at java.base@11-ea/java.lang.Thread.run(Thread.java:834)
   2) Thread[id=465, 
name=TEST-TestConfigSetsAPI.testUploadErrors-seed#[B0877905C6CE4FEB]-EventThread,
 state=WAITING, group=TGRP-TestConfigSetsAPI]
at java.base@11-ea/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@11-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
at 
java.base@11-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
at 
java.base@11-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
at 
app//org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)
   3) Thread[id=466, name=zkConnectionManagerCallback-193-thread-1, 
state=WAITING, group=TGRP-TestConfigSetsAPI]
at java.base@11-ea/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@11-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
at 
java.base@11-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
at 
java.base@11-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
at 
java.base@11-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
at 
java.base@11-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
at 
java.base@11-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base@11-ea/java.lang.Thread.run(Thread.java:834)
   4) Thread[id=464, 
name=TEST-TestConfigSetsAPI.testUploadErrors-seed#[B0877905C6CE4FEB]-SendThread(127.0.0.1:36567),
 state=TIMED_WAITING, group=TGRP-TestConfigSetsAPI]
at 

[GitHub] lucene-solr issue #363: SOLR-12276

2018-07-06 Thread jdyer1
Github user jdyer1 commented on the issue:

https://github.com/apache/lucene-solr/pull/363
  
@dataminion Also, if you decide to jump in, I have some basic getting 
started information in a readme file here: 


https://github.com/jdyer1/lucene-solr/blob/feature/angular-conversion-solr-admin/solr/webapp/README.md



---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #363: SOLR-12276

2018-07-06 Thread jdyer1
Github user jdyer1 commented on the issue:

https://github.com/apache/lucene-solr/pull/363
  
@dataminion, feel free to jump in.  I am an absolute beginner with Angular 
(new & old) and Typescript, but thought this would be a good learning project.  
So far I've done a fairly rote conversion of:

- Dashboard
- Collections Overview
- Analysis
- Dataimport Handler

There's a lot left to go, and I've been working on it here and there as time 
allows.  Certainly, if this is going to be finished anytime soon, we need more 
help.  If you've got the experience, you can also evaluate my approach here and 
see whether any changes need to be made on that end.



---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8306) Allow iteration over the term positions of a Match

2018-07-06 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534955#comment-16534955
 ] 

David Smiley commented on LUCENE-8306:
--

Overall sounds very good!
What is TermPostingsEnum?  No such class exists in Lucene.  Might it be a 
subclass of PostingsEnum that has a getTerm(), and if so would it vary as you 
iterate an aggregating PostingsEnum or would this Matches.getTermMatches need 
to return a collection of these enums?

> Allow iteration over the term positions of a Match
> --
>
> Key: LUCENE-8306
> URL: https://issues.apache.org/jira/browse/LUCENE-8306
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8306.patch, LUCENE-8306.patch
>
>
> For multi-term queries such as phrase queries, the matches API currently just 
> returns information about the span of the whole match.  It would be useful to 
> also expose information about the matching terms within the phrase.  The same 
> would apply to Spans and Interval queries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8229) Add a method to Weight to retrieve matches for a single document

2018-07-06 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534937#comment-16534937
 ] 

David Smiley commented on LUCENE-8229:
--

I was just looking at Matches.MatchesIteratorSupplier.  It's a shame to need 
mirror images of existing java.util.function interfaces that only differ in 
that they throw IOException.  See org.apache.lucene.util.IOUtils.IOConsumer 
added by [~simonw] recently.  I propose that we add an IOSupplier here and get 
rid of MatchesIteratorSupplier (in a new issue of course).  WDYT?  We ought to 
have a consistent approach in Lucene to this scenario.  I've wanted an 
IOSupplier in Solr for something recently and saw it hadn't been added yet.
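
For reference, the interface being proposed would look roughly like this (a sketch 
of the proposal only, not an existing Lucene class):

{code:java}
import java.io.IOException;

/** Like java.util.function.Supplier, but allowed to throw IOException. */
@FunctionalInterface
public interface IOSupplier<T> {
  T get() throws IOException;
}
{code}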

> Add a method to Weight to retrieve matches for a single document
> 
>
> Key: LUCENE-8229
> URL: https://issues.apache.org/jira/browse/LUCENE-8229
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 7.4
>
> Attachments: LUCENE-8229.patch, LUCENE-8229_small_improvements.patch
>
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> The ability to find out exactly what a query has matched on is a fairly 
> frequent feature request, and would also make highlighters much easier to 
> implement.  There have been a few attempts at doing this, including adding 
> positions to Scorers, or re-writing queries as Spans, but these all either 
> compromise general performance or involve up-front knowledge of all queries.
> Instead, I propose adding a method to Weight that exposes an iterator over 
> matches in a particular document and field.  It should be used in a similar 
> manner to explain() - ie, just for TopDocs, not as part of the scoring loop, 
> which relieves some of the pressure on performance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8388) Deprecate and remove PostingsEnum#attributes()

2018-07-06 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534924#comment-16534924
 ] 

David Smiley commented on LUCENE-8388:
--

+1 thanks for the cleanup.

> Deprecate and remove PostingsEnum#attributes()
> --
>
> Key: LUCENE-8388
> URL: https://issues.apache.org/jira/browse/LUCENE-8388
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8388-7x.patch, LUCENE-8388.patch
>
>
> This method isn't used anywhere in the codebase, and seems to be entirely 
> useless.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8388) Deprecate and remove PostingsEnum#attributes()

2018-07-06 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534900#comment-16534900
 ] 

Uwe Schindler commented on LUCENE-8388:
---

FYI, on PostingsEnum the idea was to replace "payloads" with something 
structured, which was never implemented.

> Deprecate and remove PostingsEnum#attributes()
> --
>
> Key: LUCENE-8388
> URL: https://issues.apache.org/jira/browse/LUCENE-8388
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8388-7x.patch, LUCENE-8388.patch
>
>
> This method isn't used anywhere in the codebase, and seems to be entirely 
> useless.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8388) Deprecate and remove PostingsEnum#attributes()

2018-07-06 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534866#comment-16534866
 ] 

Uwe Schindler commented on LUCENE-8388:
---

Hi, +1 to remove.

Just some background: in Lucene 4 the idea was to have the well-known 
attributes on any internal iterator, so you could store additional information on 
the iterator, as with TokenStreams. As this was never implemented (no callers) 
and we have no way to serialize the attributes, it is not really useful.

On TermsEnum, we currently use attributes in FuzzyQuery to store some term 
metadata (like the fuzzy boost) on the enum. But it is not used anywhere else. 
One idea was to allow storing extra term metadata, like payloads, on the 
terms themselves.

Uwe

> Deprecate and remove PostingsEnum#attributes()
> --
>
> Key: LUCENE-8388
> URL: https://issues.apache.org/jira/browse/LUCENE-8388
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8388-7x.patch, LUCENE-8388.patch
>
>
> This method isn't used anywhere in the codebase, and seems to be entirely 
> useless.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #:

2018-07-06 Thread xiaoshi2013
Github user xiaoshi2013 commented on the pull request:


https://github.com/apache/lucene-solr/commit/01d12777c4bcab7ae8085d5ed5e1b20a0e1a5526#commitcomment-29622290
  
Very good


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8388) Deprecate and remove PostingsEnum#attributes()

2018-07-06 Thread Alan Woodward (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-8388:
--
Attachment: LUCENE-8388-7x.patch

> Deprecate and remove PostingsEnum#attributes()
> --
>
> Key: LUCENE-8388
> URL: https://issues.apache.org/jira/browse/LUCENE-8388
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8388-7x.patch, LUCENE-8388.patch
>
>
> This method isn't used anywhere in the codebase, and seems to be entirely 
> useless.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8388) Deprecate and remove PostingsEnum#attributes()

2018-07-06 Thread Alan Woodward (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534848#comment-16534848
 ] 

Alan Woodward commented on LUCENE-8388:
---

Separate patch deprecating the method for 7x

> Deprecate and remove PostingsEnum#attributes()
> --
>
> Key: LUCENE-8388
> URL: https://issues.apache.org/jira/browse/LUCENE-8388
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8388-7x.patch, LUCENE-8388.patch
>
>
> This method isn't used anywhere in the codebase, and seems to be entirely 
> useless.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8388) Deprecate and remove PostingsEnum#attributes()

2018-07-06 Thread Alan Woodward (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534845#comment-16534845
 ] 

Alan Woodward commented on LUCENE-8388:
---

Patch against master.  This method isn't even called anywhere in tests...

> Deprecate and remove PostingsEnum#attributes()
> --
>
> Key: LUCENE-8388
> URL: https://issues.apache.org/jira/browse/LUCENE-8388
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8388.patch
>
>
> This method isn't used anywhere in the codebase, and seems to be entirely 
> useless.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8388) Deprecate and remove PostingsEnum#attributes()

2018-07-06 Thread Alan Woodward (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-8388:
--
Attachment: LUCENE-8388.patch

> Deprecate and remove PostingsEnum#attributes()
> --
>
> Key: LUCENE-8388
> URL: https://issues.apache.org/jira/browse/LUCENE-8388
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8388.patch
>
>
> This method isn't used anywhere in the codebase, and seems to be entirely 
> useless.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-8389) Could not limit Lucene's memory consumption

2018-07-06 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534836#comment-16534836
 ] 

Uwe Schindler edited comment on LUCENE-8389 at 7/6/18 1:53 PM:
---

You are not precise about which memory is being exhausted, so your question is unclear.

If you give JIRA and Lucene 16 GiB of heap, they will of course use all of it - 
that's fine. This has nothing to do with MMapDirectory (see the blog post), because 
the filesystem cache lives outside the heap: Lucene uses lots of off-heap space, but 
that is only virtual address space and has nothing to do with allocated heap. If you 
run top on your Linux installation you will see a column "RES", which should be 
around the heap size plus a bit of extra memory (around 20 GiB). In addition, top's 
"VIRT" column shows the reserved address space, which will be RES plus the size of 
all open indexes. Depending on index size this can be up to several hundred gigabytes.

The general rule is: keep a MINIMUM of 50% of physical RAM free to allow file 
system caching. So if you use 16 GiB of heap space, you should have at least 32 
GiB, better 48 GiB, of physical RAM in the machine (depending on index size).

So high VM usage in VIRT is wanted and fine. If JIRA is taking lots of heap 
space and really needs 16 GiB, that is not our problem, so ask JIRA support for 
help. I will close this bug report as "Not a bug", as it's not our issue. 

There are rumors that JIRA will update to a later Lucene version; maybe that 
will help. Lucene 3.3 has been out of maintenance for 6 years.
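
For illustration, a minimal sketch against a recent Lucene API (7.x, not the 
Lucene 3.3 bundled with JIRA), using a hypothetical index path: the memory-mapped 
index files show up in top's VIRT column, while RES stays near the configured heap.

    import java.nio.file.Paths;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.MMapDirectory;

    public class MMapVirtDemo {
      public static void main(String[] args) throws Exception {
        // Hypothetical index path; the mapped files count toward VIRT, not the -Xmx heap.
        try (Directory dir = new MMapDirectory(Paths.get("/path/to/index"));
             DirectoryReader reader = DirectoryReader.open(dir)) {
          System.out.println("maxDoc=" + reader.maxDoc()
              + ", heapMax=" + Runtime.getRuntime().maxMemory());
        }
      }
    }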


was (Author: thetaphi):
You are not fully precise what memory is used out, so your question is unclear.

If you give 16 GiB of heap, JIRA and Lucene it will of course use it out - 
thats fine. This has nothing to do with MMapDirectory (see blog post), because 
the filesystem cache is outside of heap, so Lucene uses lots of off-heap space, 
but that's just virtual and has nothing to do with allocated heap space. If you 
run TOP on your Linux installation you will see a column "RES", which should be 
around the heap size plus a bit or extra memory (around 20 GiB) heap. In 
addition the column "VIRT" of top is showing the reserved address space, which 
will be RES plus the size of all open indexes. Depending on index size this can 
be up to several hundreds of gigabytes.

The general rule is: Keep a MINIMUM of 50% of physical RAM free to allow file 
system caching. So if you use 16 GiB cache, you should have at least 32 GiB, 
better 48 GiB of physical RAM in the machine (depending on index size).

So high VM usage in VIRT is wanted and fine. If JIRA is taking lots of heap 
space and really needs 16 GiB this is not our problem, so ask JIRA for help. I 
will close this bug report as "Not a bug", as it's not our issue. 

There are rumors that JIRA will update the Lucene support to a later version, 
maybe this will help. Lucene 3.3 is out of maintenance since 6 years.

> Could not limit Lucene's memory consumption
> ---
>
> Key: LUCENE-8389
> URL: https://issues.apache.org/jira/browse/LUCENE-8389
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 3.3
> Environment: |Java Version|1.8.0_102|
> |Operating System|Linux 3.12.48-52.27-default|
> |Application Server Container|Apache Tomcat/8.5.6|
> |atabase JNDI address|mysql 
> jdbc:mysql://mo-15e744225:3306/jira?useUnicode=true=UTF8=default_storage_engine=InnoDB|
> |Database version|5.6.27|
> |abase driver|MySQL Connector Java mysql-connector-java-5.1.34 ( Revision: 
> jess.bal...@oracle.com-20141014163213-wqbwpf1ok2kvo1om )|
> |Version|7.6.1|
>Reporter: changchun huang
>Assignee: Uwe Schindler
>Priority: Major
>
> We are running Jira 7.6.1 with Lucene 3.3 on SLES 12 SP1
> We configured 16GB Jira heap on 64GB server
> However, each time, when we run background re-index, the memory will be used 
> out by Lucene and we could not only limit its memory consumption.
> This definitely will cause overall performance issue on a system with heavy 
> load.
> We have around 500 concurrent users, 400K issues.
> Could you please help to advice if there were workaround  or fix for this?
> Thanks.
>  
> BTW: I did check a lot and found a blog introducing the new behavior of 
> Lucene 3.3
> [http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-8389) Could not limit Lucene's memory consumption

2018-07-06 Thread Uwe Schindler (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved LUCENE-8389.
---
Resolution: Not A Bug
  Assignee: Uwe Schindler

> Could not limit Lucene's memory consumption
> ---
>
> Key: LUCENE-8389
> URL: https://issues.apache.org/jira/browse/LUCENE-8389
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 3.3
> Environment: |Java Version|1.8.0_102|
> |Operating System|Linux 3.12.48-52.27-default|
> |Application Server Container|Apache Tomcat/8.5.6|
> |atabase JNDI address|mysql 
> jdbc:mysql://mo-15e744225:3306/jira?useUnicode=true=UTF8=default_storage_engine=InnoDB|
> |Database version|5.6.27|
> |abase driver|MySQL Connector Java mysql-connector-java-5.1.34 ( Revision: 
> jess.bal...@oracle.com-20141014163213-wqbwpf1ok2kvo1om )|
> |Version|7.6.1|
>Reporter: changchun huang
>Assignee: Uwe Schindler
>Priority: Major
>
> We are running Jira 7.6.1 with Lucene 3.3 on SLES 12 SP1
> We configured 16GB Jira heap on 64GB server
> However, each time, when we run background re-index, the memory will be used 
> out by Lucene and we could not only limit its memory consumption.
> This definitely will cause overall performance issue on a system with heavy 
> load.
> We have around 500 concurrent users, 400K issues.
> Could you please help to advice if there were workaround  or fix for this?
> Thanks.
>  
> BTW: I did check a lot and found a blog introducing the new behavior of 
> Lucene 3.3
> [http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-8389) Could not limit Lucene's memory consumption

2018-07-06 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534836#comment-16534836
 ] 

Uwe Schindler edited comment on LUCENE-8389 at 7/6/18 1:52 PM:
---

You are not precise about which memory is being exhausted, so your question is unclear.

If you give JIRA and Lucene 16 GiB of heap, they will of course use all of it - 
that's fine. This has nothing to do with MMapDirectory (see the blog post), because 
the filesystem cache lives outside the heap: Lucene uses lots of off-heap space, but 
that is only virtual address space and has nothing to do with allocated heap. If you 
run top on your Linux installation you will see a column "RES", which should be 
around the heap size plus a bit of extra memory (around 20 GiB). In addition, top's 
"VIRT" column shows the reserved address space, which will be RES plus the size of 
all open indexes. Depending on index size this can be up to several hundred gigabytes.

The general rule is: keep a MINIMUM of 50% of physical RAM free to allow file 
system caching. So if you use 16 GiB cache, you should have at least 32 GiB, 
better 48 GiB, of physical RAM in the machine (depending on index size).

So high VM usage in VIRT is wanted and fine. If JIRA is taking lots of heap 
space and really needs 16 GiB, that is not our problem, so ask JIRA support for 
help. I will close this bug report as "Not a bug", as it's not our issue. 

There are rumors that JIRA will update to a later Lucene version; maybe that 
will help. Lucene 3.3 has been out of maintenance for 6 years.


was (Author: thetaphi):
You are not fully precise what memory is used out, so your question is unclear.

If you give 16 GiB of heap, JIRA and Lucene it will of course use it out - 
thats fine. This has nothing to do with MMapDirectory (see blog post), because 
the filesystem cache is outside of heap, so Lucene uses lots of off-heap space, 
but that's just virtual and has nothing to do with allocated heap space. If you 
run TOP on your Linux installation you will see a column "RES", which should be 
around the heap size plus a bit or extra memory (around 20 GiB) heap. In 
addition the column "VIRT" of top is showing the reserved address space, which 
will be RES plus the size of all open indexes. Depending on index size this can 
be up to several hundreds of gigabytes.

The general rule is: Keep a MINIMUM of 50% of physical RAM free to allow file 
system caching. So if you use 16 GiB cache, you should have at least 32 GiB, 
better 48 GiB of physical RAM in the machine (depending on index size).

So high VM usage in VIRT is wanted and fine. If JIRA is taking lots of heap 
space and really needs 16 GiB this is not our problem, so ask JIRA for help. I 
will close this bug report as "won't fix", as it's not our issue. 

There are rumors that JIRA will update the Lucene support to a later version, 
maybe this will help. Lucene 3.3 is out of maintenance since 6 years.

> Could not limit Lucene's memory consumption
> ---
>
> Key: LUCENE-8389
> URL: https://issues.apache.org/jira/browse/LUCENE-8389
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 3.3
> Environment: |Java Version|1.8.0_102|
> |Operating System|Linux 3.12.48-52.27-default|
> |Application Server Container|Apache Tomcat/8.5.6|
> |atabase JNDI address|mysql 
> jdbc:mysql://mo-15e744225:3306/jira?useUnicode=true=UTF8=default_storage_engine=InnoDB|
> |Database version|5.6.27|
> |abase driver|MySQL Connector Java mysql-connector-java-5.1.34 ( Revision: 
> jess.bal...@oracle.com-20141014163213-wqbwpf1ok2kvo1om )|
> |Version|7.6.1|
>Reporter: changchun huang
>Priority: Major
>
> We are running Jira 7.6.1 with Lucene 3.3 on SLES 12 SP1
> We configured 16GB Jira heap on 64GB server
> However, each time, when we run background re-index, the memory will be used 
> out by Lucene and we could not only limit its memory consumption.
> This definitely will cause overall performance issue on a system with heavy 
> load.
> We have around 500 concurrent users, 400K issues.
> Could you please help to advice if there were workaround  or fix for this?
> Thanks.
>  
> BTW: I did check a lot and found a blog introducing the new behavior of 
> Lucene 3.3
> [http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8389) Could not limit Lucene's memory consumption

2018-07-06 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534836#comment-16534836
 ] 

Uwe Schindler commented on LUCENE-8389:
---

You are not precise about which memory is being exhausted, so your question is unclear.

If you give JIRA and Lucene 16 GiB of heap, they will of course use all of it - 
that's fine. This has nothing to do with MMapDirectory (see the blog post), because 
the filesystem cache lives outside the heap: Lucene uses lots of off-heap space, but 
that is only virtual address space and has nothing to do with allocated heap. If you 
run top on your Linux installation you will see a column "RES", which should be 
around the heap size plus a bit of extra memory (around 20 GiB). In addition, top's 
"VIRT" column shows the reserved address space, which will be RES plus the size of 
all open indexes. Depending on index size this can be up to several hundred gigabytes.

The general rule is: keep a MINIMUM of 50% of physical RAM free to allow file 
system caching. So if you use 16 GiB cache, you should have at least 32 GiB, 
better 48 GiB, of physical RAM in the machine (depending on index size).

So high VM usage in VIRT is wanted and fine. If JIRA is taking lots of heap 
space and really needs 16 GiB, that is not our problem, so ask JIRA support for 
help. I will close this bug report as "won't fix", as it's not our issue. 

There are rumors that JIRA will update to a later Lucene version; maybe that 
will help. Lucene 3.3 has been out of maintenance for 6 years.

> Could not limit Lucene's memory consumption
> ---
>
> Key: LUCENE-8389
> URL: https://issues.apache.org/jira/browse/LUCENE-8389
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 3.3
> Environment: |Java Version|1.8.0_102|
> |Operating System|Linux 3.12.48-52.27-default|
> |Application Server Container|Apache Tomcat/8.5.6|
> |atabase JNDI address|mysql 
> jdbc:mysql://mo-15e744225:3306/jira?useUnicode=true=UTF8=default_storage_engine=InnoDB|
> |Database version|5.6.27|
> |abase driver|MySQL Connector Java mysql-connector-java-5.1.34 ( Revision: 
> jess.bal...@oracle.com-20141014163213-wqbwpf1ok2kvo1om )|
> |Version|7.6.1|
>Reporter: changchun huang
>Priority: Major
>
> We are running Jira 7.6.1 with Lucene 3.3 on SLES 12 SP1
> We configured 16GB Jira heap on 64GB server
> However, each time, when we run background re-index, the memory will be used 
> out by Lucene and we could not only limit its memory consumption.
> This definitely will cause overall performance issue on a system with heavy 
> load.
> We have around 500 concurrent users, 400K issues.
> Could you please help to advice if there were workaround  or fix for this?
> Thanks.
>  
> BTW: I did check a lot and found a blog introducing the new behavior of 
> Lucene 3.3
> [http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [jira] [Created] (LUCENE-8389) Could not limit Lucene's memory consumption

2018-07-06 Thread Michael Sokolov
You should really try asking on an Atlassian support forum since Jira is
their project and they support it. This bug database is for tracking issues
about Lucene itself. Also please note that Lucene 3 is many years old now,
and no longer receiving bug fixes. The current version is 7, soon to be 8,
so even if there is a real issue with that version of Lucene, you are
unlikely to get much help with it now.

On Fri, Jul 6, 2018, 5:17 AM changchun huang (JIRA)  wrote:

> changchun huang created LUCENE-8389:
> ---
>
>  Summary: Could not limit Lucene's memory consumption
>  Key: LUCENE-8389
>  URL: https://issues.apache.org/jira/browse/LUCENE-8389
>  Project: Lucene - Core
>   Issue Type: Bug
>   Components: core/index
> Affects Versions: 3.3
>  Environment: |Java Version|1.8.0_102|
> |Operating System|Linux 3.12.48-52.27-default|
> |Application Server Container|Apache Tomcat/8.5.6|
> |atabase JNDI address|mysql
> jdbc:mysql://mo-15e744225:3306/jira?useUnicode=true=UTF8=default_storage_engine=InnoDB|
> |Database version|5.6.27|
> |abase driver|MySQL Connector Java mysql-connector-java-5.1.34 ( Revision:
> jess.bal...@oracle.com-20141014163213-wqbwpf1ok2kvo1om )|
> |Version|7.6.1|
> Reporter: changchun huang
>
>
> We are running Jira 7.6.1 with Lucene 3.3 on SLES 12 SP1
>
> We configured 16GB Jira heap on 64GB server
>
> However, each time, when we run background re-index, the memory will be
> used out by Lucene and we could not only limit its memory consumption.
>
> This definitely will cause overall performance issue on a system with
> heavy load.
>
> We have around 500 concurrent users, 400K issues.
>
> Could you please help to advice if there were workaround  or fix for this?
>
> Thanks.
>
>
>
> BTW: I did check a lot and found a blog introducing the new behavior of
> Lucene 3.3
>
> [http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html]
>
>
>
>
>
> --
> This message was sent by Atlassian JIRA
> (v7.6.3#76005)
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-11-ea+14) - Build # 2270 - Unstable!

2018-07-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2270/
Java: 64bit/jdk-11-ea+14 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.DeleteReplicaTest.deleteReplicaOnIndexing

Error Message:
Captured an uncaught exception in thread: Thread[id=16109, 
name=updateExecutor-4432-thread-25, state=RUNNABLE, 
group=TGRP-DeleteReplicaTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=16109, name=updateExecutor-4432-thread-25, 
state=RUNNABLE, group=TGRP-DeleteReplicaTest]
at 
__randomizedtesting.SeedInfo.seed([5596094ECEF27705:2CED25C16D6FBD26]:0)
Caused by: org.apache.solr.common.SolrException: Replica: 
http://127.0.0.1:40373/solr/deleteReplicaOnIndexing_shard1_replica_n2/ should 
have been marked under leader initiated recovery in ZkController but wasn't.
at __randomizedtesting.SeedInfo.seed([5596094ECEF27705]:0)
at 
org.apache.solr.cloud.LeaderInitiatedRecoveryThread.run(LeaderInitiatedRecoveryThread.java:90)
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:832)




Build Log:
[...truncated 13880 lines...]
   [junit4] Suite: org.apache.solr.cloud.DeleteReplicaTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.DeleteReplicaTest_5596094ECEF27705-001/init-core-data-001
   [junit4]   2> 1188306 WARN  
(SUITE-DeleteReplicaTest-seed#[5596094ECEF27705]-worker) [] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=13 numCloses=13
   [junit4]   2> 1188306 INFO  
(SUITE-DeleteReplicaTest-seed#[5596094ECEF27705]-worker) [] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 1188307 INFO  
(SUITE-DeleteReplicaTest-seed#[5596094ECEF27705]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (true) via: 
@org.apache.solr.util.RandomizeSSL(reason="", value=0.0/0.0, ssl=0.0/0.0, 
clientAuth=0.0/0.0)
   [junit4]   2> 1188307 INFO  
(SUITE-DeleteReplicaTest-seed#[5596094ECEF27705]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> 1188307 INFO  
(SUITE-DeleteReplicaTest-seed#[5596094ECEF27705]-worker) [] 
o.a.s.c.MiniSolrCloudCluster Starting cluster of 4 servers in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.DeleteReplicaTest_5596094ECEF27705-001/tempDir-001
   [junit4]   2> 1188307 INFO  
(SUITE-DeleteReplicaTest-seed#[5596094ECEF27705]-worker) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1188307 INFO  (Thread-3276) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1188307 INFO  (Thread-3276) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 1188309 ERROR (Thread-3276) [] o.a.z.s.ZooKeeperServer 
ZKShutdownHandler is not registered, so ZooKeeper server won't take any action 
on ERROR or SHUTDOWN server state changes
   [junit4]   2> 1188407 INFO  
(SUITE-DeleteReplicaTest-seed#[5596094ECEF27705]-worker) [] 
o.a.s.c.ZkTestServer start zk server on port:39221
   [junit4]   2> 1188409 INFO  (zkConnectionManagerCallback-4325-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1188411 INFO  (jetty-launcher-4322-thread-2) [] 
o.e.j.s.Server jetty-9.4.11.v20180605; built: 2018-06-05T18:24:03.829Z; git: 
d5fc0523cfa96bfebfbda19606cad384d772f04c; jvm 11-ea+14
   [junit4]   2> 1188411 INFO  (jetty-launcher-4322-thread-3) [] 
o.e.j.s.Server jetty-9.4.11.v20180605; built: 2018-06-05T18:24:03.829Z; git: 
d5fc0523cfa96bfebfbda19606cad384d772f04c; jvm 11-ea+14
   [junit4]   2> 1188411 INFO  (jetty-launcher-4322-thread-4) [] 
o.e.j.s.Server jetty-9.4.11.v20180605; built: 2018-06-05T18:24:03.829Z; git: 
d5fc0523cfa96bfebfbda19606cad384d772f04c; jvm 11-ea+14
   [junit4]   2> 1188411 INFO  (jetty-launcher-4322-thread-1) [] 
o.e.j.s.Server jetty-9.4.11.v20180605; built: 2018-06-05T18:24:03.829Z; git: 
d5fc0523cfa96bfebfbda19606cad384d772f04c; jvm 11-ea+14
   [junit4]   2> 1188423 INFO  (jetty-launcher-4322-thread-1) [] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 1188423 INFO  (jetty-launcher-4322-thread-1) [] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 1188423 INFO  (jetty-launcher-4322-thread-1) [] 
o.e.j.s.session 

[JENKINS] Lucene-Solr-repro - Build # 930 - Still Unstable

2018-07-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/930/

[...truncated 51 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-7.x/665/consoleText

[repro] Revision: a8a1cf8a88915eb42786e5e7d8a321130f67b689

[repro] Repro line:  ant test  -Dtestcase=CdcrBidirectionalTest 
-Dtests.method=testBiDir -Dtests.seed=FBD20284F364263D -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=is -Dtests.timezone=America/Resolute 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
a09f3facfc5d9d096661a2c45458fc5b55a07819
[repro] git fetch
[repro] git checkout a8a1cf8a88915eb42786e5e7d8a321130f67b689

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   CdcrBidirectionalTest
[repro] ant compile-test

[...truncated 3318 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.CdcrBidirectionalTest" -Dtests.showOutput=onerror  
-Dtests.seed=FBD20284F364263D -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=is -Dtests.timezone=America/Resolute -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 2465 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   2/5 failed: org.apache.solr.cloud.cdcr.CdcrBidirectionalTest
[repro] git checkout a09f3facfc5d9d096661a2c45458fc5b55a07819

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[GitHub] lucene-solr pull request #414: SOLR-12007 When a SolrCore is closed, cleanup...

2018-07-06 Thread gaborkaszab
GitHub user gaborkaszab opened a pull request:

https://github.com/apache/lucene-solr/pull/414

SOLR-12007 When a SolrCore is closed, cleanupOldIndexDirectories is called
in a background thread that will race with DirectoryFactory close.

Change-Id: I8b77fd1469a956f20245835e6db2819294c2b58a

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gaborkaszab/lucene-solr-1 SOLR-12007

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/414.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #414


commit 00ec34f0550ddbdb064807895fcedffd541afeb8
Author: Mark Miller 
Date:   2018-02-20T15:35:54Z

SOLR-12007 When a SolrCore is closed, cleanupOldIndexDirectories is called 
in a background thread
that will race with DirectoryFactory close.

Change-Id: I8b77fd1469a956f20245835e6db2819294c2b58a




---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 22397 - Unstable!

2018-07-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22397/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.MoveReplicaHDFSFailoverTest.testOldReplicaIsDeletedInRaceCondition

Error Message:
Error from server at http://127.0.0.1:3/solr: Could not fully remove 
collection: movereplicatest_coll4

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:3/solr: Could not fully remove collection: 
movereplicatest_coll4
at 
__randomizedtesting.SeedInfo.seed([6281FDFEED5E51C9:68D172882D5E3068]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.MoveReplicaHDFSFailoverTest.testOldReplicaIsDeletedInRaceCondition(MoveReplicaHDFSFailoverTest.java:195)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 

[jira] [Commented] (LUCENE-8343) BlendedInfixSuggester bad score calculus for certain suggestion weights

2018-07-06 Thread Alessandro Benedetti (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534631#comment-16534631
 ] 

Alessandro Benedetti commented on LUCENE-8343:
--

Any update on this? Is there anything I can do to help move this forward?

> BlendedInfixSuggester bad score calculus for certain suggestion weights
> ---
>
> Key: LUCENE-8343
> URL: https://issues.apache.org/jira/browse/LUCENE-8343
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 7.3.1
>Reporter: Alessandro Benedetti
>Priority: Major
> Attachments: LUCENE-8343.patch, LUCENE-8343.patch, LUCENE-8343.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently the BlendedInfixSuggester return a (long) score to rank the 
> suggestions.
> This score is calculated as a multiplication between :
> long *Weight* : the suggestion weight, coming from a document field, it can 
> be any long value ( including 1, 0,.. )
> double *Coefficient* : 0<=x<=1, calculated based on the position match, 
> earlier the better
> The resulting score is a long, which means that at the moment, any weight<10 
> can bring inconsistencies.
> *Edge cases* 
> Weight =1
> Score = 1( if we have a match at the beginning of the suggestion) or 0 ( for 
> any other match)
> Weight =0
> Score = 0 ( independently of the position match coefficient)
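
For illustration of the truncation described above, a small self-contained sketch 
(illustrative values only, not the suggester's actual code):

    public class BlendedScoreDemo {
      // Illustrative only: long truncation of weight * positionCoefficient.
      static long score(long weight, double coefficient) {
        return (long) (weight * coefficient); // fractional part is dropped
      }

      public static void main(String[] args) {
        System.out.println(score(1, 1.0)); // 1: match at the very start of the suggestion
        System.out.println(score(1, 0.5)); // 0: any later match position collapses to 0
        System.out.println(score(0, 1.0)); // 0: weight 0 always scores 0
        System.out.println(score(9, 0.5)); // 4: weights < 10 lose most of their resolution
      }
    }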



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8389) Could not limit Lucene's memory consumption

2018-07-06 Thread changchun huang (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534619#comment-16534619
 ] 

changchun huang commented on LUCENE-8389:
-

We cannot limit Lucene's memory consumption via configuration or cgroups beyond 
the JVM settings, so we need help with this.

> Could not limit Lucene's memory consumption
> ---
>
> Key: LUCENE-8389
> URL: https://issues.apache.org/jira/browse/LUCENE-8389
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 3.3
> Environment: |Java Version|1.8.0_102|
> |Operating System|Linux 3.12.48-52.27-default|
> |Application Server Container|Apache Tomcat/8.5.6|
> |atabase JNDI address|mysql 
> jdbc:mysql://mo-15e744225:3306/jira?useUnicode=true=UTF8=default_storage_engine=InnoDB|
> |Database version|5.6.27|
> |abase driver|MySQL Connector Java mysql-connector-java-5.1.34 ( Revision: 
> jess.bal...@oracle.com-20141014163213-wqbwpf1ok2kvo1om )|
> |Version|7.6.1|
>Reporter: changchun huang
>Priority: Major
>
> We are running Jira 7.6.1 with Lucene 3.3 on SLES 12 SP1
> We configured 16GB Jira heap on 64GB server
> However, each time, when we run background re-index, the memory will be used 
> out by Lucene and we could not only limit its memory consumption.
> This definitely will cause overall performance issue on a system with heavy 
> load.
> We have around 500 concurrent users, 400K issues.
> Could you please help to advice if there were workaround  or fix for this?
> Thanks.
>  
> BTW: I did check a lot and found a blog introducing the new behavior of 
> Lucene 3.3
> [http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8387) Add IndexSearcher.getSlices

2018-07-06 Thread Michael McCandless (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534612#comment-16534612
 ] 

Michael McCandless commented on LUCENE-8387:


I think making the member private and adding a getter is a clean approach?  
I'll make a new patch.

> Add IndexSearcher.getSlices
> ---
>
> Key: LUCENE-8387
> URL: https://issues.apache.org/jira/browse/LUCENE-8387
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Major
> Attachments: LUCENE-8387.patch
>
>
> When you pass an executor to {{IndexSearcher}}, it creates a {{LeafSlice[]}} 
> slices, by default once slice per leaf, but a subclass can override.  It's 
> helpful to later be able to get those slices e.g. if you want to do your own 
> concurrent per-slice processing.
> This patch will just add a getter to {{IndexSearcher}}, and make the 
> {{LeafSlice.leaves}} member public.
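
For illustration, a minimal sketch assuming the proposed getter lands as 
IndexSearcher#getSlices() and that LeafSlice.leaves becomes public as described 
above; neither is in the released API yet:

    import org.apache.lucene.index.LeafReaderContext;
    import org.apache.lucene.search.IndexSearcher;

    public class SlicesDemo {
      static void processPerSlice(IndexSearcher searcher) {
        for (IndexSearcher.LeafSlice slice : searcher.getSlices()) { // proposed getter
          for (LeafReaderContext leaf : slice.leaves) {              // member made public
            // submit per-slice work to your own executor here
            System.out.println("leaf docBase=" + leaf.docBase);
          }
        }
      }
    }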



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8389) Could not limit Lucene's memory consumption

2018-07-06 Thread changchun huang (JIRA)
changchun huang created LUCENE-8389:
---

 Summary: Could not limit Lucene's memory consumption
 Key: LUCENE-8389
 URL: https://issues.apache.org/jira/browse/LUCENE-8389
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 3.3
 Environment: |Java Version|1.8.0_102|
|Operating System|Linux 3.12.48-52.27-default|
|Application Server Container|Apache Tomcat/8.5.6|
|Database JNDI address|mysql 
jdbc:mysql://mo-15e744225:3306/jira?useUnicode=true=UTF8=default_storage_engine=InnoDB|
|Database version|5.6.27|
|Database driver|MySQL Connector Java mysql-connector-java-5.1.34 ( Revision: 
jess.bal...@oracle.com-20141014163213-wqbwpf1ok2kvo1om )|
|Version|7.6.1|
Reporter: changchun huang


We are running Jira 7.6.1 with Lucene 3.3 on SLES 12 SP1.

We configured a 16 GB Jira heap on a 64 GB server.

However, every time we run a background re-index, Lucene uses up the memory and 
we cannot limit its memory consumption.

This will definitely cause an overall performance issue on a system under heavy 
load.

We have around 500 concurrent users and 400K issues.

Could you please advise whether there is a workaround or fix for this?

Thanks.

 

BTW: I did check a lot and found a blog introducing the new behavior of Lucene 
3.3

[http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html]

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11764) preanalyzed field with highlight option throws exception

2018-07-06 Thread Gianpiero Sportelli (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534600#comment-16534600
 ] 

Gianpiero Sportelli commented on SOLR-11764:


Yes, your solution works.

We set {{}}

Thank you

> preanalyzed field with highlight option throws exception
> 
>
> Key: SOLR-11764
> URL: https://issues.apache.org/jira/browse/SOLR-11764
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: highlighter
>Affects Versions: 6.4
>Reporter: Selvam Raman
>Priority: Major
>
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: 
> Error from server at http://localhost:8983/solr/Metadata2: 
> org.apache.solr.client.solrj.SolrServerException:
> No live SolrServers available to handle this 
> request:[/solr/Metadata2_shard1_replica1,
>   solr/Metadata2_shard2_replica2, 
>   solr/Metadata2_shard1_replica2]
> When i look at the solr logs i find the below exception
> Caused by: java.io.IOException: Invalid JSON type java.lang.String, expected 
> Map
>   at 
> org.apache.solr.schema.JsonPreAnalyzedParser.parse(JsonPreAnalyzedParser.java:86)
>   at 
> org.apache.solr.schema.PreAnalyzedField$PreAnalyzedTokenizer.decodeInput(PreAnalyzedField.java:345)
>   at 
> org.apache.solr.schema.PreAnalyzedField$PreAnalyzedTokenizer.access$000(PreAnalyzedField.java:280)
>   at 
> org.apache.solr.schema.PreAnalyzedField$PreAnalyzedAnalyzer$1.setReader(PreAnalyzedField.java:375)
>   at org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:202)
>   at 
> org.apache.lucene.search.uhighlight.AnalysisOffsetStrategy.tokenStream(AnalysisOffsetStrategy.java:58)
>   at 
> org.apache.lucene.search.uhighlight.MemoryIndexOffsetStrategy.getOffsetsEnums(MemoryIndexOffsetStrategy.java:106)
>   ... 37 more
>  I am setting up lot of fields (fq, score, highlight,etc) then put it into 
> solrquery.
> "we are using preanalyzed field and that causing the problem. 
> The actual problem is preanalyzed with highlight option. if i disable 
> highlight option it works fine."



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10783) Using Hadoop Credential Provider as SSL/TLS store password source

2018-07-06 Thread Mano Kovacs (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534584#comment-16534584
 ] 

Mano Kovacs commented on SOLR-10783:


[~jafurrer], it seems like it. The reason it is printed out is because SolrCLI 
also initializes SSL to talk to Solr instances and it has no log4j config. So 
basically that is not Solr printing it out, but SolrCLI. Sorry for missing 
that, I'll prepare a fix.

> Using Hadoop Credential Provider as SSL/TLS store password source
> -
>
> Key: SOLR-10783
> URL: https://issues.apache.org/jira/browse/SOLR-10783
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Affects Versions: 7.0
>Reporter: Mano Kovacs
>Assignee: Mark Miller
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-10783-fix.patch, SOLR-10783.patch, 
> SOLR-10783.patch, SOLR-10783.patch, SOLR-10783.patch, SOLR-10783.patch, 
> SOLR-10783.patch, SOLR-10783.patch, SOLR-10783.patch
>
>
> As a second iteration of SOLR-10307, I propose support of hadoop credential 
> providers as source of SSL store passwords. 
> Motivation: When SOLR is used in hadoop environment, support of  HCP gives 
> better integration and unified method to pass sensitive credentials to SOLR.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11417) Crashed leader's hanging emphemral will make restarting followers stuck in recovering

2018-07-06 Thread Mano Kovacs (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mano Kovacs updated SOLR-11417:
---
Fix Version/s: 7.3

> Crashed leader's hanging emphemral will make restarting followers stuck in 
> recovering
> -
>
> Key: SOLR-11417
> URL: https://issues.apache.org/jira/browse/SOLR-11417
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.3
>Reporter: Mano Kovacs
>Priority: Major
> Fix For: 7.3
>
> Attachments: SOLR-11417.png
>
>
> If replicas are starting up after leader crash and within the ZK session 
> timeout, replicas
> * will lose leader election due to hanging ephemerals
> * will read stale data from ZK about current leader
> * will fail recovery and stuck in recovering state
> If leader is down permanently (eg. hardware failure) and all replicas are 
> affected, shard will not come up (see also SOLR-7065).
> Tested on 6.3. See attached image for details.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11417) Crashed leader's hanging emphemral will make restarting followers stuck in recovering

2018-07-06 Thread Mano Kovacs (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mano Kovacs resolved SOLR-11417.

Resolution: Fixed

Fixed by SOLR-12011

> Crashed leader's hanging emphemral will make restarting followers stuck in 
> recovering
> -
>
> Key: SOLR-11417
> URL: https://issues.apache.org/jira/browse/SOLR-11417
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.3
>Reporter: Mano Kovacs
>Priority: Major
> Attachments: SOLR-11417.png
>
>
> If replicas are starting up after leader crash and within the ZK session 
> timeout, replicas
> * will lose leader election due to hanging ephemerals
> * will read stale data from ZK about current leader
> * will fail recovery and stuck in recovering state
> If leader is down permanently (eg. hardware failure) and all replicas are 
> affected, shard will not come up (see also SOLR-7065).
> Tested on 6.3. See attached image for details.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8388) Deprecate and remove PostingsEnum#attributes()

2018-07-06 Thread Alan Woodward (JIRA)
Alan Woodward created LUCENE-8388:
-

 Summary: Deprecate and remove PostingsEnum#attributes()
 Key: LUCENE-8388
 URL: https://issues.apache.org/jira/browse/LUCENE-8388
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Alan Woodward
Assignee: Alan Woodward


This method isn't used anywhere in the codebase, and seems to be entirely 
useless.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8306) Allow iteration over the term positions of a Match

2018-07-06 Thread Alan Woodward (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534564#comment-16534564
 ] 

Alan Woodward commented on LUCENE-8306:
---

I've been playing around with various options for this API, and I think the one 
that fits best with highlighters is to add another method to Matches that 
returns a PostingsEnum across all term matches for a particular field.  
Highlighters can call {{Matches.getMatches(field)}} to get an iterator over 
intervals, which will allow them to decide how to build passages, and then 
{{Matches.getTermMatches(field)}} to get the individual term matches - this 
would also allow for exposing term frequencies for scoring, payloads, etc.

I'm not sure yet whether or not to return a TermPostingsEnum or just a plain 
PostingsEnum - the latter keeps the API surface low, but the former could be 
useful for passage scoring.
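
For illustration, a rough sketch that consumes the existing Matches.getMatches(field) 
iterator; the getTermMatches(field) call is only the proposal discussed above and 
does not exist in the API yet:

    import java.io.IOException;
    import org.apache.lucene.index.LeafReaderContext;
    import org.apache.lucene.search.Matches;
    import org.apache.lucene.search.MatchesIterator;
    import org.apache.lucene.search.Weight;

    public class MatchesDemo {
      static void printMatches(Weight weight, String field, LeafReaderContext ctx, int doc)
          throws IOException {
        Matches matches = weight.matches(ctx, doc);        // null if the doc does not match
        if (matches == null) return;
        MatchesIterator it = matches.getMatches(field);    // interval iterator for one field
        while (it != null && it.next()) {
          System.out.println("positions " + it.startPosition() + "-" + it.endPosition());
        }
        // Proposed addition (hypothetical): PostingsEnum terms = matches.getTermMatches(field);
      }
    }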

> Allow iteration over the term positions of a Match
> --
>
> Key: LUCENE-8306
> URL: https://issues.apache.org/jira/browse/LUCENE-8306
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8306.patch, LUCENE-8306.patch
>
>
> For multi-term queries such as phrase queries, the matches API currently just 
> returns information about the span of the whole match.  It would be useful to 
> also expose information about the matching terms within the phrase.  The same 
> would apply to Spans and Interval queries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 91 - Still Unstable

2018-07-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/91/

4 tests failed.
FAILED:  org.apache.solr.cloud.TestAuthenticationFramework.testBasics

Error Message:
Error from server at 
http://127.0.0.1:59685/solr/testcollection_shard1_replica_n2: Expected mime 
type application/octet-stream but got text/html.Error 404 
Can not find: /solr/testcollection_shard1_replica_n2/update  
HTTP ERROR 404 Problem accessing 
/solr/testcollection_shard1_replica_n2/update. Reason: Can not find: 
/solr/testcollection_shard1_replica_n2/updatehttp://eclipse.org/jetty;>Powered by Jetty:// 9.4.11.v20180605  
  

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:59685/solr/testcollection_shard1_replica_n2: 
Expected mime type application/octet-stream but got text/html. 


Error 404 Can not find: 
/solr/testcollection_shard1_replica_n2/update

HTTP ERROR 404
Problem accessing /solr/testcollection_shard1_replica_n2/update. Reason:
Can not find: 
/solr/testcollection_shard1_replica_n2/updatehttp://eclipse.org/jetty;>Powered by Jetty:// 9.4.11.v20180605




at 
__randomizedtesting.SeedInfo.seed([EC0E8466C13A747D:D1D62A4AF9D42A0D]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1015)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.cloud.TestAuthenticationFramework.collectionCreateSearchDeleteTwice(TestAuthenticationFramework.java:127)
at 
org.apache.solr.cloud.TestAuthenticationFramework.testBasics(TestAuthenticationFramework.java:75)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[JENKINS] Lucene-Solr-BadApples-7.x-Linux (32bit/jdk1.8.0_172) - Build # 60 - Still Unstable!

2018-07-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-7.x-Linux/60/
Java: 32bit/jdk1.8.0_172 -server -XX:+UseSerialGC

13 tests failed.
FAILED:  org.apache.solr.cloud.api.collections.ShardSplitTest.test

Error Message:
Wrong doc count on shard1_0. See SOLR-5309 expected:<199> but was:<202>

Stack Trace:
java.lang.AssertionError: Wrong doc count on shard1_0. See SOLR-5309 
expected:<199> but was:<202>
at __randomizedtesting.SeedInfo.seed([D68A1862C63B1F58:5EDE27B868C772A0]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.apache.solr.cloud.api.collections.ShardSplitTest.checkDocCountsAndShardStates(ShardSplitTest.java:968)
at org.apache.solr.cloud.api.collections.ShardSplitTest.splitByUniqueKeyTest(ShardSplitTest.java:751)
at org.apache.solr.cloud.api.collections.ShardSplitTest.test(ShardSplitTest.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1008)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk1.8.0_172) - Build # 674 - Unstable!

2018-07-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/674/
Java: 64bit/jdk1.8.0_172 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestCloudConsistency.testOutOfSyncReplicasCannotBecomeLeaderAfterRestart

Error Message:
Timeout waiting for 1x3 collection null
Live Nodes: [127.0.0.1:55127_solr, 127.0.0.1:55143_solr, 127.0.0.1:55159_solr, 127.0.0.1:55175_solr]
Last available state: DocCollection(outOfSyncReplicasCannotBecomeLeader-true//collections/outOfSyncReplicasCannotBecomeLeader-true/state.json/9)={
  "pullReplicas":"0", "replicationFactor":"3",
  "shards":{"shard1":{
    "range":"8000-7fff", "state":"active",
    "replicas":{
      "core_node62":{"core":"outOfSyncReplicasCannotBecomeLeader-true_shard1_replica_n61", "base_url":"https://127.0.0.1:55127/solr", "node_name":"127.0.0.1:55127_solr", "state":"active", "type":"NRT", "force_set_state":"false", "leader":"true"},
      "core_node64":{"core":"outOfSyncReplicasCannotBecomeLeader-true_shard1_replica_n63", "base_url":"https://127.0.0.1:55143/solr", "node_name":"127.0.0.1:55143_solr", "state":"recovering", "type":"NRT", "force_set_state":"false"},
      "core_node66":{"core":"outOfSyncReplicasCannotBecomeLeader-true_shard1_replica_n65", "base_url":"https://127.0.0.1:55159/solr", "node_name":"127.0.0.1:55159_solr", "state":"recovering", "type":"NRT", "force_set_state":"false"}}}},
  "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "nrtReplicas":"3", "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Timeout waiting for 1x3 collection
null
Live Nodes: [127.0.0.1:55127_solr, 127.0.0.1:55143_solr, 127.0.0.1:55159_solr, 127.0.0.1:55175_solr]
Last available state: 
DocCollection(outOfSyncReplicasCannotBecomeLeader-true//collections/outOfSyncReplicasCannotBecomeLeader-true/state.json/9)={
  "pullReplicas":"0",
  "replicationFactor":"3",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node62":{
  "core":"outOfSyncReplicasCannotBecomeLeader-true_shard1_replica_n61",
  "base_url":"https://127.0.0.1:55127/solr;,
  "node_name":"127.0.0.1:55127_solr",
  "state":"active",
  "type":"NRT",
  "force_set_state":"false",
  "leader":"true"},
"core_node64":{
  "core":"outOfSyncReplicasCannotBecomeLeader-true_shard1_replica_n63",
  "base_url":"https://127.0.0.1:55143/solr;,
  "node_name":"127.0.0.1:55143_solr",
  "state":"recovering",
  "type":"NRT",
  "force_set_state":"false"},
"core_node66":{
  "core":"outOfSyncReplicasCannotBecomeLeader-true_shard1_replica_n65",
  "base_url":"https://127.0.0.1:55159/solr;,
  "node_name":"127.0.0.1:55159_solr",
  "state":"recovering",
  "type":"NRT",
  "force_set_state":"false",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"3",
  "tlogReplicas":"0"}
at __randomizedtesting.SeedInfo.seed([DB1DE3EF6C5522A3:F38871BABFD8F3F8]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:278)
at org.apache.solr.cloud.TestCloudConsistency.testOutOfSyncReplicasCannotBecomeLeader(TestCloudConsistency.java:118)
at org.apache.solr.cloud.TestCloudConsistency.testOutOfSyncReplicasCannotBecomeLeaderAfterRestart(TestCloudConsistency.java:99)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at