[JENKINS] Lucene-Solr-SmokeRelease-8.x - Build # 19 - Still Failing

2019-02-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-8.x/19/

No tests ran.

Build Log:
[...truncated 23466 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2476 links (2022 relative) to 3299 anchors in 249 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/package/solr-8.0.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: 

[jira] [Commented] (SOLR-13237) Not all types of index corruption guarantee a leader will "give up its leadership"

2019-02-08 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16764072#comment-16764072
 ] 

Hoss Man commented on SOLR-13237:
-

Also interesting: beasting hasn't turned up many (any?) seeds that reproduce 
(on any branch) _except_ when using {{-Dtests.nightly=true}} ... but the test 
doesn't do anything dependent on TEST_NIGHTLY (or use {{usually()}} or 
{{atLeast()}}, etc.), making me suspect that maybe the compounding factor is 
something chosen at the randomized index config level? Perhaps something 
relating to the codecs?

Need to investigate more.

> Not all types of index corruption guarantee a leader will "give up its 
> leadership"
> 
>
> Key: SOLR-13237
> URL: https://issues.apache.org/jira/browse/SOLR-13237
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
> Attachments: SOLR-13237_logging.patch, log-fail-5D803D4699663918.txt, 
> log-fail-DEADBEEF.txt, log-pass-BEEFBEEF.txt, log-pass-FEEDBEEF.txt
>
>
> While investigating failures from LeaderTragicEventTest, I've found some 
> reproducible situations where (externally introduced) index corruption can 
> cause a leader to reject updates, but not automatically give up its 
> leadership.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8671) Add setting for moving FST offheap/onheap

2019-02-08 Thread Ankit Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16764055#comment-16764055
 ] 

Ankit Jain commented on LUCENE-8671:


Hi David,

Thanks for the feedback.
{quote}Modifying FieldInfo feels wrong to me.  This is a setting that could 
only apply to a subset of our PostingsFormat implementations.  It's not 
fundamental to the metadata FieldInfo tracks.  I'd prefer a more general 
per-field name=value setting approach{quote}
I have added a more generic reader-settings map to FieldInfo in 
[^offheap_generic_settings.patch], which can be used for other purposes as well.

{quote}There are plenty of other settings to our postings formats that don't 
get such 1st class treatment. It's true that it's not "easy" to make these 
low-level settings changes but this doesn't feel like the right way. {quote}
Just for my understanding, since I'm pretty new, can you give examples of some 
of those settings?

Thanks
Ankit



> Add setting for moving FST offheap/onheap
> -
>
> Key: LUCENE-8671
> URL: https://issues.apache.org/jira/browse/LUCENE-8671
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/FSTs, core/store
>Reporter: Ankit Jain
>Priority: Minor
> Attachments: offheap_generic_settings.patch, offheap_settings.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> While LUCENE-8635 adds support for loading the FST offheap using mmap, users 
> do not have the flexibility to specify the fields for which the FST should 
> be offheap. That flexibility would let users tune heap usage to their workload.
> The ideal way would be to add an attribute to FieldInfo, where we already 
> have put/getAttribute. FieldReader can then inspect the FieldInfo and pass 
> the appropriate On/OffHeapStore when creating its FST. It could support 
> special keywords like ALL/NONE.






[jira] [Updated] (LUCENE-8671) Add setting for moving FST offheap/onheap

2019-02-08 Thread Ankit Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Jain updated LUCENE-8671:
---
Attachment: offheap_generic_settings.patch

> Add setting for moving FST offheap/onheap
> -
>
> Key: LUCENE-8671
> URL: https://issues.apache.org/jira/browse/LUCENE-8671
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/FSTs, core/store
>Reporter: Ankit Jain
>Priority: Minor
> Attachments: offheap_generic_settings.patch, offheap_settings.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> While LUCENE-8635 adds support for loading the FST offheap using mmap, users 
> do not have the flexibility to specify the fields for which the FST should 
> be offheap. That flexibility would let users tune heap usage to their workload.
> The ideal way would be to add an attribute to FieldInfo, where we already 
> have put/getAttribute. FieldReader can then inspect the FieldInfo and pass 
> the appropriate On/OffHeapStore when creating its FST. It could support 
> special keywords like ALL/NONE.






[jira] [Commented] (LUCENE-8635) Lazy loading Lucene FST offheap using mmap

2019-02-08 Thread Ankit Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16764051#comment-16764051
 ] 

Ankit Jain commented on LUCENE-8635:


{quote}Ankit Jain that's strange yeah – this patch was supposed to avoid 
kicking in for PK fields right?{quote}
[~sokolov] - Yeah, not sure what's going on. It would be great if someone 
could review the changes, in case I missed something.

> Lazy loading Lucene FST offheap using mmap
> --
>
> Key: LUCENE-8635
> URL: https://issues.apache.org/jira/browse/LUCENE-8635
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/FSTs
> Environment: I used below setup for es_rally tests:
> single node i3.xlarge running ES 6.5
> es_rally was running on another i3.xlarge instance
>Reporter: Ankit Jain
>Priority: Major
> Attachments: fst-offheap-ra-rev.patch, fst-offheap-rev.patch, 
> offheap.patch, optional_offheap_ra.patch, ra.patch, rally_benchmark.xlsx
>
>
> Currently, the FST loads all the terms into heap memory during index open. 
> This causes frequent JVM OOM issues if the terms data gets big. A better way 
> would be to lazily load the FST using mmap, which ensures only the required 
> terms get loaded into memory.
>  
> Lucene can expose an API for providing the list of fields whose terms should 
> be loaded offheap. I'm planning to take the following approach:
>  # Add a boolean property fstOffHeap in FieldInfo
>  # Pass the list of offheap fields to Lucene during index open (ALL can be a 
> special keyword for loading ALL fields offheap)
>  # Initialize the fstOffHeap property during Lucene index open
>  # FieldReader invokes the default FST constructor or the off-heap 
> constructor based on the fstOffHeap property
>  
> I created a patch (that loads all fields offheap) and did some benchmarks 
> using es_rally; the results look good.
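The four-step plan quoted above can be sketched in miniature. All names below 
(resolve, Store, the use of plain sets) are illustrative stand-ins, not 
Lucene's actual FieldInfo/FieldReader API; only the fstOffHeap idea and the 
ALL keyword come from the issue:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Toy model of the proposed per-field offheap decision: a per-field boolean
// (standing in for the fstOffHeap property on FieldInfo) is derived from a
// user-supplied field list, honoring the special ALL keyword.
public class FstLoadSketch {

    enum Store { ON_HEAP, OFF_HEAP }

    // Steps 1-3: derive each field's store from the offheap field list.
    static Map<String, Store> resolve(Set<String> allFields, Set<String> offHeapFields) {
        boolean all = offHeapFields.contains("ALL");
        Map<String, Store> out = new HashMap<>();
        for (String field : allFields) {
            boolean offHeap = all || offHeapFields.contains(field);
            out.put(field, offHeap ? Store.OFF_HEAP : Store.ON_HEAP);
        }
        return out;
    }

    public static void main(String[] args) {
        // Step 4 would then pick the FST constructor per field from this map.
        Map<String, Store> stores = resolve(Set.of("id", "body"), Set.of("body"));
        System.out.println(stores.get("id"));   // ON_HEAP
        System.out.println(stores.get("body")); // OFF_HEAP
    }
}
```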






[JENKINS] Lucene-Solr-NightlyTests-7.7 - Build # 7 - Failure

2019-02-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.7/7/

22 tests failed.
FAILED:  org.apache.solr.core.OpenCloseCoreStressTest.test15Seconds

Error Message:
Core 5_core bad! expected:<1328> but was:<906>

Stack Trace:
java.lang.AssertionError: Core 5_core bad! expected:<1328> but was:<906>
at 
__randomizedtesting.SeedInfo.seed([CD9376F4F7291608:37A926ACE74455F4]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at 
org.apache.solr.core.OpenCloseCoreStressTest.checkResults(OpenCloseCoreStressTest.java:280)
at 
org.apache.solr.core.OpenCloseCoreStressTest.doStress(OpenCloseCoreStressTest.java:179)
at 
org.apache.solr.core.OpenCloseCoreStressTest.test15Seconds(OpenCloseCoreStressTest.java:117)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  

[jira] [Created] (SOLR-13237) Not all types of index corruption guarantee a leader will "give up its leadership"

2019-02-08 Thread Hoss Man (JIRA)
Hoss Man created SOLR-13237:
---

 Summary: Not all types of index corruption guarantee a leader will 
"give up its leadership"
 Key: SOLR-13237
 URL: https://issues.apache.org/jira/browse/SOLR-13237
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man


While investigating failures from LeaderTragicEventTest, I've found some 
reproducible situations where (externally introduced) index corruption can 
cause a leader to reject updates, but not automatically give up its leadership.

 






[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 451 - Still Unstable

2019-02-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/451/

1 tests failed.
FAILED:  org.apache.solr.cloud.hdfs.HdfsUnloadDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=5947, 
name=testExecutor-1851-thread-6, state=RUNNABLE, 
group=TGRP-HdfsUnloadDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=5947, name=testExecutor-1851-thread-6, 
state=RUNNABLE, group=TGRP-HdfsUnloadDistributedZkTest]
Caused by: java.lang.RuntimeException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:44669/u
at __randomizedtesting.SeedInfo.seed([E637C0C68C28D8FD]:0)
at 
org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCollectionInOneInstance$1(BasicDistributedZkTest.java:659)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occured 
while waiting response from server at: http://127.0.0.1:44669/u
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCollectionInOneInstance$1(BasicDistributedZkTest.java:657)
... 4 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at 
org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
at 
org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153)
at 
org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:282)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
at 
org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
at 
org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:165)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
at 
org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
at 
org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at 
org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at 
org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:542)
... 9 more




Build Log:
[...truncated 13388 lines...]
   [junit4] Suite: org.apache.solr.cloud.hdfs.HdfsUnloadDistributedZkTest
   [junit4]   2> 939598 INFO  
(SUITE-HdfsUnloadDistributedZkTest-seed#[E637C0C68C28D8FD]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/checkout/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.HdfsUnloadDistributedZkTest_E637C0C68C28D8FD-001/init-core-data-001
   [junit4]   2> 939599 WARN  
(SUITE-HdfsUnloadDistributedZkTest-seed#[E637C0C68C28D8FD]-worker) [] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=11 numCloses=11
   [junit4]   2> 939599 INFO  
(SUITE-HdfsUnloadDistributedZkTest-seed#[E637C0C68C28D8FD]-worker) 

[jira] [Updated] (SOLR-13229) ZkController.giveupLeadership should cleanup the replicasMetTragicEvent map after all exceptions

2019-02-08 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-13229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-13229:
-
Issue Type: Bug  (was: Improvement)

> ZkController.giveupLeadership should cleanup the replicasMetTragicEvent map 
> after all exceptions
> 
>
> Key: SOLR-13229
> URL: https://issues.apache.org/jira/browse/SOLR-13229
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently {{ZkController.giveupLeadership}} cleans up the 
> {{replicasMetTragicEvent}} map after {{Keeper|Interrupted Exceptions}}; it 
> should also clean up after all other exceptions.






[jira] [Resolved] (LUCENE-8662) Change TermsEnum.seekExact(BytesRef) to abstract + delegate seekExact(BytesRef) in FilterLeafReader.FilterTermsEnum

2019-02-08 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/LUCENE-8662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe resolved LUCENE-8662.
---
   Resolution: Fixed
Fix Version/s: (was: 7.7)

> Change TermsEnum.seekExact(BytesRef) to abstract + delegate 
> seekExact(BytesRef) in FilterLeafReader.FilterTermsEnum
> ---
>
> Key: LUCENE-8662
> URL: https://issues.apache.org/jira/browse/LUCENE-8662
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: 5.5.5, 6.6.5, 7.6, 8.0
>Reporter: jefferyyuan
>Priority: Major
>  Labels: query
> Fix For: 8.0
>
> Attachments: output of test program.txt
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Recently in our production, we found that Solr uses a lot of memory (more 
> than 10g) during recovery or commit for a small index (3.5gb).
> The stack trace is:
>  
> {code:java}
> Thread 0x4d4b115c0 
>   at org.apache.lucene.store.DataInput.readVInt()I (DataInput.java:125) 
>   at org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.loadBlock()V 
> (SegmentTermsEnumFrame.java:157) 
>   at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.scanToTermNonLeaf(Lorg/apache/lucene/util/BytesRef;Z)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (SegmentTermsEnumFrame.java:786) 
>   at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.scanToTerm(Lorg/apache/lucene/util/BytesRef;Z)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (SegmentTermsEnumFrame.java:538) 
>   at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnum.seekCeil(Lorg/apache/lucene/util/BytesRef;)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (SegmentTermsEnum.java:757) 
>   at 
> org.apache.lucene.index.FilterLeafReader$FilterTermsEnum.seekCeil(Lorg/apache/lucene/util/BytesRef;)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (FilterLeafReader.java:185) 
>   at 
> org.apache.lucene.index.TermsEnum.seekExact(Lorg/apache/lucene/util/BytesRef;)Z
>  (TermsEnum.java:74) 
>   at 
> org.apache.solr.search.SolrIndexSearcher.lookupId(Lorg/apache/lucene/util/BytesRef;)J
>  (SolrIndexSearcher.java:823) 
>   at 
> org.apache.solr.update.VersionInfo.getVersionFromIndex(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
>  (VersionInfo.java:204) 
>   at 
> org.apache.solr.update.UpdateLog.lookupVersion(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
>  (UpdateLog.java:786) 
>   at 
> org.apache.solr.update.VersionInfo.lookupVersion(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
>  (VersionInfo.java:194) 
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(Lorg/apache/solr/update/AddUpdateCommand;)Z
>  (DistributedUpdateProcessor.java:1051)  
> {code}
> We reproduced the problem locally with the following code using Lucene code.
> {code:java}
> public static void main(String[] args) throws IOException {
>   FSDirectory index = FSDirectory.open(Paths.get("the-index"));
>   try (IndexReader reader = new   
> ExitableDirectoryReader(DirectoryReader.open(index),
> new QueryTimeoutImpl(1000 * 60 * 5))) {
> String id = "the-id";
> BytesRef text = new BytesRef(id);
> for (LeafReaderContext lf : reader.leaves()) {
>   TermsEnum te = lf.reader().terms("id").iterator();
>   System.out.println(te.seekExact(text));
> }
>   }
> }
> {code}
>  
> I added System.out.println("ord: " + ord); in 
> codecs.blocktree.SegmentTermsEnum.getFrame(int).
> Please check the attached output of test program.txt. 
>  
> We found the root cause: the seekExact(BytesRef) method is not overridden in 
> FilterLeafReader.FilterTermsEnum, so it falls back to the base-class 
> TermsEnum.seekExact(BytesRef) implementation, which is very inefficient in 
> this case.
> {code:java}
> public boolean seekExact(BytesRef text) throws IOException {
>   return seekCeil(text) == SeekStatus.FOUND;
> }
> {code}
> The fix is simple: just override the seekExact(BytesRef) method in 
> FilterLeafReader.FilterTermsEnum:
> {code:java}
> @Override
> public boolean seekExact(BytesRef text) throws IOException {
>   return in.seekExact(text);
> }
> {code}
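The pattern behind the fix quoted above — a slow default implemented via 
seekCeil being replaced by delegation to the wrapped object's fast path — can 
be shown with a self-contained toy. The class names here are hypothetical 
stand-ins, not Lucene's TermsEnum hierarchy:

```java
import java.util.List;
import java.util.TreeSet;

// Toy version of the bug pattern: a filter class that inherits the slow
// seekCeil-based default seekExact never reaches the wrapped class's fast
// path; overriding seekExact to delegate restores it.
public class DelegateSketch {

    static abstract class BaseEnum {
        abstract String seekCeil(String term);   // smallest stored term >= input, or null
        boolean seekExact(String term) {         // slow default, like TermsEnum's
            return term.equals(seekCeil(term));
        }
    }

    static class Direct extends BaseEnum {
        final TreeSet<String> terms;
        Direct(TreeSet<String> terms) { this.terms = terms; }
        String seekCeil(String term) { return terms.ceiling(term); }
        @Override
        boolean seekExact(String term) { return terms.contains(term); } // fast path
    }

    static class Filter extends BaseEnum {
        final BaseEnum in;
        Filter(BaseEnum in) { this.in = in; }
        String seekCeil(String term) { return in.seekCeil(term); }
        // The one-line fix from the issue: delegate instead of inheriting the default.
        @Override
        boolean seekExact(String term) { return in.seekExact(term); }
    }

    public static void main(String[] args) {
        BaseEnum te = new Filter(new Direct(new TreeSet<>(List.of("alpha", "beta"))));
        System.out.println(te.seekExact("beta"));   // true, via the delegated fast path
        System.out.println(te.seekExact("gamma"));  // false
    }
}
```

Without the override in Filter, every seekExact would walk the seekCeil path 
of the wrapped object, which in Lucene's case loads and scans term blocks.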






[jira] [Commented] (LUCENE-8662) Change TermsEnum.seekExact(BytesRef) to abstract + delegate seekExact(BytesRef) in FilterLeafReader.FilterTermsEnum

2019-02-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16763954#comment-16763954
 ] 

ASF subversion and git services commented on LUCENE-8662:
-

Commit 970c74d1bbde6dfe3e3f6e01fccc644c570eda21 in lucene-solr's branch 
refs/heads/branch_8_0 from yyuan2
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=970c74d ]

LUCENE-8662: Change TermsEnum.seekExact(BytesRef) to abstract


> Change TermsEnum.seekExact(BytesRef) to abstract + delegate 
> seekExact(BytesRef) in FilterLeafReader.FilterTermsEnum
> ---
>
> Key: LUCENE-8662
> URL: https://issues.apache.org/jira/browse/LUCENE-8662
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: 5.5.5, 6.6.5, 7.6, 8.0
>Reporter: jefferyyuan
>Priority: Major
>  Labels: query
> Fix For: 8.0, 7.7
>
> Attachments: output of test program.txt
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Recently in our production, we found that Solr uses a lot of memory (more 
> than 10g) during recovery or commit for a small index (3.5gb).
> The stack trace is:
>  
> {code:java}
> Thread 0x4d4b115c0 
>   at org.apache.lucene.store.DataInput.readVInt()I (DataInput.java:125) 
>   at org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.loadBlock()V 
> (SegmentTermsEnumFrame.java:157) 
>   at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.scanToTermNonLeaf(Lorg/apache/lucene/util/BytesRef;Z)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (SegmentTermsEnumFrame.java:786) 
>   at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.scanToTerm(Lorg/apache/lucene/util/BytesRef;Z)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (SegmentTermsEnumFrame.java:538) 
>   at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnum.seekCeil(Lorg/apache/lucene/util/BytesRef;)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (SegmentTermsEnum.java:757) 
>   at 
> org.apache.lucene.index.FilterLeafReader$FilterTermsEnum.seekCeil(Lorg/apache/lucene/util/BytesRef;)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (FilterLeafReader.java:185) 
>   at 
> org.apache.lucene.index.TermsEnum.seekExact(Lorg/apache/lucene/util/BytesRef;)Z
>  (TermsEnum.java:74) 
>   at 
> org.apache.solr.search.SolrIndexSearcher.lookupId(Lorg/apache/lucene/util/BytesRef;)J
>  (SolrIndexSearcher.java:823) 
>   at 
> org.apache.solr.update.VersionInfo.getVersionFromIndex(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
>  (VersionInfo.java:204) 
>   at 
> org.apache.solr.update.UpdateLog.lookupVersion(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
>  (UpdateLog.java:786) 
>   at 
> org.apache.solr.update.VersionInfo.lookupVersion(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
>  (VersionInfo.java:194) 
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(Lorg/apache/solr/update/AddUpdateCommand;)Z
>  (DistributedUpdateProcessor.java:1051)  
> {code}
> We reproduced the problem locally with the following code using Lucene code.
> {code:java}
> public static void main(String[] args) throws IOException {
>   FSDirectory index = FSDirectory.open(Paths.get("the-index"));
>   try (IndexReader reader = new ExitableDirectoryReader(DirectoryReader.open(index),
>       new QueryTimeoutImpl(1000 * 60 * 5))) {
>     String id = "the-id";
>     BytesRef text = new BytesRef(id);
>     for (LeafReaderContext lf : reader.leaves()) {
>       TermsEnum te = lf.reader().terms("id").iterator();
>       System.out.println(te.seekExact(text));
>     }
>   }
> }
> {code}
>  
> I added System.out.println("ord: " + ord); in 
> codecs.blocktree.SegmentTermsEnum.getFrame(int).
> Please check the attached output of test program.txt. 
>  
> We found the root cause:
> we didn't override the seekExact(BytesRef) method in
> FilterLeafReader.FilterTermsEnum, so it falls back to the base-class
> TermsEnum.seekExact(BytesRef) implementation, which is very inefficient in
> this case.
> {code:java}
> public boolean seekExact(BytesRef text) throws IOException {
>   return seekCeil(text) == SeekStatus.FOUND;
> }
> {code}
> The fix is simple: just override the seekExact(BytesRef) method in
> FilterLeafReader.FilterTermsEnum:
> {code:java}
> @Override
> public boolean seekExact(BytesRef text) throws IOException {
>   return in.seekExact(text);
> }
> {code}
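The delegation problem described in the issue can be sketched outside of Lucene with a minimal self-contained model. The class and method names below (`SimpleTermsEnum`, `DirectTermsEnum`, `FilteringTermsEnum`) are simplified stand-ins invented for illustration, not the real Lucene API: a base enum whose default `seekExact` is built on `seekCeil`, an implementation with an optimized exact lookup, and a filter that only reaches that fast path once it delegates `seekExact` to the wrapped enum.

```java
import java.util.Objects;

// Simplified stand-ins for TermsEnum / FilterTermsEnum; not the real Lucene API.
abstract class SimpleTermsEnum {
  enum SeekStatus { FOUND, NOT_FOUND }

  // General-purpose positioning (in Lucene this may scan and load term blocks).
  abstract SeekStatus seekCeil(String term);

  // Default seekExact falls back to seekCeil -- this models the slow path the issue hit.
  boolean seekExact(String term) {
    return seekCeil(term) == SeekStatus.FOUND;
  }
}

class DirectTermsEnum extends SimpleTermsEnum {
  private final java.util.NavigableSet<String> terms;
  int ceilCalls = 0;   // instrumentation: counts slow-path invocations

  DirectTermsEnum(java.util.NavigableSet<String> terms) { this.terms = terms; }

  @Override SeekStatus seekCeil(String term) {
    ceilCalls++;
    String ceil = terms.ceiling(term);
    return Objects.equals(ceil, term) ? SeekStatus.FOUND : SeekStatus.NOT_FOUND;
  }

  // An optimized exact lookup that avoids the general scan.
  @Override boolean seekExact(String term) {
    return terms.contains(term);
  }
}

// Without overriding seekExact here, the inherited default would route through
// seekCeil, and the wrapped enum's optimized seekExact would never be reached.
class FilteringTermsEnum extends SimpleTermsEnum {
  final SimpleTermsEnum in;
  FilteringTermsEnum(SimpleTermsEnum in) { this.in = in; }

  @Override SeekStatus seekCeil(String term) { return in.seekCeil(term); }

  // The fix from the issue, transposed to this model: delegate directly.
  @Override boolean seekExact(String term) { return in.seekExact(term); }
}

public class SeekExactDemo {
  public static void main(String[] args) {
    DirectTermsEnum direct =
        new DirectTermsEnum(new java.util.TreeSet<>(java.util.List.of("a", "b", "c")));
    FilteringTermsEnum filtered = new FilteringTermsEnum(direct);
    System.out.println(filtered.seekExact("b"));   // true, via the delegated fast path
    System.out.println(direct.ceilCalls);          // 0: seekCeil was never invoked
  }
}
```

Removing the `seekExact` override in `FilteringTermsEnum` makes `ceilCalls` go to 1 for the same lookup, which is the shape of the regression the stack trace shows at larger scale.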



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8662) Change TermsEnum.seekExact(BytesRef) to abstract + delegate seekExact(BytesRef) in FilterLeafReader.FilterTermsEnum

2019-02-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16763953#comment-16763953
 ] 

ASF subversion and git services commented on LUCENE-8662:
-

Commit d60b1e4ee0b2ddf45277523bd60731621b82a211 in lucene-solr's branch 
refs/heads/branch_8x from yyuan2
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=d60b1e4e ]

LUCENE-8662: Change TermsEnum.seekExact(BytesRef) to abstract


> Change TermsEnum.seekExact(BytesRef) to abstract + delegate 
> seekExact(BytesRef) in FilterLeafReader.FilterTermsEnum
> ---
>
> Key: LUCENE-8662
> URL: https://issues.apache.org/jira/browse/LUCENE-8662
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: 5.5.5, 6.6.5, 7.6, 8.0
>Reporter: jefferyyuan
>Priority: Major
>  Labels: query
> Fix For: 8.0, 7.7
>
> Attachments: output of test program.txt
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Recently in our production, we found that Solr uses a lot of memory (more than 
> 10 GB) during recovery or commit for a small index (3.5 GB).
>  The stack trace is:
>  
> {code:java}
> Thread 0x4d4b115c0 
>   at org.apache.lucene.store.DataInput.readVInt()I (DataInput.java:125) 
>   at org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.loadBlock()V 
> (SegmentTermsEnumFrame.java:157) 
>   at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.scanToTermNonLeaf(Lorg/apache/lucene/util/BytesRef;Z)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (SegmentTermsEnumFrame.java:786) 
>   at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.scanToTerm(Lorg/apache/lucene/util/BytesRef;Z)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (SegmentTermsEnumFrame.java:538) 
>   at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnum.seekCeil(Lorg/apache/lucene/util/BytesRef;)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (SegmentTermsEnum.java:757) 
>   at 
> org.apache.lucene.index.FilterLeafReader$FilterTermsEnum.seekCeil(Lorg/apache/lucene/util/BytesRef;)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (FilterLeafReader.java:185) 
>   at 
> org.apache.lucene.index.TermsEnum.seekExact(Lorg/apache/lucene/util/BytesRef;)Z
>  (TermsEnum.java:74) 
>   at 
> org.apache.solr.search.SolrIndexSearcher.lookupId(Lorg/apache/lucene/util/BytesRef;)J
>  (SolrIndexSearcher.java:823) 
>   at 
> org.apache.solr.update.VersionInfo.getVersionFromIndex(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
>  (VersionInfo.java:204) 
>   at 
> org.apache.solr.update.UpdateLog.lookupVersion(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
>  (UpdateLog.java:786) 
>   at 
> org.apache.solr.update.VersionInfo.lookupVersion(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
>  (VersionInfo.java:194) 
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(Lorg/apache/solr/update/AddUpdateCommand;)Z
>  (DistributedUpdateProcessor.java:1051)  
> {code}
> We reproduced the problem locally with the following code using the Lucene API.
> {code:java}
> public static void main(String[] args) throws IOException {
>   FSDirectory index = FSDirectory.open(Paths.get("the-index"));
>   try (IndexReader reader = new ExitableDirectoryReader(DirectoryReader.open(index),
>       new QueryTimeoutImpl(1000 * 60 * 5))) {
>     String id = "the-id";
>     BytesRef text = new BytesRef(id);
>     for (LeafReaderContext lf : reader.leaves()) {
>       TermsEnum te = lf.reader().terms("id").iterator();
>       System.out.println(te.seekExact(text));
>     }
>   }
> }
> {code}
>  
> I added System.out.println("ord: " + ord); in 
> codecs.blocktree.SegmentTermsEnum.getFrame(int).
> Please check the attached output of test program.txt. 
>  
> We found the root cause:
> we didn't override the seekExact(BytesRef) method in
> FilterLeafReader.FilterTermsEnum, so it falls back to the base-class
> TermsEnum.seekExact(BytesRef) implementation, which is very inefficient in
> this case.
> {code:java}
> public boolean seekExact(BytesRef text) throws IOException {
>   return seekCeil(text) == SeekStatus.FOUND;
> }
> {code}
> The fix is simple: just override the seekExact(BytesRef) method in
> FilterLeafReader.FilterTermsEnum:
> {code:java}
> @Override
> public boolean seekExact(BytesRef text) throws IOException {
>   return in.seekExact(text);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-13236) numerous problems with LIROnShardRestartTest

2019-02-08 Thread Hoss Man (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-13236.
-
Resolution: Won't Fix

> numerous problems with LIROnShardRestartTest
> 
>
> Key: SOLR-13236
> URL: https://issues.apache.org/jira/browse/SOLR-13236
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> LIROnShardRestartTest is a frequent cause of jenkins failures -- but only on 
> the 7x jenkins jobs, because it was removed from master/8x as part of 
> SOLR-11812 since the underlying implementation being tested was deprecated 
> and removed in 8x.
> I spent some time looking into trying to fix this test, but the amount of 
> work it appears it would take to fix doesn't seem worth the effort given its 
> deprecated status.  So I'm filing this issue purely for tracking purposes 
> with the plan to disable the test and resolve this JIRA as "Won't Fix" -- if 
> anyone else is interested in working on it they can feel free to re-open.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8662) Change TermsEnum.seekExact(BytesRef) to abstract + delegate seekExact(BytesRef) in FilterLeafReader.FilterTermsEnum

2019-02-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16763950#comment-16763950
 ] 

ASF subversion and git services commented on LUCENE-8662:
-

Commit a3a4ecd80b062d7567f4092fd43feb3e3f521333 in lucene-solr's branch 
refs/heads/master from yyuan2
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=a3a4ecd ]

LUCENE-8662: Change TermsEnum.seekExact(BytesRef) to abstract


> Change TermsEnum.seekExact(BytesRef) to abstract + delegate 
> seekExact(BytesRef) in FilterLeafReader.FilterTermsEnum
> ---
>
> Key: LUCENE-8662
> URL: https://issues.apache.org/jira/browse/LUCENE-8662
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: 5.5.5, 6.6.5, 7.6, 8.0
>Reporter: jefferyyuan
>Priority: Major
>  Labels: query
> Fix For: 8.0, 7.7
>
> Attachments: output of test program.txt
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Recently in our production, we found that Solr uses a lot of memory (more than 
> 10 GB) during recovery or commit for a small index (3.5 GB).
>  The stack trace is:
>  
> {code:java}
> Thread 0x4d4b115c0 
>   at org.apache.lucene.store.DataInput.readVInt()I (DataInput.java:125) 
>   at org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.loadBlock()V 
> (SegmentTermsEnumFrame.java:157) 
>   at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.scanToTermNonLeaf(Lorg/apache/lucene/util/BytesRef;Z)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (SegmentTermsEnumFrame.java:786) 
>   at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.scanToTerm(Lorg/apache/lucene/util/BytesRef;Z)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (SegmentTermsEnumFrame.java:538) 
>   at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnum.seekCeil(Lorg/apache/lucene/util/BytesRef;)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (SegmentTermsEnum.java:757) 
>   at 
> org.apache.lucene.index.FilterLeafReader$FilterTermsEnum.seekCeil(Lorg/apache/lucene/util/BytesRef;)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (FilterLeafReader.java:185) 
>   at 
> org.apache.lucene.index.TermsEnum.seekExact(Lorg/apache/lucene/util/BytesRef;)Z
>  (TermsEnum.java:74) 
>   at 
> org.apache.solr.search.SolrIndexSearcher.lookupId(Lorg/apache/lucene/util/BytesRef;)J
>  (SolrIndexSearcher.java:823) 
>   at 
> org.apache.solr.update.VersionInfo.getVersionFromIndex(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
>  (VersionInfo.java:204) 
>   at 
> org.apache.solr.update.UpdateLog.lookupVersion(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
>  (UpdateLog.java:786) 
>   at 
> org.apache.solr.update.VersionInfo.lookupVersion(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
>  (VersionInfo.java:194) 
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(Lorg/apache/solr/update/AddUpdateCommand;)Z
>  (DistributedUpdateProcessor.java:1051)  
> {code}
> We reproduced the problem locally with the following code using the Lucene API.
> {code:java}
> public static void main(String[] args) throws IOException {
>   FSDirectory index = FSDirectory.open(Paths.get("the-index"));
>   try (IndexReader reader = new ExitableDirectoryReader(DirectoryReader.open(index),
>       new QueryTimeoutImpl(1000 * 60 * 5))) {
>     String id = "the-id";
>     BytesRef text = new BytesRef(id);
>     for (LeafReaderContext lf : reader.leaves()) {
>       TermsEnum te = lf.reader().terms("id").iterator();
>       System.out.println(te.seekExact(text));
>     }
>   }
> }
> {code}
>  
> I added System.out.println("ord: " + ord); in 
> codecs.blocktree.SegmentTermsEnum.getFrame(int).
> Please check the attached output of test program.txt. 
>  
> We found the root cause:
> we didn't override the seekExact(BytesRef) method in
> FilterLeafReader.FilterTermsEnum, so it falls back to the base-class
> TermsEnum.seekExact(BytesRef) implementation, which is very inefficient in
> this case.
> {code:java}
> public boolean seekExact(BytesRef text) throws IOException {
>   return seekCeil(text) == SeekStatus.FOUND;
> }
> {code}
> The fix is simple: just override the seekExact(BytesRef) method in
> FilterLeafReader.FilterTermsEnum:
> {code:java}
> @Override
> public boolean seekExact(BytesRef text) throws IOException {
>   return in.seekExact(text);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] tflobbe merged pull request #551: LUCENE-8662: Change TermsEnum.seekExact(BytesRef) to abstract + delegate seekExact(BytesRef) in FilterLeafReader.FilterTermsEnum

2019-02-08 Thread GitBox
tflobbe merged pull request #551: LUCENE-8662: Change 
TermsEnum.seekExact(BytesRef) to abstract + delegate seekExact(BytesRef) in 
FilterLeafReader.FilterTermsEnum
URL: https://github.com/apache/lucene-solr/pull/551
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13236) numerous problems with LIROnShardRestartTest

2019-02-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16763947#comment-16763947
 ] 

ASF subversion and git services commented on SOLR-13236:


Commit 0bad38439d0e64aaf80353eaa54282c9a0879718 in lucene-solr's branch 
refs/heads/branch_7_7 from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=0bad384 ]

SOLR-13236: AwaitsFix problematic (and deprecated) LIROnShardRestartTest

(cherry picked from commit 2714bb31066c4c66b1ab19dc7e74fa9ec3508f76)


> numerous problems with LIROnShardRestartTest
> 
>
> Key: SOLR-13236
> URL: https://issues.apache.org/jira/browse/SOLR-13236
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> LIROnShardRestartTest is a frequent cause of jenkins failures -- but only on 
> the 7x jenkins jobs, because it was removed from master/8x as part of 
> SOLR-11812 since the underlying implementation being tested was deprecated 
> and removed in 8x.
> I spent some time looking into trying to fix this test, but the amount of 
> work it appears it would take to fix doesn't seem worth the effort given its 
> deprecated status.  So I'm filing this issue purely for tracking purposes 
> with the plan to disable the test and resolve this JIRA as "Won't Fix" -- if 
> anyone else is interested in working on it they can feel free to re-open.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13236) numerous problems with LIROnShardRestartTest

2019-02-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16763948#comment-16763948
 ] 

ASF subversion and git services commented on SOLR-13236:


Commit 2714bb31066c4c66b1ab19dc7e74fa9ec3508f76 in lucene-solr's branch 
refs/heads/branch_7x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=2714bb3 ]

SOLR-13236: AwaitsFix problematic (and deprecated) LIROnShardRestartTest


> numerous problems with LIROnShardRestartTest
> 
>
> Key: SOLR-13236
> URL: https://issues.apache.org/jira/browse/SOLR-13236
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> LIROnShardRestartTest is a frequent cause of jenkins failures -- but only on 
> the 7x jenkins jobs, because it was removed from master/8x as part of 
> SOLR-11812 since the underlying implementation being tested was deprecated 
> and removed in 8x.
> I spent some time looking into trying to fix this test, but the amount of 
> work it appears it would take to fix doesn't seem worth the effort given its 
> deprecated status.  So I'm filing this issue purely for tracking purposes 
> with the plan to disable the test and resolve this JIRA as "Won't Fix" -- if 
> anyone else is interested in working on it they can feel free to re-open.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13236) numerous problems with LIROnShardRestartTest

2019-02-08 Thread Hoss Man (JIRA)
Hoss Man created SOLR-13236:
---

 Summary: numerous problems with LIROnShardRestartTest
 Key: SOLR-13236
 URL: https://issues.apache.org/jira/browse/SOLR-13236
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man


LIROnShardRestartTest is a frequent cause of jenkins failures -- but only on 
the 7x jenkins jobs, because it was removed from master/8x as part of 
SOLR-11812 since the underlying implementation being tested was deprecated and 
removed in 8x.

I spent some time looking into trying to fix this test, but the amount of work 
it appears it would take to fix doesn't seem worth the effort given its 
deprecated status.  So I'm filing this issue purely for tracking purposes with 
the plan to disable the test and resolve this JIRA as "Won't Fix" -- if anyone 
else is interested in working on it they can feel free to re-open.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13236) numerous problems with LIROnShardRestartTest

2019-02-08 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16763937#comment-16763937
 ] 

Hoss Man commented on SOLR-13236:
-

Examples of some of the types of failures i've observed in jenkins logs...




This error occurs inside of a catch block while trying to log some info about 
the state of the election when the Error/Exception happened.  The original 
exception is completely lost in the logs because of this 
IllegalArgumentException, which arises from calling zkClient().getChildren() on 
the hardcoded string 
{{"/collections/allReplicasInLIR/leader_elect/shard1/election/"}} -- which, as 
the error indicates, is completely illegal, and suggests that this code path 
was never sanity-checked when the test was written.

{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=LIROnShardRestartTest -Dtests.method=testAllReplicasInLIR 
-Dtests.seed=10B31070AB4A4496 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=sv-SE -Dtests.timezone=Africa/Lusaka -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] ERROR144s J2 | LIROnShardRestartTest.testAllReplicasInLIR <<<
   [junit4]> Throwable #1: java.lang.IllegalArgumentException: Path must 
not end with / character
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([10B31070AB4A4496:4A2B2AB6D5CA2371]:0)
   [junit4]>at 
org.apache.zookeeper.common.PathUtils.validatePath(PathUtils.java:58)
   [junit4]>at 
org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1523)
   [junit4]>at 
org.apache.solr.common.cloud.SolrZkClient.lambda$getChildren$4(SolrZkClient.java:346)
   [junit4]>at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:71)
   [junit4]>at 
org.apache.solr.common.cloud.SolrZkClient.getChildren(SolrZkClient.java:346)
   [junit4]>at 
org.apache.solr.cloud.LIROnShardRestartTest.testAllReplicasInLIR(LIROnShardRestartTest.java:168)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
 {noformat}
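The trailing-slash problem in that stack trace is easy to guard against before calling into ZooKeeper. Below is a minimal sketch of the kind of normalization the test code could apply; the `normalize` helper is hypothetical and not part of SolrZkClient or ZooKeeper's API:

```java
public class ZkPathNormalize {
  // Strip any trailing '/' characters (except for the root path "/") so that
  // ZooKeeper's path validation does not reject the path with
  // "Path must not end with / character".
  static String normalize(String path) {
    String p = path;
    while (p.length() > 1 && p.endsWith("/")) {
      p = p.substring(0, p.length() - 1);
    }
    return p;
  }

  public static void main(String[] args) {
    // The hardcoded path from the test, with its illegal trailing slash removed.
    System.out.println(
        normalize("/collections/allReplicasInLIR/leader_elect/shard1/election/"));
  }
}
```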

This is a failure in the last line of the test, after all assertions have 
passed, to delete the collection -- I believe because the check that "waits 
for replicas to rejoin the election" doesn't first wait to see all the nodes 
disconnect from jetty and be marked "down" -- so the election may not have 
even happened yet by the time the test finishes; it may just be getting to the 
point where all the Solr nodes are marked "down" when it tries to clean up...

{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=LIROnShardRestartTest -Dtests.method=testAllReplicasInLIR 
-Dtests.seed=10B31070AB4A4496 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=sv-SE -Dtests.timezone=Africa/Lusaka -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] ERROR   94.6s J1 | LIROnShardRestartTest.testAllReplicasInLIR <<<
   [junit4]> Throwable #1: 
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([10B31070AB4A4496:4A2B2AB6D5CA2371]:0)
   [junit4]>at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:461)
   [junit4]>at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1110)
   [junit4]>at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
   [junit4]>at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
   [junit4]>at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
   [junit4]>at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
   [junit4]>at 
org.apache.solr.cloud.LIROnShardRestartTest.testAllReplicasInLIR(LIROnShardRestartTest.java:175)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
{noformat}


This is a (similar) failure in the first line of another test method to create 
the collection it wants to use, which can happen if the former test fails (or 
passes) and the next test method is started before all the nodes have a chance 
to re-connect to zk...

{noformat}
  [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=LIROnShardRestartTest -Dtests.method=testSeveralReplicasInLIR 
-Dtests.seed=10B31070AB4A4496 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.badapples=true 

[jira] [Commented] (LUCENE-8638) Remove deprecated code in master

2019-02-08 Thread Nikolay Khitrin (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16763915#comment-16763915
 ] 

Nikolay Khitrin commented on LUCENE-8638:
-

[~romseygeek], I've tried to remove JavaCC generated deprecations by ant 
regexps as a part of backporting current code changes to ant javacc tasks 
(LUCENE-8684 for deprecations, LUCENE-8683 for backports).

It isn't the best way to do it, but there are already a lot of ant-based 
replacements in generated code present in our build files.
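For illustration, the kind of textual rewrite such build-time replacement tasks perform can be sketched in plain Java regex form. The pattern below is a simplified assumption for this sketch, not the actual pattern used in the lucene-solr build files, which may target specific generated members rather than every annotation:

```java
import java.util.regex.Pattern;

public class StripGeneratedDeprecations {
  // Remove standalone @Deprecated annotation lines that a code generator
  // like JavaCC might emit into generated parser sources.
  static final Pattern DEPRECATED =
      Pattern.compile("^\\s*@Deprecated\\s*\\n", Pattern.MULTILINE);

  static String strip(String source) {
    return DEPRECATED.matcher(source).replaceAll("");
  }

  public static void main(String[] args) {
    // Hypothetical generated source fragment for demonstration.
    String generated = "class Parser {\n  @Deprecated\n  void jj_old() {}\n}\n";
    System.out.println(strip(generated));
  }
}
```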

> Remove deprecated code in master
> 
>
> Key: LUCENE-8638
> URL: https://issues.apache.org/jira/browse/LUCENE-8638
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: master (9.0)
>
>
> There are a number of deprecations in master that should be removed. This 
> issue is to keep track of deprecations as a whole, some individual 
> deprecations may require their own issues.
>  
> Work on this issue should be pushed to the `master-deprecations` branch on 
> gitbox



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13235) Split Ref Guide Collections API page into several sub-pages

2019-02-08 Thread Cassandra Targett (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-13235:
-
Description: 
The Collections API page in the Ref Guide has become the de-facto place where 
information about how to work with Solr collections is stored, but it is so 
huge with API examples that information gets lost.

I did some work a couple months ago to split this up, and came up with this 
approach to splitting up the content:

* *Cluster and Node Management*: Define properties for the entire cluster; 
check the status of a cluster; remove replicas from a node; utilize a newly 
added node; add or remove roles for a node.
* *Collection Management*: Create, list, reload and delete collections; set 
collection properties; migrate documents to another collection; rebalance 
leaders; backup and restore collections.
* *Collection Aliasing*: Create, list or delete collection aliases; set alias 
properties.
* *Shard Management*: Create and delete a shard; split a shard into two or more 
additional shards; force a shard leader.
* *Replica Management*: Add or delete a replica; set replica properties; move a 
replica to a different node.

My existing local WIP leaves info on Async commands on the main 
collections-api.adoc page, but creates new pages for each of the bullets 
mentioned above, and moves the related API calls to those pages. Each topic 
will be smaller and easier for us to manage on an ongoing basis.

Since I did the work a while ago, I need to bring it up to date with master, so 
a patch & a branch with this work will be forthcoming shortly.

  was:
The Collections API page has become the de-facto place where information about 
how to work with Solr collections is stored, but it is so huge with API 
examples that information gets lost.

I did some work a couple months ago to split this up, and came up with this 
approach to splitting up the content:

* *Cluster and Node Management*: Define properties for the entire cluster; 
check the status of a cluster; remove replicas from a node; utilize a newly 
added node; add or remove roles for a node.
* *Collection Management*: Create, list, reload and delete collections; set 
collection properties; migrate documents to another collection; rebalance 
leaders; backup and restore collections.
* *Collection Aliasing*: Create, list or delete collection aliases; set alias 
properties.
* *Shard Management*: Create and delete a shard; split a shard into two or more 
additional shards; force a shard leader.
* *Replica Management*: Add or delete a replica; set replica properties; move a 
replica to a different node.

My existing local WIP leaves info on Async commands on the main 
collections-api.adoc page, but creates new pages for each of the bullets 
mentioned above, and moves the related API calls to those pages. Each topic 
will be smaller and easier for us to manage on an ongoing basis.

Since I did the work a while ago, I need to bring it up to date with master, so 
a patch & a branch with this work will be forthcoming shortly.


> Split Ref Guide Collections API page into several sub-pages
> ---
>
> Key: SOLR-13235
> URL: https://issues.apache.org/jira/browse/SOLR-13235
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Major
> Fix For: 8.0, master (9.0)
>
>
> The Collections API page in the Ref Guide has become the de-facto place where 
> information about how to work with Solr collections is stored, but it is so 
> huge with API examples that information gets lost.
> I did some work a couple months ago to split this up, and came up with this 
> approach to splitting up the content:
> * *Cluster and Node Management*: Define properties for the entire cluster; 
> check the status of a cluster; remove replicas from a node; utilize a newly 
> added node; add or remove roles for a node.
> * *Collection Management*: Create, list, reload and delete collections; set 
> collection properties; migrate documents to another collection; rebalance 
> leaders; backup and restore collections.
> * *Collection Aliasing*: Create, list or delete collection aliases; set alias 
> properties.
> * *Shard Management*: Create and delete a shard; split a shard into two or 
> more additional shards; force a shard leader.
> * *Replica Management*: Add or delete a replica; set replica properties; move 
> a replica to a different node.
> My existing local WIP leaves info on Async commands on the main 
> collections-api.adoc page, but creates new pages for each of the bullets 
> mentioned above, and moves the related API calls to those pages. Each topic 
> 

[jira] [Created] (SOLR-13235) Split Ref Guide Collections API page into several sub-pages

2019-02-08 Thread Cassandra Targett (JIRA)
Cassandra Targett created SOLR-13235:


 Summary: Split Ref Guide Collections API page into several 
sub-pages
 Key: SOLR-13235
 URL: https://issues.apache.org/jira/browse/SOLR-13235
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: documentation
Reporter: Cassandra Targett
Assignee: Cassandra Targett
 Fix For: 8.0, master (9.0)


The Collections API page has become the de-facto place where information about 
how to work with Solr collections is stored, but it is so huge with API 
examples that information gets lost.

I did some work a couple months ago to split this up, and came up with this 
approach to splitting up the content:

* *Cluster and Node Management*: Define properties for the entire cluster; 
check the status of a cluster; remove replicas from a node; utilize a newly 
added node; add or remove roles for a node.
* *Collection Management*: Create, list, reload and delete collections; set 
collection properties; migrate documents to another collection; rebalance 
leaders; backup and restore collections.
* *Collection Aliasing*: Create, list or delete collection aliases; set alias 
properties.
* *Shard Management*: Create and delete a shard; split a shard into two or more 
additional shards; force a shard leader.
* *Replica Management*: Add or delete a replica; set replica properties; move a 
replica to a different node.

My existing local WIP leaves info on Async commands on the main 
collections-api.adoc page, but creates new pages for each of the bullets 
mentioned above, and moves the related API calls to those pages. Each topic 
will be smaller and easier for us to manage on an ongoing basis.

Since I did the work a while ago, I need to bring it up to date with master, so 
a patch & a branch with this work will be forthcoming shortly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Some questions about index hard commit and intellij dev setup

2019-02-08 Thread Huaxiang Sun
Hi Developers,

I am a newbie to the solr/lucene project and have some questions about
index hard commits. Excuse me if these have been asked before.

1. When a hard commit happens, will it drain the entries in the index
queue?

2. How exactly are index files written? I.e., will they be written to a tmp
dir and moved to the index dir when complete, or are they written to
the index dir directly? In the latter case, if one is reading the index dir,
it could read incomplete index files.

   3. A similar question about index merges: will the merge process create the
merged file in a tmp dir and move it to the index dir after the merge
completes? When are the merged-away files deleted? Will they be moved to some
archive dir and cleaned up later, or deleted right after the merge?

   The final question is about IntelliJ setup for the lucene/solr project. I
followed the steps in the doc, and it seems that code browsing/building does
not work well for me. Just want to check that those are the steps I need to
follow.

   Thanks

Huaxiang Sun


[jira] [Commented] (LUCENE-8662) Change TermsEnum.seekExact(BytesRef) to abstract + delegate seekExact(BytesRef) in FilterLeafReader.FilterTermsEnum

2019-02-08 Thread JIRA


[ 
https://issues.apache.org/jira/browse/LUCENE-8662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16763841#comment-16763841
 ] 

Tomás Fernández Löbbe commented on LUCENE-8662:
---

I'll merge today in the interest of getting some more Jenkins time, given that 
[~romseygeek] plans to start the RC early next week. [~dsmiley], since this is 
ready to go, I'll leave the discussion over `termState` to continue in 
LUCENE-8292, no reason to hold this one.

> Change TermsEnum.seekExact(BytesRef) to abstract + delegate 
> seekExact(BytesRef) in FilterLeafReader.FilterTermsEnum
> ---
>
> Key: LUCENE-8662
> URL: https://issues.apache.org/jira/browse/LUCENE-8662
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: 5.5.5, 6.6.5, 7.6, 8.0
>Reporter: jefferyyuan
>Priority: Major
>  Labels: query
> Fix For: 8.0, 7.7
>
> Attachments: output of test program.txt
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Recently in our production, we found that Solr uses a lot of memory (more 
> than 10 GB) during recovery or commit for a small index (3.5 GB).
>  The stack trace is:
>  
> {code:java}
> Thread 0x4d4b115c0 
>   at org.apache.lucene.store.DataInput.readVInt()I (DataInput.java:125) 
>   at org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.loadBlock()V 
> (SegmentTermsEnumFrame.java:157) 
>   at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.scanToTermNonLeaf(Lorg/apache/lucene/util/BytesRef;Z)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (SegmentTermsEnumFrame.java:786) 
>   at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.scanToTerm(Lorg/apache/lucene/util/BytesRef;Z)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (SegmentTermsEnumFrame.java:538) 
>   at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnum.seekCeil(Lorg/apache/lucene/util/BytesRef;)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (SegmentTermsEnum.java:757) 
>   at 
> org.apache.lucene.index.FilterLeafReader$FilterTermsEnum.seekCeil(Lorg/apache/lucene/util/BytesRef;)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (FilterLeafReader.java:185) 
>   at 
> org.apache.lucene.index.TermsEnum.seekExact(Lorg/apache/lucene/util/BytesRef;)Z
>  (TermsEnum.java:74) 
>   at 
> org.apache.solr.search.SolrIndexSearcher.lookupId(Lorg/apache/lucene/util/BytesRef;)J
>  (SolrIndexSearcher.java:823) 
>   at 
> org.apache.solr.update.VersionInfo.getVersionFromIndex(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
>  (VersionInfo.java:204) 
>   at 
> org.apache.solr.update.UpdateLog.lookupVersion(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
>  (UpdateLog.java:786) 
>   at 
> org.apache.solr.update.VersionInfo.lookupVersion(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
>  (VersionInfo.java:194) 
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(Lorg/apache/solr/update/AddUpdateCommand;)Z
>  (DistributedUpdateProcessor.java:1051)  
> {code}
> We reproduced the problem locally with the following code against the Lucene API.
> {code:java}
> public static void main(String[] args) throws IOException {
>   FSDirectory index = FSDirectory.open(Paths.get("the-index"));
>   try (IndexReader reader = new ExitableDirectoryReader(
>       DirectoryReader.open(index), new QueryTimeoutImpl(1000 * 60 * 5))) {
> String id = "the-id";
> BytesRef text = new BytesRef(id);
> for (LeafReaderContext lf : reader.leaves()) {
>   TermsEnum te = lf.reader().terms("id").iterator();
>   System.out.println(te.seekExact(text));
> }
>   }
> }
> {code}
>  
> I added System.out.println("ord: " + ord); in 
> codecs.blocktree.SegmentTermsEnum.getFrame(int).
> Please check the attached output of test program.txt. 
>  
> We found the root cause: we didn't override the seekExact(BytesRef) method 
> in FilterLeafReader.FilterTermsEnum, so it falls back to the base-class 
> TermsEnum.seekExact(BytesRef) implementation, which is very inefficient in 
> this case.
> {code:java}
> public boolean seekExact(BytesRef text) throws IOException {
>   return seekCeil(text) == SeekStatus.FOUND;
> }
> {code}
> The fix is simple: just override the seekExact(BytesRef) method in 
> FilterLeafReader.FilterTermsEnum to delegate:
> {code:java}
> @Override
> public boolean seekExact(BytesRef text) throws IOException {
>   return in.seekExact(text);
> }
> {code}
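The cost difference between the base-class fallback and a delegating override can be sketched with a small toy model. This is plain Java, not Lucene's actual classes; `ToyTermsEnum`, `SortedTermsEnum`, and `ForgetfulFilter` are invented names for illustration. A filter that forgets to override seekExact routes every lookup through the seekCeil fallback and bypasses the wrapped enum's fast path:

```java
import java.util.List;

// Toy model of the seekExact/seekCeil contract discussed above. None of these
// classes are Lucene's; they only illustrate why the base-class fallback
// (seekExact implemented via seekCeil) can be much more expensive than
// delegating to a wrapped enum that has an optimized seekExact.
interface ToyTermsEnum {
    enum SeekStatus { FOUND, NOT_FOUND, END }

    SeekStatus seekCeil(String text);

    // Mirrors the TermsEnum default: correct, but always pays seekCeil's cost.
    default boolean seekExact(String text) {
        return seekCeil(text) == SeekStatus.FOUND;
    }
}

class SortedTermsEnum implements ToyTermsEnum {
    private final List<String> terms; // sorted ascending
    int seekCeilCalls = 0;            // instrumentation for the demo

    SortedTermsEnum(List<String> terms) { this.terms = terms; }

    @Override
    public SeekStatus seekCeil(String text) {
        seekCeilCalls++; // a linear scan stands in for block-loading work
        for (String t : terms) {
            int cmp = t.compareTo(text);
            if (cmp == 0) return SeekStatus.FOUND;
            if (cmp > 0) return SeekStatus.NOT_FOUND;
        }
        return SeekStatus.END;
    }

    @Override
    public boolean seekExact(String text) {
        return terms.contains(text); // cheap exact-match fast path
    }
}

// A filter that delegates seekCeil but forgets seekExact: every seekExact
// call falls through to the interface default and skips the fast path.
class ForgetfulFilter implements ToyTermsEnum {
    final SortedTermsEnum in;
    ForgetfulFilter(SortedTermsEnum in) { this.in = in; }
    @Override public SeekStatus seekCeil(String text) { return in.seekCeil(text); }
}

public class SeekExactDemo {
    public static void main(String[] args) {
        SortedTermsEnum inner = new SortedTermsEnum(List.of("a", "b", "c"));
        ToyTermsEnum filtered = new ForgetfulFilter(inner);
        System.out.println(filtered.seekExact("b")); // true, but via seekCeil
        System.out.println(inner.seekCeilCalls);     // 1: fast path bypassed
    }
}
```

Adding `@Override public boolean seekExact(String text) { return in.seekExact(text); }` to the filter is the analogue of the one-line fix quoted above.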






[JENKINS] Lucene-Solr-7.7-Linux (64bit/jdk-11) - Build # 151 - Unstable!

2019-02-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.7-Linux/151/
Java: 64bit/jdk-11 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.client.solrj.io.stream.MathExpressionTest.testGammaDistribution

Error Message:
0.7785105263767454 0.7890770632831529

Stack Trace:
java.lang.AssertionError: 0.7785105263767454 0.7890770632831529
at 
__randomizedtesting.SeedInfo.seed([6F71FFF1FCE05152:520BD45FDF98FB45]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.solr.client.solrj.io.stream.MathExpressionTest.testGammaDistribution(MathExpressionTest.java:4446)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)




Build Log:
[...truncated 16426 lines...]
   [junit4] Suite: org.apache.solr.client.solrj.io.stream.MathExpressionTest
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (LUCENE-8292) Fix FilterLeafReader.FilterTermsEnum to delegate all seekExact methods

2019-02-08 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16763797#comment-16763797
 ] 

David Smiley commented on LUCENE-8292:
--

Adrien last said:
{quote}In general, we do not delegate methods that have a default 
implementation because the default implementation is correct regardless of what 
the wrapper class does. Overriding these methods in FilterTermsEnum to delegate 
to the wrapped instance would make room for bugs by requiring more methods to 
be overridden for the wrapper to be correct.
{quote}
CC [~simonw] curious about your thoughts too

_In general_ I can see this. For the case of termState() and 
seekExact(termState) in particular, I don't. Hypothetically, what could go 
wrong if FilterTermsEnum delegated? When I think of filtering a TermsEnum, I 
think of something that might match a subset of terms from the underlying 
TermsEnum. The TermsEnum must be positioned at something that matches when 
termState() is called. If seekExact(termState) in the underlying TermsEnum 
receives a termState impl it doesn't recognize (i.e., not of a class it 
knows), you get the default functional behavior, which is safe. I'm looking at 
the 6 subclasses of FilterTermsEnum we have in Lucene and I don't see an 
issue. (Interestingly, 3 of them are in the UnifiedHighlighter.) I also 
checked that delegating these two methods doesn't cause test failures, aside 
from TestFilterLeafReader.testOverrideMethods, which expressly tests our 
policy.

> Fix FilterLeafReader.FilterTermsEnum to delegate all seekExact methods
> --
>
> Key: LUCENE-8292
> URL: https://issues.apache.org/jira/browse/LUCENE-8292
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 7.2.1
>Reporter: Bruno Roustant
>Priority: Major
> Fix For: trunk
>
> Attachments: 
> 0001-Fix-FilterLeafReader.FilterTermsEnum-to-delegate-see.patch, 
> LUCENE-8292.patch
>
>
> FilterLeafReader#FilterTermsEnum wraps another TermsEnum and delegates many 
> methods.
> It misses some seekExact() methods, thus it is not possible to the delegate 
> to override these methods to have specific behavior (unlike the TermsEnum API 
> which allows that).
> The fix is straightforward: simply override these seekExact() methods and 
> delegate.






[jira] [Commented] (LUCENE-8662) Change TermsEnum.seekExact(BytesRef) to abstract + delegate seekExact(BytesRef) in FilterLeafReader.FilterTermsEnum

2019-02-08 Thread Simon Willnauer (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16763762#comment-16763762
 ] 

Simon Willnauer commented on LUCENE-8662:
-

[~tomasflobbe] yes I think this should go into 8.0 - feel free to pull it in, I 
will do it next week once I am back at the keyboard.

> Change TermsEnum.seekExact(BytesRef) to abstract + delegate 
> seekExact(BytesRef) in FilterLeafReader.FilterTermsEnum
> ---
>
> Key: LUCENE-8662
> URL: https://issues.apache.org/jira/browse/LUCENE-8662
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: 5.5.5, 6.6.5, 7.6, 8.0
>Reporter: jefferyyuan
>Priority: Major
>  Labels: query
> Fix For: 8.0, 7.7
>
> Attachments: output of test program.txt
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> [...]






[jira] [Commented] (LUCENE-8688) Forced merges merge more than necessary

2019-02-08 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16763748#comment-16763748
 ] 

Erick Erickson commented on LUCENE-8688:


Yes, certainly introduced in LUCENE-7976.

Hmmm. Going largely from memory since I'm on vacation... Are you saying that 
when the number of segments is specified, we're merging and re-merging the same 
data? I.e., merging 30 segments (maxMergeAtOnceExplicit) into one segment, then 
merging _that_ segment again later because it's still relatively small?

Or are segments close to the new max segment size, with no deleted docs, being 
merged with small segments? That would be pretty wasteful...

I pretty much blindly let the merge scoring algorithm do its thing without 
special handling for this case other than to compute the theoretical segment 
size and let the scoring pick segments to merge, so there's certainly room for 
refining based on write ops in this case.

I've been wondering for a while whether maxMergeAtOnceExplicit should be made 
larger (or eliminated). Would that alter the writes the user is seeing?

All that said, pulling back the code for findForcedMerges from before 
LUCENE-7976 and using it when the number of segments is specified is certainly 
an option and would be a quick fix. 

> Forced merges merge more than necessary
> ---
>
> Key: LUCENE-8688
> URL: https://issues.apache.org/jira/browse/LUCENE-8688
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Priority: Minor
>
> A user reported some surprise after the upgrade to Lucene 7.5 due to changes 
> to how forced merges are selected when maxSegmentCount is greater than 1.
> Before 7.5 forceMerge used to pick up the least amount of merging that would 
> result in an index that has maxSegmentCount segments at most. Now that we 
> share the same logic as regular merges, we are almost sure to pick a 
> maxMergeAtOnceExplicit-segments merge (30 segments) given that merges that 
> have more segments usually score better. This is due to the fact that natural 
> merges assume that merges that run now save work for later, so the more 
> segments get merged, the better. This assumption doesn't hold for forced 
> merges that should run on read-only indices, so there won't be any future 
> merging.






Re: [lucene-solr] branch master updated: Fix escaping in Solr Reference Guide

2019-02-08 Thread Tomás Fernández Löbbe
Thanks Alan!

On Fri, Feb 8, 2019 at 5:46 AM  wrote:

> This is an automated email from the ASF dual-hosted git repository.
>
> romseygeek pushed a commit to branch master
> in repository https://gitbox.apache.org/repos/asf/lucene-solr.git
>
>
> The following commit(s) were added to refs/heads/master by this push:
>  new b80df5b  Fix escaping in Solr Reference Guide
> b80df5b is described below
>
> commit b80df5bbc006a9c6aa93b2efb1cb297c8f58596b
> Author: Alan Woodward 
> AuthorDate: Fri Feb 8 13:45:25 2019 +
>
> Fix escaping in Solr Reference Guide
> ---
>  solr/solr-ref-guide/src/language-analysis.adoc | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/solr/solr-ref-guide/src/language-analysis.adoc
> b/solr/solr-ref-guide/src/language-analysis.adoc
> index cca0387..758c7f6 100644
> --- a/solr/solr-ref-guide/src/language-analysis.adoc
> +++ b/solr/solr-ref-guide/src/language-analysis.adoc
> @@ -654,9 +654,9 @@ There are two filters written specifically for dealing
> with Bengali language. Th
>
>  
>
> -*Normalisation* - `মানুষ` -> `মানুস`
> +*Normalisation* - `মানুষ` \-> `মানুস`
>
> -*Stemming* - `সমস্ত` -> `সমস্`
> +*Stemming* - `সমস্ত` \-> `সমস্`
>
>
>  === Brazilian Portuguese
>
>


[jira] [Created] (SOLR-13234) Prometheus Metric Exporter Not Threadsafe

2019-02-08 Thread Danyal Prout (JIRA)
Danyal Prout created SOLR-13234:
---

 Summary: Prometheus Metric Exporter Not Threadsafe
 Key: SOLR-13234
 URL: https://issues.apache.org/jira/browse/SOLR-13234
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: metrics
Affects Versions: 7.6, 8.0
Reporter: Danyal Prout
 Fix For: 8.x, master (9.0)


The Solr Prometheus Exporter collects metrics when it receives an HTTP request 
from Prometheus, which sends this request on its [scrape 
interval|https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config].
 When collecting the Solr metrics takes longer than the Prometheus server's 
scrape interval, concurrent metric collection occurs in this 
[method|https://github.com/apache/lucene-solr/blob/master/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/collector/SolrCollector.java#L86].
 This method doesn't appear to be thread safe; for instance, you can get 
concurrent modifications of a 
[map|https://github.com/apache/lucene-solr/blob/master/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/collector/SolrCollector.java#L119].
 After a while the Solr Exporter process becomes nondeterministic; we've 
observed NPEs and loss of metrics.

To address this, I'm proposing the following fixes:

1. Read/parse the configuration at startup and make it immutable. 
 2. Collect metrics from Solr on an interval controlled by the Solr Exporter, 
and cache the metric samples to return during Prometheus scrapes. Metric 
collection can be expensive (for example, executing arbitrary Solr searches), 
so it's not ideal to allow concurrent collection on an interval that the Solr 
Exporter doesn't control.

There are also a few other performance improvements that we've made while 
fixing this, for example using the ClusterStateProvider instead of sending 
multiple HTTP requests to each Solr node to lookup all the cores.

I'm currently finishing up these changes which I'll submit as a PR.
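Fix 2 above (an exporter-controlled collection interval plus cached samples) can be sketched roughly as follows. This is a hand-written illustration, not the actual solr-exporter code; `CachedCollector` and its methods are hypothetical names:

```java
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

// Sketch: metrics are collected on a single scheduler thread at a fixed,
// exporter-controlled interval; Prometheus scrapes only read the latest
// immutable snapshot, so a slow scrape interval can never trigger
// concurrent collection.
public class CachedCollector {
    private final AtomicReference<Map<String, Double>> snapshot =
            new AtomicReference<>(Map.of());
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Start periodic collection; only this one thread ever swaps the snapshot.
    public void start(long periodSeconds) {
        scheduler.scheduleAtFixedRate(
                () -> snapshot.set(collectFromSolr()),
                0, periodSeconds, TimeUnit.SECONDS);
    }

    // Stand-in for the expensive Solr metric queries.
    public Map<String, Double> collectFromSolr() {
        return Map.of("solr_up", 1.0);
    }

    // Scrape handler: a cheap, thread-safe read of the cached samples.
    public Map<String, Double> scrape() {
        return snapshot.get();
    }

    public void stop() { scheduler.shutdownNow(); }

    public static void main(String[] args) throws InterruptedException {
        CachedCollector collector = new CachedCollector();
        collector.start(60);
        Thread.sleep(200); // let the first collection finish
        System.out.println(collector.scrape());
        collector.stop();
    }
}
```

Because each snapshot is an immutable map swapped in atomically, scrapes never observe a half-built result, which addresses the concurrent-modification failure mode described above.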






[jira] [Commented] (SOLR-13233) SpellCheckCollator ignores stacked tokens

2019-02-08 Thread Alan Woodward (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16763700#comment-16763700
 ] 

Alan Woodward commented on SOLR-13233:
--

I'm honestly not sure what the correct fix here is - possibly we should change 
WordDelimiterGraphFilter to emit its original token first?  And check our other 
TokenFilters to ensure that they all have this behaviour?

> SpellCheckCollator ignores stacked tokens
> -
>
> Key: SOLR-13233
> URL: https://issues.apache.org/jira/browse/SOLR-13233
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
>Priority: Major
>
> When building collations, SpellCheckCollator ignores any tokens with a 
> position increment of 0, assuming that they've been injected and may 
> therefore have incorrect offsets (injected terms generally keep the offsets 
> of the terms they're replacing, as they don't themselves appear anywhere in 
> the original source).  However, this assumption is not necessarily correct - 
> for example, WordDelimiterGraphFilter emits stacked tokens *before* the 
> original token, because it needs to iterate through all stacked tokens to 
> correctly set the original token's position length.






[jira] [Created] (SOLR-13233) SpellCheckCollator ignores stacked tokens

2019-02-08 Thread Alan Woodward (JIRA)
Alan Woodward created SOLR-13233:


 Summary: SpellCheckCollator ignores stacked tokens
 Key: SOLR-13233
 URL: https://issues.apache.org/jira/browse/SOLR-13233
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Alan Woodward


When building collations, SpellCheckCollator ignores any tokens with a position 
increment of 0, assuming that they've been injected and may therefore have 
incorrect offsets (injected terms generally keep the offsets of the terms 
they're replacing, as they don't themselves appear anywhere in the original 
source).  However, this assumption is not necessarily correct - for example, 
WordDelimiterGraphFilter emits stacked tokens *before* the original token, 
because it needs to iterate through all stacked tokens to correctly set the 
original token's position length.
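The failure mode can be shown with a toy token stream. This is plain Java; the `Token` record and `collate` method are simplified stand-ins for Solr's classes, and the WordDelimiterGraphFilter output shown is only indicative. Skipping every zero-position-increment token drops the original token whenever it is emitted in a stacked position:

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of the bug above: a collator that treats every token with
// positionIncrement == 0 as "injected" loses the original token whenever the
// analyzer emits it stacked behind other tokens at the same position.
public class StackedTokenDemo {
    record Token(String term, int posInc) {}

    static List<String> collate(List<Token> stream) {
        List<String> kept = new ArrayList<>();
        for (Token t : stream) {
            if (t.posInc() == 0) continue; // the heuristic in question
            kept.add(t.term());
        }
        return kept;
    }

    public static void main(String[] args) {
        // WDGF-style output for "wi-fi": the subwords and the original token
        // share one position, with the original stacked (posInc == 0).
        List<Token> stream = List.of(
                new Token("wi", 1),
                new Token("wifi", 0),
                new Token("wi-fi", 0), // the original token gets dropped
                new Token("fi", 1));
        System.out.println(collate(stream)); // [wi, fi]
    }
}
```

The collation keeps only the subword tokens, so the original surface form never reaches the spell-check collation, which is the behavior this issue reports.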






[jira] [Commented] (LUCENE-8688) Forced merges merge more than necessary

2019-02-08 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16763699#comment-16763699
 ] 

Adrien Grand commented on LUCENE-8688:
--

[~erickerickson] This seems to have been introduced in LUCENE-7976, do you have 
any opinion on this?

> Forced merges merge more than necessary
> ---
>
> Key: LUCENE-8688
> URL: https://issues.apache.org/jira/browse/LUCENE-8688
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Priority: Minor
>
> A user reported some surprise after the upgrade to Lucene 7.5 due to changes 
> to how forced merges are selected when maxSegmentCount is greater than 1.
> Before 7.5 forceMerge used to pick up the least amount of merging that would 
> result in an index that has maxSegmentCount segments at most. Now that we 
> share the same logic as regular merges, we are almost sure to pick a 
> maxMergeAtOnceExplicit-segments merge (30 segments) given that merges that 
> have more segments usually score better. This is due to the fact that natural 
> merges assume that merges that run now save work for later, so the more 
> segments get merged, the better. This assumption doesn't hold for forced 
> merges that should run on read-only indices, so there won't be any future 
> merging.






[jira] [Created] (LUCENE-8688) Forced merges merge more than necessary

2019-02-08 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-8688:


 Summary: Forced merges merge more than necessary
 Key: LUCENE-8688
 URL: https://issues.apache.org/jira/browse/LUCENE-8688
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand


A user reported some surprise after the upgrade to Lucene 7.5 due to changes to 
how forced merges are selected when maxSegmentCount is greater than 1.

Before 7.5 forceMerge used to pick up the least amount of merging that would 
result in an index that has maxSegmentCount segments at most. Now that we share 
the same logic as regular merges, we are almost sure to pick a 
maxMergeAtOnceExplicit-segments merge (30 segments) given that merges that have 
more segments usually score better. This is due to the fact that natural merges 
assume that merges that run now save work for later, so the more segments get 
merged, the better. This assumption doesn't hold for forced merges that should 
run on read-only indices, so there won't be any future merging.
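The pre-7.5 selection described above, i.e., the least amount of merging that leaves at most maxSegmentCount segments, can be sketched as a single merge of the smallest segments. This is purely illustrative and is not TieredMergePolicy's actual algorithm:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch of the pre-7.5 forceMerge idea: to get from n segments down to at
// most maxSegmentCount, merge only the (n - maxSegmentCount + 1) smallest
// segments in one merge, leaving the large segments untouched.
public class MinimalForceMerge {
    static List<Long> forceMerge(List<Long> segmentSizes, int maxSegmentCount) {
        List<Long> segs = new ArrayList<>(segmentSizes);
        Collections.sort(segs); // smallest first
        int toMerge = segs.size() - maxSegmentCount + 1;
        if (toMerge < 2) return segs; // already at or under the target
        long merged = 0;
        for (int i = 0; i < toMerge; i++) {
            merged += segs.get(i); // bytes actually rewritten by the merge
        }
        List<Long> result = new ArrayList<>(segs.subList(toMerge, segs.size()));
        result.add(merged);
        Collections.sort(result);
        return result;
    }

    public static void main(String[] args) {
        // Five segments, target two: the four small ones are merged once and
        // the 100-unit segment is never rewritten.
        System.out.println(forceMerge(List.of(100L, 5L, 4L, 3L, 2L), 2)); // [14, 100]
    }
}
```

Scoring forced merges like natural merges instead tends to select full maxMergeAtOnceExplicit-sized merges, which is exactly the extra merging this issue reports.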






[jira] [Commented] (LUCENE-8680) Refactor EdgeTree#relateTriangle method

2019-02-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16763692#comment-16763692
 ] 

ASF subversion and git services commented on LUCENE-8680:
-

Commit d7d4d64f346136d34399226a4bf19c3eb28f45a3 in lucene-solr's branch 
refs/heads/branch_8x from Ignacio Vera
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=d7d4d64 ]

LUCENE-8680: Refactor EdgeTree#relateTriangle method


> Refactor EdgeTree#relateTriangle method
> ---
>
> Key: LUCENE-8680
> URL: https://issues.apache.org/jira/browse/LUCENE-8680
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ignacio Vera
>Priority: Minor
> Attachments: LUCENE-8680.patch
>
>
> This proposal moves all the spatial logic for a component to Polygon2D and 
> Line2D. It improves readability of how each object behaves.






[JENKINS] Lucene-Solr-Tests-8.x - Build # 29 - Failure

2019-02-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-8.x/29/

All tests passed

Build Log:
[...truncated 24668 lines...]
BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-8.x/build.xml:633: The 
following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-8.x/build.xml:128: 
Found 2 violations in source files (Unescaped symbol "->" on line #657, 
Unescaped symbol "->" on line #659).

Total time: 201 minutes 48 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[jira] [Assigned] (LUCENE-8680) Refactor EdgeTree#relateTriangle method

2019-02-08 Thread Ignacio Vera (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Vera reassigned LUCENE-8680:


   Resolution: Fixed
 Assignee: Ignacio Vera
Fix Version/s: 8.x
   master (9.0)

> Refactor EdgeTree#relateTriangle method
> ---
>
> Key: LUCENE-8680
> URL: https://issues.apache.org/jira/browse/LUCENE-8680
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ignacio Vera
>Assignee: Ignacio Vera
>Priority: Minor
> Fix For: master (9.0), 8.x
>
> Attachments: LUCENE-8680.patch
>
>
> This proposal moves all the spatial logic for a component to Polygon2D and 
> Line2D. It improves readability of how each object behaves.






[jira] [Commented] (LUCENE-8680) Refactor EdgeTree#relateTriangle method

2019-02-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16763694#comment-16763694
 ] 

ASF subversion and git services commented on LUCENE-8680:
-

Commit f79c8e6cd5fba79ed554938fee4218501c297da1 in lucene-solr's branch 
refs/heads/branch_8x from Ignacio Vera
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=f79c8e6 ]

LUCENE-8680: Add CHANGES.txt entry


> Refactor EdgeTree#relateTriangle method
> ---
>
> Key: LUCENE-8680
> URL: https://issues.apache.org/jira/browse/LUCENE-8680
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ignacio Vera
>Priority: Minor
> Attachments: LUCENE-8680.patch
>
>
> This proposal moves all the spatial logic for a component to Polygon2D and 
> Line2D. It improves readability of how each object behaves.






[jira] [Commented] (LUCENE-8680) Refactor EdgeTree#relateTriangle method

2019-02-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16763693#comment-16763693
 ] 

ASF subversion and git services commented on LUCENE-8680:
-

Commit 56007af4a45b1ac64cd34ced07126fff9e7f490b in lucene-solr's branch 
refs/heads/master from Ignacio Vera
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=56007af ]

LUCENE-8680: Add CHANGES.txt entry


> Refactor EdgeTree#relateTriangle method
> ---
>
> Key: LUCENE-8680
> URL: https://issues.apache.org/jira/browse/LUCENE-8680
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ignacio Vera
>Priority: Minor
> Attachments: LUCENE-8680.patch
>
>
> This proposal moves all the spatial logic for a component to Polygon2D and 
> Line2D. It improves readability of how each object behaves.






[jira] [Commented] (LUCENE-8680) Refactor EdgeTree#relateTriangle method

2019-02-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16763688#comment-16763688
 ] 

ASF subversion and git services commented on LUCENE-8680:
-

Commit 06c1ebc09e1d39e3d556dc97392a565466fca9d5 in lucene-solr's branch 
refs/heads/master from Ignacio Vera
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=06c1ebc ]

LUCENE-8680: Refactor EdgeTree#relateTriangle method


> Refactor EdgeTree#relateTriangle method
> ---
>
> Key: LUCENE-8680
> URL: https://issues.apache.org/jira/browse/LUCENE-8680
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ignacio Vera
>Priority: Minor
> Attachments: LUCENE-8680.patch
>
>
> This proposal moves all the spatial logic for a component to Polygon2D and 
> Line2D. It improves readability of how each object behaves.






[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-02-08 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16763638#comment-16763638
 ] 

Kevin Risden commented on SOLR-9515:


I was never able to figure out why "ant run-maven-build -DskipTests=true" fails 
locally, but on my local Jenkins I was finally able to get it to work.

[https://builds.apache.org/job/Lucene-Solr-Maven-master/2484/]

The job failed because the forbiddenapis pattern was not updated in 
pom.xml.template. Fixing with a commit shortly. Confirmed that the Maven build 
is now successful with this change.

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, 8.x, master (9.0)
>
> Attachments: SOLR-9515-fix_pom.patch, 
> SOLR-9515-forbiddenapis-maven.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 7h 40m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.






[jira] [Commented] (SOLR-13222) Improve logging in StreamingSolrClients

2019-02-08 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16763664#comment-16763664
 ] 

Kevin Risden commented on SOLR-13222:
-

[~anshumg] or [~tomasflobbe] - Any concerns with this change since you both 
made changes in this area recently?

> Improve logging in StreamingSolrClients
> ---
>
> Key: SOLR-13222
> URL: https://issues.apache.org/jira/browse/SOLR-13222
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Peter Cseh
>Priority: Minor
> Attachments: SOLR-13222.patch
>
>
> The internal class ErrorReportingConcurrentUpdateSolrClient
>  logs the exception's [stack 
> trace|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/update/StreamingSolrClients.java#L113]
>  with the bare log message "error".
> Adding information about the request that caused the error helped us 
> investigate intermittent issues.
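A minimal sketch of the kind of change being proposed, using java.util.logging as a stand-in (Solr itself uses slf4j, and the class and method names below are illustrative, not the patch's actual code): carry the failing request's details in the log message instead of a bare "error".

```java
import java.util.logging.Level;
import java.util.logging.Logger;

final class ErrorLoggingSketch {
    private static final Logger log = Logger.getLogger("StreamingSolrClientsSketch");

    // Build a message that carries the request context, not just "error".
    static String describeFailure(String method, String url, Exception e) {
        return String.format("Error when processing %s request to %s: %s",
                method, url, e.getMessage());
    }

    public static void main(String[] args) {
        Exception e = new java.io.IOException("Connection refused");
        // Before: log.log(Level.SEVERE, "error", e);
        // After: the operator can see which request failed and why.
        log.log(Level.SEVERE,
                describeFailure("POST", "http://localhost:8983/solr/col1/update", e), e);
    }
}
```

With intermittent failures, the request URL and method in the message are often the only clue tying a stack trace back to a specific shard or replica.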






[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-02-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16763641#comment-16763641
 ] 

ASF subversion and git services commented on SOLR-9515:
---

Commit 4038f14ac1594f14cc1c83bcfc28e855766f37a4 in lucene-solr's branch 
refs/heads/branch_8_0 from Kevin Risden
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=4038f14 ]

SOLR-9515 - Add maven forbiddenapis exclude for copied Hadoop code

Signed-off-by: Kevin Risden 


> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, 8.x, master (9.0)
>
> Attachments: SOLR-9515-fix_pom.patch, 
> SOLR-9515-forbiddenapis-maven.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 7h 40m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.






[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-02-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16763640#comment-16763640
 ] 

ASF subversion and git services commented on SOLR-9515:
---

Commit 092b22faa3b3edf8e4a96a63c8eef75c83c4305f in lucene-solr's branch 
refs/heads/branch_8x from Kevin Risden
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=092b22f ]

SOLR-9515 - Add maven forbiddenapis exclude for copied Hadoop code

Signed-off-by: Kevin Risden 


> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, 8.x, master (9.0)
>
> Attachments: SOLR-9515-fix_pom.patch, 
> SOLR-9515-forbiddenapis-maven.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 7h 40m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.






[jira] [Updated] (SOLR-9515) Update to Hadoop 3

2019-02-08 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-9515:
---
Attachment: SOLR-9515-forbiddenapis-maven.patch

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, 8.x, master (9.0)
>
> Attachments: SOLR-9515-fix_pom.patch, 
> SOLR-9515-forbiddenapis-maven.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 7h 40m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.






[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-02-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16763639#comment-16763639
 ] 

ASF subversion and git services commented on SOLR-9515:
---

Commit 796fbaef766671c46b55263d750c5824fb13e8fb in lucene-solr's branch 
refs/heads/master from Kevin Risden
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=796fbae ]

SOLR-9515 - Add maven forbiddenapis exclude for copied Hadoop code

Signed-off-by: Kevin Risden 


> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, 8.x, master (9.0)
>
> Attachments: SOLR-9515-fix_pom.patch, 
> SOLR-9515-forbiddenapis-maven.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 7h 40m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.






Re: [JENKINS-MAVEN] Lucene-Solr-Maven-master #2484: POMs out of sync

2019-02-08 Thread Kevin Risden
Taking care of this since this was due to SOLR-9515.

Kevin Risden


On Fri, Feb 8, 2019 at 12:06 AM Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> Build: https://builds.apache.org/job/Lucene-Solr-Maven-master/2484/
>
> No tests ran.
>
> Build Log:
> [...truncated 32183 lines...]
> BUILD FAILED
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/build.xml:679:
> The following error occurred while executing this line:
> : Java returned: 1
>
> Total time: 17 minutes 36 seconds
> Build step 'Invoke Ant' marked build as failure
> Email was triggered for: Failure - Any
> Sending email for trigger: Failure - Any
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org


[GitHub] msokolov commented on a change in pull request #562: Don't create a LeafCollector when the Scorer for the leaf is null

2019-02-08 Thread GitBox
msokolov commented on a change in pull request #562: Don't create a 
LeafCollector when the Scorer for the leaf is null
URL: https://github.com/apache/lucene-solr/pull/562#discussion_r255061376
 
 

 ##
 File path: 
lucene/test-framework/src/java/org/apache/lucene/search/QueryUtils.java
 ##
 @@ -40,9 +39,9 @@
 import org.apache.lucene.util.LuceneTestCase;
 import org.apache.lucene.util.Version;
 
-import static junit.framework.Assert.assertEquals;
-import static junit.framework.Assert.assertFalse;
-import static junit.framework.Assert.assertTrue;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
 
 Review comment:
   It's not required. I did it because junit.framework.Assert is deprecated.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Commented] (LUCENE-8685) Refactor LatLonShape tests

2019-02-08 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16763549#comment-16763549
 ] 

Lucene/Solr QA commented on LUCENE-8685:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  0m 21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  0m 21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} Validate source patterns {color} | 
{color:red}  0m 21s{color} | {color:red} Validate source patterns 
validate-source-patterns failed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
18s{color} | {color:green} sandbox in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}  4m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | LUCENE-8685 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12957912/LUCENE-8685.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 4.4.0-137-generic #163~14.04.1-Ubuntu SMP Mon 
Sep 24 17:14:57 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-LUCENE-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / f2b8457 |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on July 24 2018 |
| Default Java | 1.8.0_191 |
| Validate source patterns | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/164/artifact/out/patch-validate-source-patterns-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/164/testReport/ |
| modules | C: lucene/sandbox U: lucene/sandbox |
| Console output | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/164/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Refactor LatLonShape tests
> --
>
> Key: LUCENE-8685
> URL: https://issues.apache.org/jira/browse/LUCENE-8685
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Ignacio Vera
>Priority: Trivial
> Attachments: LUCENE-8685.patch
>
>
> The test class {{TestLatLonShape}} is becoming pretty big and it contains a 
> mixture of tests. I would like to put the tests that focus on the encoding 
> into their own test class.






[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 23639 - Still Failing!

2019-02-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23639/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 24572 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:633: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:128: Found 2 
violations in source files (Unescaped symbol "->" on line #657, Unescaped 
symbol "->" on line #659).

Total time: 62 minutes 50 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Setting 
ANT_1_8_2_HOME=/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Setting 
ANT_1_8_2_HOME=/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Setting 
ANT_1_8_2_HOME=/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2


[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 279 - Failure

2019-02-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/279/

All tests passed

Build Log:
[...truncated 24432 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/build.xml:642:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/build.xml:128:
 Found 2 violations in source files (Unescaped symbol "->" on line #657, 
Unescaped symbol "->" on line #659).

Total time: 97 minutes 4 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[GitHub] iverase commented on a change in pull request #569: LUCENE-8687: Optimise radix partitioning for points on heap

2019-02-08 Thread GitBox
iverase commented on a change in pull request #569: LUCENE-8687: Optimise radix 
partitioning for points on heap
URL: https://github.com/apache/lucene-solr/pull/569#discussion_r255052024
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/util/bkd/BKDRadixSelector.java
 ##
 @@ -82,30 +82,40 @@ public BKDRadixSelector(int numDim, int bytesPerDim, int 
maxPointsSortInHeap, Di
* the split happens. The method destroys the original writer.
*
*/
-  public byte[] select(PointWriter points, PointWriter left, PointWriter 
right, long from, long to, long partitionPoint, int dim) throws IOException {
+  public byte[] select(PathSlice points, PathSlice[] slices, long from, long 
to, long partitionPoint, int dim, int dimCommonPrefix) throws IOException {
 
 Review comment:
   I have relaxed the condition; it now only needs to be bigger than 1. Javadocs added.
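For readers following along, the core operation under discussion can be sketched in a few lines (illustrative only; BKDRadixSelector's real implementation works on PointWriters, handles ties, and spills to offline storage): partition byte-encoded values around a pivot by inspecting a single byte position.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

final class RadixPartitionSketch {
    // Split values into those whose unsigned byte at `pos` is below the pivot
    // (index 0) and the rest (index 1).
    static List<byte[]>[] partition(List<byte[]> points, int pos, int pivot) {
        @SuppressWarnings("unchecked")
        List<byte[]>[] out = new List[] { new ArrayList<byte[]>(), new ArrayList<byte[]>() };
        for (byte[] p : points) {
            out[(p[pos] & 0xFF) < pivot ? 0 : 1].add(p);
        }
        return out;
    }

    public static void main(String[] args) {
        List<byte[]> pts = Arrays.asList(new byte[]{1}, new byte[]{9}, new byte[]{4});
        List<byte[]>[] parts = partition(pts, 0, 5);
        System.out.println(parts[0].size() + " left, " + parts[1].size() + " right");
        // prints "2 left, 1 right"
    }
}
```

A common prefix longer than the inspected position means every value routes the same way, which is why the dimCommonPrefix parameter discussed in this review matters for skipping useless passes.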





[GitHub] iverase commented on a change in pull request #569: LUCENE-8687: Optimise radix partitioning for points on heap

2019-02-08 Thread GitBox
iverase commented on a change in pull request #569: LUCENE-8687: Optimise radix 
partitioning for points on heap
URL: https://github.com/apache/lucene-solr/pull/569#discussion_r255046022
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/util/bkd/BKDRadixSelector.java
 ##
 @@ -196,33 +208,45 @@ private int findCommonPrefix(OfflinePointWriter points, 
long from, long to, int
    //special case when we have a lot of points that are equal
 if (commonPrefix == bytesSorted - 1) {
   long tieBreakCount =(partitionPoint - from - leftCount);
-  partition(points, left,  right, null, from, to, dim, commonPrefix, 
tieBreakCount);
+  offlinePartition(points, left,  right, null, from, to, dim, 
commonPrefix, tieBreakCount);
   return partitionPointFromCommonPrefix();
 }
 
 //create the delta points writer
 PointWriter deltaPoints;
-if (delta <= maxPointsSortInHeap) {
+if (delta <= getMaxPointsSortInHeap(left, right)) {
   deltaPoints =  new HeapPointWriter(Math.toIntExact(delta), 
Math.toIntExact(delta), packedBytesLength);
 } else {
   deltaPoints = new OfflinePointWriter(tempDir, tempFileNamePrefix, 
packedBytesLength, "delta" + iteration, delta);
 }
 //divide the points. This actually destroys the current writer
-partition(points, left, right, deltaPoints, from, to, dim, commonPrefix, 
0);
+offlinePartition(points, left, right, deltaPoints, from, to, dim, 
commonPrefix, 0);
 //close delta point writer
 deltaPoints.close();
 
 long newPartitionPoint = partitionPoint - from - leftCount;
 
 if (deltaPoints instanceof HeapPointWriter) {
-  return heapSelect((HeapPointWriter) deltaPoints, left, right, dim, 0, 
(int) deltaPoints.count(), Math.toIntExact(newPartitionPoint), ++commonPrefix);
+  return heapPartition((HeapPointWriter) deltaPoints, left, right, dim, 0, 
(int) deltaPoints.count(), Math.toIntExact(newPartitionPoint), ++commonPrefix);
 } else {
   return buildHistogramAndPartition((OfflinePointWriter) deltaPoints, 
left, right, 0, deltaPoints.count(), newPartitionPoint, ++iteration, 
++commonPrefix, dim);
 }
   }
 
-  private void partition(OfflinePointWriter points, PointWriter left, 
PointWriter right, PointWriter deltaPoints,
-   long from, long to, int dim, int bytePosition, long 
numDocsTiebreak) throws IOException {
+  private int getMaxPointsSortInHeap(PointWriter left, PointWriter right) {
+long pointsUsed = 0;
+if (left instanceof HeapPointWriter) {
+  pointsUsed += left.count();
+}
+if (right instanceof HeapPointWriter) {
+  pointsUsed += right.count();
+}
+assert maxPointsSortInHeap >= pointsUsed;
+return maxPointsSortInHeap - (int) pointsUsed;
 
 Review comment:
   I have changed the logic a bit; I am now using the `maxSize` on the 
`HeapPointWriter` to calculate the offset for moving the selection into the heap.





[GitHub] iverase commented on a change in pull request #569: LUCENE-8687: Optimise radix partitioning for points on heap

2019-02-08 Thread GitBox
iverase commented on a change in pull request #569: LUCENE-8687: Optimise radix 
partitioning for points on heap
URL: https://github.com/apache/lucene-solr/pull/569#discussion_r255045631
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/util/bkd/BKDRadixSelector.java
 ##
 @@ -294,18 +335,37 @@ protected int byteAt(int i, int k) {
   }
 }.select(from, to, partitionPoint);
 
-for (int i = from; i < to; i++) {
-  points.getPackedValueSlice(i, bytesRef1);
-  int docID = points.docIDs[i];
-  if (i < partitionPoint) {
-left.append(bytesRef1, docID);
-  } else {
-right.append(bytesRef1, docID);
-  }
-}
 byte[] partition = new byte[bytesPerDim];
 points.getPackedValueSlice(partitionPoint, bytesRef1);
 System.arraycopy(bytesRef1.bytes, bytesRef1.offset + dim * bytesPerDim, 
partition, 0, bytesPerDim);
 return partition;
   }
+
+  PointWriter getPointWriter(long count, String desc) throws IOException {
+if (count <= maxPointsSortInHeap / 2) {
 
 Review comment:
   We divide by 2 because, as we recurse, there are at most two `HeapPointWriter`s 
alive, so each should not hold more than half of the `maxPointsSortInHeap`.





[JENKINS-MAVEN] Lucene-Solr-Maven-8.x #19: POMs out of sync

2019-02-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-8.x/19/

No tests ran.

Build Log:
[...truncated 32231 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-8.x/build.xml:679: The 
following error occurred while executing this line:
: Java returned: 1

Total time: 16 minutes 49 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[JENKINS] Lucene-Solr-8.x-Linux (64bit/jdk-9.0.4) - Build # 146 - Failure!

2019-02-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/146/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 2068 lines...]
   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-8.x-Linux/lucene/build/core/test/temp/junit4-J2-20190208_093651_93412904576171100990579.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option 
UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in 
a future release.
   [junit4] <<< JVM J2: EOF 

[...truncated 3 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-8.x-Linux/lucene/build/core/test/temp/junit4-J1-20190208_093651_934178237219837285076.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option 
UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in 
a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 3 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-8.x-Linux/lucene/build/core/test/temp/junit4-J0-20190208_093651_93411958774740360588193.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option 
UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in 
a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 296 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-8.x-Linux/lucene/build/test-framework/test/temp/junit4-J0-20190208_094607_5198914394922147350718.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option 
UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in 
a future release.
   [junit4] <<< JVM J0: EOF 

   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-8.x-Linux/lucene/build/test-framework/test/temp/junit4-J1-20190208_094607_519962408124768733124.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option 
UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in 
a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 14 lines...]
   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-8.x-Linux/lucene/build/test-framework/test/temp/junit4-J2-20190208_094607_519554311289313206.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option 
UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in 
a future release.
   [junit4] <<< JVM J2: EOF 

[...truncated 1080 lines...]
   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-8.x-Linux/lucene/build/analysis/common/test/temp/junit4-J2-20190208_094729_00814873870473027854143.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option 
UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in 
a future release.
   [junit4] <<< JVM J2: EOF 

[...truncated 3 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-8.x-Linux/lucene/build/analysis/common/test/temp/junit4-J0-20190208_094729_00817913950917714566945.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option 
UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in 
a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 3 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-8.x-Linux/lucene/build/analysis/common/test/temp/junit4-J1-20190208_094729_00817423810902533841638.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option 
UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in 
a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 255 lines...]
   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-8.x-Linux/lucene/build/analysis/icu/test/temp/junit4-J2-20190208_094911_8529569293275284553503.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option 
UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in 
a future release.
   [junit4] <<< JVM J2: EOF 

[...truncated 3 lines...]
   [junit4] JVM J1: stderr was not empty, see: 

[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1253 - Failure

2019-02-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1253/

No tests ran.

Build Log:
[...truncated 23441 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2476 links (2022 relative) to 3299 anchors in 249 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-9.0.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

[...repeated ivy-availability-check / ivy-configure output omitted...]


[GitHub] jpountz commented on a change in pull request #569: LUCENE-8687: Optimise radix partitioning for points on heap

2019-02-08 Thread GitBox
jpountz commented on a change in pull request #569: LUCENE-8687: Optimise radix 
partitioning for points on heap
URL: https://github.com/apache/lucene-solr/pull/569#discussion_r254985158
 
 

 ##
 File path: 
lucene/codecs/src/java/org/apache/lucene/codecs/simpletext/SimpleTextBKDWriter.java
 ##
 @@ -1030,15 +1017,17 @@ private void build(int nodeID, int leafNodeOffset,
   // We can write the block in any order so by default we write it sorted by the dimension that has the
   // least number of unique bytes at commonPrefixLengths[dim], which makes compression more efficient

-  if (data instanceof HeapPointWriter == false) {
+  HeapPointWriter heapSource;
+  if (points.writer instanceof HeapPointWriter == false) {
     // Adversarial cases can cause this, e.g. very lopsided data, all equal points, such that we started
     // offline, but then kept splitting only in one dimension, and so never had to rewrite into heap writer
-    data = switchToHeap(data);
+    heapSource = switchToHeap(points.writer);
+  } else {
+    heapSource = (HeapPointWriter) points.writer;
   }

-  // We ensured that maxPointsSortInHeap was >= maxPointsInLeafNode, so we better be in heap at this point:
-  HeapPointWriter heapSource = (HeapPointWriter) data;
-
+  int from = (int) points.start;
+  int to = (int) (points.start + points.count);
 
 Review comment:
   can you use Math#toIntExact instead?
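For readers following the thread, a minimal standalone sketch (not part of the patch; the class name is invented) of why `Math#toIntExact` is preferable to a plain `(int)` cast for narrowing a `long`:

```java
// Contrasting a silent (int) cast with the checked Math.toIntExact
// when narrowing long -> int.
public class ToIntExactDemo {
    public static void main(String[] args) {
        long small = 42L;
        // Both agree while the value fits in an int.
        System.out.println((int) small);             // 42
        System.out.println(Math.toIntExact(small));  // 42

        long big = Integer.MAX_VALUE + 1L;
        // The plain cast silently wraps around to Integer.MIN_VALUE...
        System.out.println((int) big);               // -2147483648
        // ...while Math.toIntExact fails fast with an ArithmeticException.
        try {
            Math.toIntExact(big);
        } catch (ArithmeticException e) {
            System.out.println("overflow detected");
        }
    }
}
```

The checked variant turns a subtle data corruption into an immediate, diagnosable failure.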


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] jpountz commented on a change in pull request #569: LUCENE-8687: Optimise radix partitioning for points on heap

2019-02-08 Thread GitBox
jpountz commented on a change in pull request #569: LUCENE-8687: Optimise radix 
partitioning for points on heap
URL: https://github.com/apache/lucene-solr/pull/569#discussion_r255002577
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/util/bkd/BKDRadixSelector.java
 ##
 @@ -82,30 +82,40 @@ public BKDRadixSelector(int numDim, int bytesPerDim, int maxPointsSortInHeap, Di
 * the split happens. The method destroys the original writer.
 *
 */
-  public byte[] select(PointWriter points, PointWriter left, PointWriter right, long from, long to, long partitionPoint, int dim) throws IOException {
+  public byte[] select(PathSlice points, PathSlice[] slices, long from, long to, long partitionPoint, int dim, int dimCommonPrefix) throws IOException {
 
 Review comment:
   can you document in javadocs that `slices` should have a length of 2 and how 
it gets filled?
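As background for this review thread, a hypothetical sketch of the contract being discussed: `select` reorders a range around a partition point and fills a two-element array describing the left and right halves. Everything below is invented for illustration, with `Arrays.sort` over a plain `int[]` standing in for the actual radix selection over point writers:

```java
import java.util.Arrays;

// Toy model of a select(points, slices, from, to, partitionPoint, ...)
// contract: after the call, [from, partitionPoint) holds values <= those
// in [partitionPoint, to), and slices[0]/slices[1] describe the halves.
public class SelectSketch {
    static int[][] select(int[] points, int from, int to, int partitionPoint) {
        // The real code only selects around partitionPoint; a full sort
        // is used here purely to keep the sketch short.
        Arrays.sort(points, from, to);
        return new int[][] {
            Arrays.copyOfRange(points, from, partitionPoint),  // left slice
            Arrays.copyOfRange(points, partitionPoint, to)     // right slice
        };
    }

    public static void main(String[] args) {
        int[] points = {7, 1, 9, 3, 5};
        int[][] slices = select(points, 0, 5, 2);
        System.out.println(Arrays.toString(slices[0])); // [1, 3]
        System.out.println(Arrays.toString(slices[1])); // [5, 7, 9]
    }
}
```

The review asks for exactly this kind of information, the expected length of `slices` and how it gets filled, to be captured in the javadocs.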





[GitHub] jpountz commented on a change in pull request #569: LUCENE-8687: Optimise radix partitioning for points on heap

2019-02-08 Thread GitBox
jpountz commented on a change in pull request #569: LUCENE-8687: Optimise radix 
partitioning for points on heap
URL: https://github.com/apache/lucene-solr/pull/569#discussion_r255003163
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/util/bkd/BKDRadixSelector.java
 ##
 @@ -176,7 +188,7 @@ private int findCommonPrefix(OfflinePointWriter points, long from, long to, int
     histogram[commonPrefix][bucket]++;
   }
 }
-//Count left points and record the partition point
+//Count left points and record the offlinePartition point
 
 Review comment:
   bad search/replace?





[GitHub] jpountz commented on a change in pull request #569: LUCENE-8687: Optimise radix partitioning for points on heap

2019-02-08 Thread GitBox
jpountz commented on a change in pull request #569: LUCENE-8687: Optimise radix 
partitioning for points on heap
URL: https://github.com/apache/lucene-solr/pull/569#discussion_r255007717
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/util/bkd/BKDRadixSelector.java
 ##
 @@ -294,18 +335,37 @@ protected int byteAt(int i, int k) {
   }
 }.select(from, to, partitionPoint);

-for (int i = from; i < to; i++) {
-  points.getPackedValueSlice(i, bytesRef1);
-  int docID = points.docIDs[i];
-  if (i < partitionPoint) {
-    left.append(bytesRef1, docID);
-  } else {
-    right.append(bytesRef1, docID);
-  }
-}
 byte[] partition = new byte[bytesPerDim];
 points.getPackedValueSlice(partitionPoint, bytesRef1);
 System.arraycopy(bytesRef1.bytes, bytesRef1.offset + dim * bytesPerDim, partition, 0, bytesPerDim);
 return partition;
   }
+
+  PointWriter getPointWriter(long count, String desc) throws IOException {
+    if (count <= maxPointsSortInHeap / 2) {
 
 Review comment:
   why do we divide by 2? can you add a comment?





[GitHub] jpountz commented on a change in pull request #569: LUCENE-8687: Optimise radix partitioning for points on heap

2019-02-08 Thread GitBox
jpountz commented on a change in pull request #569: LUCENE-8687: Optimise radix 
partitioning for points on heap
URL: https://github.com/apache/lucene-solr/pull/569#discussion_r255006701
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/util/bkd/BKDRadixSelector.java
 ##
 @@ -82,30 +82,40 @@ public BKDRadixSelector(int numDim, int bytesPerDim, int maxPointsSortInHeap, Di
 * the split happens. The method destroys the original writer.
 *
 */
-  public byte[] select(PointWriter points, PointWriter left, PointWriter right, long from, long to, long partitionPoint, int dim) throws IOException {
+  public byte[] select(PathSlice points, PathSlice[] slices, long from, long to, long partitionPoint, int dim, int dimCommonPrefix) throws IOException {
     checkArgs(from, to, partitionPoint);

+    assert slices.length == 2;
+
     //If we are on heap then we just select on heap
-    if (points instanceof HeapPointWriter) {
-      return heapSelect((HeapPointWriter) points, left, right, dim, Math.toIntExact(from), Math.toIntExact(to), Math.toIntExact(partitionPoint), 0);
+    if (points.writer instanceof HeapPointWriter) {
+      byte[] partition = heapRadixSelect((HeapPointWriter) points.writer, dim, Math.toIntExact(from), Math.toIntExact(to), Math.toIntExact(partitionPoint), dimCommonPrefix);
+      slices[0] = new PathSlice(points.writer, from, partitionPoint - from);
+      slices[1] = new PathSlice(points.writer, partitionPoint, to - partitionPoint);
+      return partition;
     }

     //reset histogram
     for (int i = 0; i < bytesSorted; i++) {
       Arrays.fill(histogram[i], 0);
     }
-    OfflinePointWriter offlinePointWriter = (OfflinePointWriter) points;
+    OfflinePointWriter offlinePointWriter = (OfflinePointWriter) points.writer;

-    //find common prefix, it does already set histogram values if needed
-    int commonPrefix = findCommonPrefix(offlinePointWriter, from, to, dim);
+    //find common prefix from dimCommonPrefix, it does already set histogram values if needed
+    int commonPrefix = findCommonPrefix(offlinePointWriter, from, to, dim, dimCommonPrefix);

-    //if all equals we just partition the data
-    if (commonPrefix == bytesSorted) {
-      partition(offlinePointWriter, left, right, null, from, to, dim, commonPrefix - 1, partitionPoint);
-      return partitionPointFromCommonPrefix();
+    try (PointWriter left = getPointWriter(partitionPoint - from, "left" + dim);
+         PointWriter right = getPointWriter(to - partitionPoint, "right" + dim)) {
+      slices[0] = new PathSlice(left, 0, partitionPoint - from);
+      slices[1] = new PathSlice(right, 0, to - partitionPoint);
+      //if all equals we just offlinePartition the data
 
 Review comment:
   bad search/replace?





[GitHub] jpountz commented on a change in pull request #569: LUCENE-8687: Optimise radix partitioning for points on heap

2019-02-08 Thread GitBox
jpountz commented on a change in pull request #569: LUCENE-8687: Optimise radix 
partitioning for points on heap
URL: https://github.com/apache/lucene-solr/pull/569#discussion_r255007427
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/util/bkd/BKDRadixSelector.java
 ##
 @@ -196,33 +208,45 @@ private int findCommonPrefix(OfflinePointWriter points, long from, long to, int
     //special case when be have lot of points that are equal
     if (commonPrefix == bytesSorted - 1) {
       long tieBreakCount = (partitionPoint - from - leftCount);
-      partition(points, left, right, null, from, to, dim, commonPrefix, tieBreakCount);
+      offlinePartition(points, left, right, null, from, to, dim, commonPrefix, tieBreakCount);
       return partitionPointFromCommonPrefix();
     }

     //create the delta points writer
     PointWriter deltaPoints;
-    if (delta <= maxPointsSortInHeap) {
+    if (delta <= getMaxPointsSortInHeap(left, right)) {
       deltaPoints = new HeapPointWriter(Math.toIntExact(delta), Math.toIntExact(delta), packedBytesLength);
     } else {
       deltaPoints = new OfflinePointWriter(tempDir, tempFileNamePrefix, packedBytesLength, "delta" + iteration, delta);
     }
     //divide the points. This actually destroys the current writer
-    partition(points, left, right, deltaPoints, from, to, dim, commonPrefix, 0);
+    offlinePartition(points, left, right, deltaPoints, from, to, dim, commonPrefix, 0);
     //close delta point writer
     deltaPoints.close();

     long newPartitionPoint = partitionPoint - from - leftCount;

     if (deltaPoints instanceof HeapPointWriter) {
-      return heapSelect((HeapPointWriter) deltaPoints, left, right, dim, 0, (int) deltaPoints.count(), Math.toIntExact(newPartitionPoint), ++commonPrefix);
+      return heapPartition((HeapPointWriter) deltaPoints, left, right, dim, 0, (int) deltaPoints.count(), Math.toIntExact(newPartitionPoint), ++commonPrefix);
     } else {
       return buildHistogramAndPartition((OfflinePointWriter) deltaPoints, left, right, 0, deltaPoints.count(), newPartitionPoint, ++iteration, ++commonPrefix, dim);
     }
   }

-  private void partition(OfflinePointWriter points, PointWriter left, PointWriter right, PointWriter deltaPoints,
-                         long from, long to, int dim, int bytePosition, long numDocsTiebreak) throws IOException {
+  private int getMaxPointsSortInHeap(PointWriter left, PointWriter right) {
+    long pointsUsed = 0;
+    if (left instanceof HeapPointWriter) {
+      pointsUsed += left.count();
+    }
+    if (right instanceof HeapPointWriter) {
+      pointsUsed += right.count();
+    }
+    assert maxPointsSortInHeap >= pointsUsed;
+    return maxPointsSortInHeap - (int) pointsUsed;
 
 Review comment:
   can we check this cast?
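A hypothetical sketch (class and parameter names invented) of the budget computation under review, with the `long` to `int` narrowing checked via `Math.toIntExact` rather than a raw cast:

```java
// Remaining on-heap point budget = global maximum minus points the left
// and right writers already hold on heap, with checked narrowing.
public class HeapBudget {
    static int remaining(int maxPointsSortInHeap, long leftOnHeap, long rightOnHeap) {
        long pointsUsed = leftOnHeap + rightOnHeap;
        // Math.toIntExact throws ArithmeticException instead of silently
        // wrapping if pointsUsed ever exceeded Integer.MAX_VALUE.
        return maxPointsSortInHeap - Math.toIntExact(pointsUsed);
    }

    public static void main(String[] args) {
        // 1000-point budget, 300 + 200 already held on heap -> 500 remaining.
        System.out.println(remaining(1000, 300, 200)); // 500
    }
}
```

Since asserts are disabled in production, the checked conversion guards the invariant even outside test runs.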





[jira] [Updated] (LUCENE-8687) Optimise radix partitioning for points on heap

2019-02-08 Thread Ignacio Vera (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Vera updated LUCENE-8687:
-
Description: 
LUCENE-8673 introduced radix partitioning for merging segments. It currently 
works the same whether the data is offline or on-heap. When the data is 
on-heap, it makes sense not to keep multiple copies but to perform the 
partitioning always in the same object, similar to what is done with 
`MutablePointValues`. 

This will also allow holding more points in memory, because we don't keep 
multiple copies of the same data as we recurse.

  was:
In LUCENE-8673 it was introduced radix partitioning for merging segments. It 
currently works the same when you have data offline and or heap. It makes sense 
when data is on-heap, to not have multiple copies but perform the partitioning 
always in the same object, similar to what it is done with 
`MutablePointValues`. 

This will allow as well to hold more points in memory.


> Optimise radix partitioning for points on heap
> --
>
> Key: LUCENE-8687
> URL: https://issues.apache.org/jira/browse/LUCENE-8687
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ignacio Vera
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> LUCENE-8673 introduced radix partitioning for merging segments. It currently 
> works the same whether the data is offline or on-heap. When the data is 
> on-heap, it makes sense not to keep multiple copies but to perform the 
> partitioning always in the same object, similar to what is done with 
> `MutablePointValues`. 
> This will also allow holding more points in memory, because we don't keep 
> multiple copies of the same data as we recurse.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8682) Remove WordDelimiterFilter

2019-02-08 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16763426#comment-16763426
 ] 

Lucene/Solr QA commented on LUCENE-8682:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
6s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  0m 35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  0m 29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} Validate source patterns {color} | 
{color:red}  0m 29s{color} | {color:red} Validate source patterns 
validate-source-patterns failed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 55s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 40m 17s{color} 
| {color:red} core in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | lucene.analysis.core.TestBugInSomething |
|   | solr.spelling.SpellCheckCollatorTest |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | LUCENE-8682 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12957893/LUCENE-8682.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 4.4.0-137-generic #163~14.04.1-Ubuntu SMP Mon 
Sep 24 17:14:57 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-LUCENE-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / f2b8457 |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on July 24 2018 |
| Default Java | 1.8.0_191 |
| Validate source patterns | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/163/artifact/out/patch-validate-source-patterns-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/163/artifact/out/patch-unit-lucene_analysis_common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/163/artifact/out/patch-unit-solr_core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/163/testReport/ |
| modules | C: lucene/analysis/common solr/core U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/163/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Remove WordDelimiterFilter
> --
>
> Key: LUCENE-8682
> URL: https://issues.apache.org/jira/browse/LUCENE-8682
> Project: Lucene - Core
>  Issue Type: Task
>Affects Versions: master (9.0)
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8682.patch
>
>
> WordDelimiterFilter was deprecated a while back.  We can remove it entirely 
> from the master branch.






Re: [VOTE] Release Lucene/Solr 7.7.0 RC1

2019-02-08 Thread jim ferenczi
This vote has passed. Thanks everyone for voting, I will proceed with the
next steps and will announce the release on Monday.

On Thu, Feb 7, 2019 at 6:33 PM, Tomás Fernández Löbbe 
wrote:

> +1
> SUCCESS! [1:05:29.028759]
>
> On Thu, Feb 7, 2019 at 7:20 AM Uwe Schindler  wrote:
>
>> Oh, I forgot to give my explicit +1!
>>
>>
>>
>> -
>>
>> Uwe Schindler
>>
>> Achterdiek 19, D-28357 Bremen
>>
>> http://www.thetaphi.de
>>
>> eMail: u...@thetaphi.de
>>
>>
>>
>> *From:* Uwe Schindler 
>> *Sent:* Wednesday, February 6, 2019 5:11 PM
>> *To:* dev@lucene.apache.org
>> *Subject:* RE: [VOTE] Release Lucene/Solr 7.7.0 RC1
>>
>>
>>
>> Hi,
>>
>>
>>
>> I instructed Policeman Jenkins to do the release checks for me, and it
>> tested both Java 8 **and** Java 9:
>>
>> https://jenkins.thetaphi.de/job/Lucene-Solr-Release-Tester/11/consoleFull
>>
>>
>>
>> In short (full log, see above):
>>
>> SUCCESS! [2:22:08.689789]
>>
>>
>>
>> Finished: SUCCESS
>>
>>
>>
>> Personally, I also checked and downloaded the binary distributions on
>> Windows. Solr starts perfectly with Java 8 and Java 11 for me under Windows
>> 10. Changes files look fine.
>>
>>
>>
>> Uwe
>>
>>
>>
>> -
>>
>> Uwe Schindler
>>
>> Achterdiek 19, D-28357 Bremen
>>
>> http://www.thetaphi.de
>>
>> eMail: u...@thetaphi.de
>>
>>
>>
>> *From:* jim ferenczi 
>> *Sent:* Tuesday, February 5, 2019 9:34 AM
>> *To:* dev@lucene.apache.org
>> *Subject:* [VOTE] Release Lucene/Solr 7.7.0 RC1
>>
>>
>>
>> Please vote for release candidate 1 for Lucene/Solr 7.7.0
>>
>>
>>
>> The artifacts can be downloaded from:
>>
>>
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.7.0-RC1-rev8c831daf4eb41153c25ddb152501ab5bae3ea3d5
>>
>>
>>
>> You can run the smoke tester directly with this command:
>>
>>
>>
>> python3 -u dev-tools/scripts/smokeTestRelease.py \
>>
>>
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.7.0-RC1-rev8c831daf4eb41153c25ddb152501ab5bae3ea3d5
>>
>>
>>
>> Here's my +1
>>
>> SUCCESS! [1:08:39.903675]
>>
>


[jira] [Commented] (LUCENE-8673) Use radix partitioning when merging dimensional points

2019-02-08 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16763405#comment-16763405
 ] 

Adrien Grand commented on LUCENE-8673:
--

The indexing throughput jump is ridiculous. 
http://people.apache.org/~mikemccand/geobench.html#index-times :)

> Use radix partitioning when merging dimensional points
> --
>
> Key: LUCENE-8673
> URL: https://issues.apache.org/jira/browse/LUCENE-8673
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ignacio Vera
>Assignee: Ignacio Vera
>Priority: Major
> Fix For: master (9.0), 8.x
>
> Attachments: Geo3D.png, Geo3D.png, Geo3D.png, LatLonPoint.png, 
> LatLonPoint.png, LatLonPoint.png, LatLonShape.png, LatLonShape.png, 
> LatLonShape.png
>
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> Following the advice of [~jpountz] in LUCENE-8623, I have investigated using 
> radix selection when merging segments instead of sorting the data at the 
> beginning. The results are pretty promising when running Lucene geo 
> benchmarks:
>  
> ||Approach||Index time (sec): Dev||Index Time (sec): Base||Index Time: Diff||Force merge time (sec): Dev||Force Merge time (sec): Base||Force Merge Time: Diff||Index size (GB): Dev||Index size (GB): Base||Index Size: Diff||Reader heap (MB): Dev||Reader heap (MB): Base||Reader heap: Diff||
> |points|241.5s|235.0s| 3%|157.2s|157.9s|-0%|0.55|0.55| 0%|1.57|1.57| 0%|
> |shapes|416.1s|650.1s|-36%|306.1s|603.2s|-49%|1.29|1.29| 0%|1.61|1.61| 0%|
> |geo3d|261.0s|360.1s|-28%|170.2s|279.9s|-39%|0.75|0.75| 0%|1.58|1.58| 0%|
>  
> edited: table formatting to be a jira table
>  
> In 2D the index throughput is more or less equal, but for higher dimensions 
> the impact is quite big. In all cases the merging process requires much less 
> disk space; I am attaching plots showing the different behaviour and I am 
> opening a pull request.
>  
>  
>  


