[jira] [Commented] (LUCENE-8655) No possibility to access to the underlying "valueSource" of a FunctionScoreQuery

2019-02-04 Thread JIRA


[ 
https://issues.apache.org/jira/browse/LUCENE-8655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760544#comment-16760544
 ] 

Gérald Quaire commented on LUCENE-8655:
---

Hello [~romseygeek],

 

I think my patch is ready. How can I get it included in the next 7.7 
release of Solr? Thank you in advance for your help.

 

 

> No possibility to access to the underlying "valueSource" of a 
> FunctionScoreQuery 
> -
>
> Key: LUCENE-8655
> URL: https://issues.apache.org/jira/browse/LUCENE-8655
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.6
>Reporter: Gérald Quaire
>Priority: Major
>  Labels: patch
> Attachments: LUCENE-8655.patch, LUCENE-8655.patch
>
>
> After LUCENE-8099, "BoostedQuery" is deprecated in favour of 
> "FunctionScoreQuery". With BoostedQuery, it was possible to access its 
> underlying "valueSource", but that is not the case with 
> "FunctionScoreQuery", which has only a getter for the wrapped query. 
> For the development of specific parsers, it is necessary to access the 
> valueSource of a "FunctionScoreQuery". I suggest adding a new getter to the 
> class "FunctionScoreQuery", like below:
> {code:java}
>  /**
>    * @return the wrapped Query
>    */
>   public Query getWrappedQuery() {
>     return in;
>   }
>  /**
>    * @return the source of scores
>    */
>   public DoubleValuesSource getValueSource() {
>     return source;
>   }
> {code}
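For illustration, here is a self-contained sketch of how a custom parser could use the proposed getters. Note that `Query` and `DoubleValuesSource` below are minimal stand-in interfaces, not Lucene's real classes; this only demonstrates the accessor pattern the patch proposes:

```java
/** Illustrative sketch only: Query and DoubleValuesSource are minimal
 *  stand-ins, not Lucene's real classes. Shows how a custom parser could
 *  use the proposed getters to reach both halves of the wrapper. */
public class FunctionScoreSketch {
    interface Query {}
    interface DoubleValuesSource {}

    static class FunctionScoreQuery implements Query {
        private final Query in;                  // the wrapped query
        private final DoubleValuesSource source; // the source of scores
        FunctionScoreQuery(Query in, DoubleValuesSource source) {
            this.in = in;
            this.source = source;
        }
        public Query getWrappedQuery() { return in; }
        public DoubleValuesSource getValueSource() { return source; }
    }

    /** A parser that needs the boost source, e.g. to serialize or rewrite it. */
    static DoubleValuesSource extractBoost(Query q) {
        if (q instanceof FunctionScoreQuery) {
            // only possible once a getValueSource() accessor exists
            return ((FunctionScoreQuery) q).getValueSource();
        }
        return null;
    }

    public static void main(String[] args) {
        DoubleValuesSource boost = new DoubleValuesSource() {};
        Query wrapped = new FunctionScoreQuery(new Query() {}, boost);
        System.out.println(extractBoost(wrapped) == boost); // true
    }
}
```

Without the getter, `extractBoost` would have no way to recover the score source from an opaque wrapper, which is exactly the limitation described above.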



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-master #2482: POMs out of sync

2019-02-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-master/2482/

No tests ran.

Build Log:
[...truncated 32157 lines...]
BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/build.xml:679: 
The following error occurred while executing this line:
: Java returned: 1

Total time: 29 minutes 13 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[jira] [Commented] (LUCENE-8673) Use radix partitioning when merging dimensional points

2019-02-04 Thread Ignacio Vera (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760510#comment-16760510
 ] 

Ignacio Vera commented on LUCENE-8673:
--

I have made the changes recommended by [~jpountz] and replaced the on-heap 
selector, moving from an IntroSelector to a RadixSelector. This has provided 
another nice bump in performance. The benchmarks now look like this:


||Approach||Index time (sec): Dev||Index Time (sec): Base||Index Time: 
Diff||Force merge time (sec): Dev||Force Merge time (sec): Base||Force Merge 
Time: Diff||Index size (GB): Dev||Index size (GB): Base||Index Size: 
Diff||Reader heap (MB): Dev||Reader heap (MB): Base||Reader heap: Diff||
|points|182.2s|227.9s|-20%|90.4s|143.1s|-37%|0.55|0.55| 0%|1.57|1.57| 0%|
|shapes|297.0s|624.4s|-52%|163.8s|549.3s|-70%|1.29|1.29| 0%|1.61|1.61| 0%|
|geo3d|210.3s|370.1s|-43%|104.3s|265.3s|-61%|0.75|0.75| 0%|1.58|1.58| 0%|
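The radix-selection idea behind these numbers can be sketched in a few lines. The following is a minimal illustrative version, not Lucene's implementation (which operates on byte-encoded points and avoids extra allocation): it selects the k-th smallest int by histogramming one byte at a time, most significant byte first, and recursing only into the bucket that contains the target rank:

```java
import java.util.Arrays;

public class RadixSelectSketch {
    // Byte of v at the given shift, with the sign bit flipped so that
    // unsigned byte order matches signed int order.
    static int bucket(int v, int shift) {
        return ((v ^ 0x80000000) >>> shift) & 0xFF;
    }

    /** Return the k-th smallest (0-based) value of a; start with shift = 24. */
    static int select(int[] a, int k, int shift) {
        if (a.length <= 1 || shift < 0) { // tiny range or all bytes consumed
            Arrays.sort(a);
            return a[k];
        }
        int[] counts = new int[256];      // histogram of the current byte
        for (int v : a) counts[bucket(v, shift)]++;
        int b = 0, skipped = 0;           // find the bucket holding rank k
        while (skipped + counts[b] <= k) skipped += counts[b++];
        int[] sub = new int[counts[b]];   // recurse only into that bucket
        int j = 0;
        for (int v : a) if (bucket(v, shift) == b) sub[j++] = v;
        return select(sub, k - skipped, shift - 8);
    }

    public static void main(String[] args) {
        int[] data = {7, -3, 42, 0, -3, 5};
        System.out.println(select(data, 2, 24)); // prints 0 (3rd smallest)
    }
}
```

Unlike a full comparison sort of the range, work outside the target bucket stops at the counting pass, which is what makes radix selection attractive when partitioning points during merge.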

 
 

 

> Use radix partitioning when merging dimensional points
> --
>
> Key: LUCENE-8673
> URL: https://issues.apache.org/jira/browse/LUCENE-8673
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ignacio Vera
>Priority: Major
> Attachments: Geo3D.png, Geo3D.png, LatLonPoint.png, LatLonPoint.png, 
> LatLonShape.png, LatLonShape.png
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Following the advice of [~jpountz] in LUCENE-8623, I have investigated using 
> radix selection when merging segments instead of sorting the data at the 
> beginning. The results are pretty promising when running the Lucene geo 
> benchmarks:
>  
> ||Approach||Index time (sec): Dev||Index Time (sec): Base||Index Time: 
> Diff||Force merge time (sec): Dev||Force Merge time (sec): Base||Force Merge 
> Time: Diff||Index size (GB): Dev||Index size (GB): Base||Index Size: 
> Diff||Reader heap (MB): Dev||Reader heap (MB): Base||Reader heap: Diff
> |points|241.5s|235.0s| 3%|157.2s|157.9s|-0%|0.55|0.55| 0%|1.57|1.57| 0%|
> |shapes|416.1s|650.1s|-36%|306.1s|603.2s|-49%|1.29|1.29| 0%|1.61|1.61| 0%|
> |geo3d|261.0s|360.1s|-28%|170.2s|279.9s|-39%|0.75|0.75| 0%|1.58|1.58| 0%|
>  
> edited: table formatting to be a jira table
>  
> In 2D the index throughput is more or less equal, but for higher dimensions 
> the impact is quite big. In all cases the merging process requires much less 
> disk space. I am attaching plots showing the different behaviour and opening 
> a pull request.
>  
>  
>  






[jira] [Commented] (SOLR-6741) IPv6 Field Type

2019-02-04 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-6741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760491#comment-16760491
 ] 

David Smiley commented on SOLR-6741:


Thanks for volunteering, Dale!  Don't worry about the ability to assign 
yourself; it's only possible for committers and certain "contributors". It's 
enough to simply declare that you're going to work on it.

> IPv6 Field Type
> ---
>
> Key: SOLR-6741
> URL: https://issues.apache.org/jira/browse/SOLR-6741
> Project: Solr
>  Issue Type: Improvement
>Reporter: Lloyd Ramey
>Priority: Major
> Attachments: SOLR-6741.patch
>
>
> It would be nice if Solr had a field type which could be used to index IPv6 
> data and supported efficient range queries. 
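One plausible approach (a sketch only, not an actual Solr field type) is to encode each address as its 16-byte big-endian form: unsigned lexicographic order on those bytes equals numeric address order, so range queries reduce to byte-range comparisons, which is exactly what a points or terms index can serve efficiently:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class Ipv6BytesSketch {
    /** 16-byte big-endian encoding of an IPv6 literal. */
    static byte[] toBytes(String addr) {
        try {
            byte[] b = InetAddress.getByName(addr).getAddress();
            if (b.length != 16) throw new IllegalArgumentException("not IPv6: " + addr);
            return b;
        } catch (UnknownHostException e) {
            throw new IllegalArgumentException(addr, e);
        }
    }

    /** Unsigned lexicographic comparison; matches numeric address order. */
    static int compare(byte[] a, byte[] b) {
        for (int i = 0; i < 16; i++) {
            int d = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (d != 0) return d;
        }
        return 0;
    }

    /** True if addr lies in the inclusive range [lo, hi]. */
    static boolean inRange(String addr, String lo, String hi) {
        byte[] v = toBytes(addr);
        return compare(v, toBytes(lo)) >= 0 && compare(v, toBytes(hi)) <= 0;
    }

    public static void main(String[] args) {
        System.out.println(inRange("2001:db8::42", "2001:db8::", "2001:db8::ff")); // true
    }
}
```

A real field type would hand these 16-byte values to the index rather than compare them in memory, but the encoding invariant shown here is what makes efficient range queries possible.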






[JENKINS] Lucene-Solr-SmokeRelease-8.x - Build # 17 - Still Failing

2019-02-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-8.x/17/

No tests ran.

Build Log:
[...truncated 23465 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2467 links (2018 relative) to 3229 anchors in 247 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/package/solr-8.0.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

[...truncated: repeated resolve / ivy-availability-check / ivy-configure blocks...]

[JENKINS-EA] Lucene-Solr-7.x-Windows (64bit/jdk-12-ea+23) - Build # 984 - Unstable!

2019-02-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/984/
Java: 64bit/jdk-12-ea+23 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication

Error Message:
timed out waiting for collection1 startAt time to exceed: Tue Feb 05 06:26:08 
EET 2019

Stack Trace:
java.lang.AssertionError: timed out waiting for collection1 startAt time to 
exceed: Tue Feb 05 06:26:08 EET 2019
at 
__randomizedtesting.SeedInfo.seed([7B6B57256E842790:8C18B97DA86C8876]:0)
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.solr.handler.TestReplicationHandler.watchCoreStartAt(TestReplicationHandler.java:1626)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication(TestReplicationHandler.java:1396)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:835)


FAILED:  
org.apache.solr.handler.component.InfixSuggestersTest.testReloadDuringBuild

Error Message:

[JENKINS] Lucene-Solr-NightlyTests-7.7 - Build # 4 - Unstable

2019-02-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.7/4/

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.autoscaling.ExecutePlanActionTest

Error Message:
ObjectTracker found 2 object(s) that were not released!!! [ZkStateReader, 
SolrZkClient] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.common.cloud.ZkStateReader  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.common.cloud.ZkStateReader.<init>(ZkStateReader.java:328)  
at 
org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider.connect(ZkClientClusterStateProvider.java:160)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.connect(CloudSolrClient.java:399)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:827)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)  at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)  at 
org.apache.solr.client.solrj.impl.SolrClientCloudManager.request(SolrClientCloudManager.java:115)
  at 
org.apache.solr.cloud.autoscaling.ExecutePlanAction.waitForTaskToFinish(ExecutePlanAction.java:132)
  at 
org.apache.solr.cloud.autoscaling.ExecutePlanAction.process(ExecutePlanAction.java:85)
  at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers.lambda$null$3(ScheduledTriggers.java:325)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)  
at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.common.cloud.SolrZkClient  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:203)  
at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:126)  at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:116)  at 
org.apache.solr.common.cloud.ZkStateReader.<init>(ZkStateReader.java:306)  at 
org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider.connect(ZkClientClusterStateProvider.java:160)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.connect(CloudSolrClient.java:399)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:827)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)  at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)  at 
org.apache.solr.client.solrj.impl.SolrClientCloudManager.request(SolrClientCloudManager.java:115)
  at 
org.apache.solr.cloud.autoscaling.ExecutePlanAction.waitForTaskToFinish(ExecutePlanAction.java:132)
  at 
org.apache.solr.cloud.autoscaling.ExecutePlanAction.process(ExecutePlanAction.java:85)
  at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers.lambda$null$3(ScheduledTriggers.java:325)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)  
at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)   expected null, but 
was:(ZkStateReader.java:328)  
at 
org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider.connect(ZkClientClusterStateProvider.java:160)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.connect(CloudSolrClient.java:399)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:827)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)  at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)  at 
org.apache.solr.client.solrj.impl.SolrClientCloudManager.request(SolrClientCloudManager.java:115)
  at 
org.apache.solr.cloud.autoscaling.ExecutePlanAction.waitForTaskToFinish(ExecutePlanAction.java:132)
  at 
org.apache.solr.cloud.autoscaling.ExecutePlanAction.process(ExecutePlanAction.java:85)
  at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers.lambda$null$3(ScheduledTriggers.java:325)
  at 

[jira] [Commented] (SOLR-5007) TestRecoveryHdfs seems to be leaking a thread occasionally that ends up failing a completely different test.

2019-02-04 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760339#comment-16760339
 ] 

Kevin Risden commented on SOLR-5007:


Well, this uncovered a bunch of other code/tests that aren't doing the right 
thing. Backup/restore doesn't seem to close things properly. HdfsLockFactory 
doesn't close the FileSystem it creates. I plan to look at this more, but there 
are some definite resource management issues here that have been covered up for 
a long time.

> TestRecoveryHdfs seems to be leaking a thread occasionally that ends up 
> failing a completely different test.
> 
>
> Key: SOLR-5007
> URL: https://issues.apache.org/jira/browse/SOLR-5007
> Project: Solr
>  Issue Type: Test
>  Components: Hadoop Integration, hdfs, Tests
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Major
> Attachments: SOLR-5007.patch
>
>







[jira] [Commented] (SOLR-13213) Search Components cannot modify "shards" parameter

2019-02-04 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-13213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760296#comment-16760296
 ] 

Jan Høydahl commented on SOLR-13213:


Yeah, I first tried to move the shardHandler init below prepare(), but that 
broke some logic in DebugComponent. And even if the tests pass by prepping 
isDistrib, I guess that custom code may assume some ordering. Although this 
type of logic (change some query params, amend the result) is a perfect fit 
for a SearchComponent, I may agree with you that, due to the chicken-and-egg 
nature of isDistrib, it may be safer to set this in a custom SearchHandler, 
although that beast is not as nicely extensible as components, i.e. 
handleRequestBody is one huge code block. I'll try that, and if successful 
I'll close this jira.
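The ordering problem can be illustrated with a toy model. All class and method names here are hypothetical stand-ins, not Solr's actual API: if the handler snapshots the "shards" parameter to build its shard handler before the components' prepare() phase runs, a component's modification is silently ignored, while reversing the order would let it take effect:

```java
import java.util.HashMap;
import java.util.Map;

/** Toy model (names are hypothetical, not Solr's API) of the ordering
 *  problem: whichever value of "shards" is read first wins. */
public class ShardsOrderingDemo {
    interface Component { void prepare(Map<String, String> params); }

    /** Current order: snapshot "shards" first, then run the component. */
    static String buildThenPrepare(Map<String, String> params, Component c) {
        String shards = params.get("shards"); // shard handler built from this
        c.prepare(params);                    // component's edit comes too late
        return shards;
    }

    /** Hypothetical reversed order: component runs before the snapshot. */
    static String prepareThenBuild(Map<String, String> params, Component c) {
        c.prepare(params);
        return params.get("shards");
    }

    public static void main(String[] args) {
        Component rewrite = p -> p.put("shards", "shardB");
        Map<String, String> p1 = new HashMap<>();
        p1.put("shards", "shardA");
        System.out.println(buildThenPrepare(p1, rewrite)); // shardA: edit lost
        Map<String, String> p2 = new HashMap<>();
        p2.put("shards", "shardA");
        System.out.println(prepareThenBuild(p2, rewrite)); // shardB
    }
}
```

As the comment above notes, simply reversing the order broke DebugComponent, which is why a custom SearchHandler may be the safer place to set this up.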

> Search Components cannot modify "shards" parameter
> --
>
> Key: SOLR-13213
> URL: https://issues.apache.org/jira/browse/SOLR-13213
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.x
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When creating a custom search component for a customer, I realised that 
> modifying "shards" parameter in {{prepare()}} is not possible since in 
> {{SearchHandler}}, the {{ShardHandler}} is initialised based on "shards" 
> parameter just *before* search components are consulted.






[jira] [Updated] (SOLR-13217) collapse parser with /export request handler throws NPE

2019-02-04 Thread Rahul Goswami (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rahul Goswami updated SOLR-13217:
-
Description: 
A NullPointerException is thrown when trying to use the /export handler with a 
search() streaming expression containing an fq which uses the collapse parser. 
Below is the format of the complete query:

http://localhost:8983/solr/mycollection/stream/?expr=search(mycollection,sort="field1 asc,field2 asc",fl="fileld1,field2,field3",qt="/export",q="*:*",fq="((field4:1) OR (field4:2))",fq="{!collapse field=id_field sort='field3 desc'}")

 

I confirmed that the collapse parser is the problem by removing all other 
filter queries and retaining only the collapse filter query. The stack trace is 
below: 

 

org.apache.solr.servlet.HttpSolrCall null:java.lang.NullPointerException
 at org.apache.lucene.util.BitSetIterator.<init>(BitSetIterator.java:61)
 at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:243)
 at org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:222)
 at 
org.apache.solr.response.JSONWriter.writeIterator(JSONResponseWriter.java:523)
 at 
org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:180)
 at org.apache.solr.response.JSONWriter$2.put(JSONResponseWriter.java:559)
 at org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:222)
 at org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547)
 at 
org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:198)
 at org.apache.solr.response.JSONWriter$2.put(JSONResponseWriter.java:559)
 at org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:220)
 at org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547)
 at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:218)
 at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2627)
 at 
org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)
 at org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
 at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
 at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
 at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
 at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
 at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
 at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
 at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
 at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
 at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
 at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
 at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
 at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
 at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
 at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
 at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
 at org.eclipse.jetty.server.Server.handle(Server.java:539)
 at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
 at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
 at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
 at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
 at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
 at java.base/java.lang.Thread.run(Thread.java:834)

  was:
A NullPointerException is obtained when trying to use the /export handler with 
a search() streaming expression containing an fq which uses collapse parser. 
Below is the format of the complete query:

 

[jira] [Updated] (SOLR-13217) collapse parser with /export request handler throws NPE

2019-02-04 Thread Rahul Goswami (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rahul Goswami updated SOLR-13217:
-
Description: 
A NullPointerException is thrown when trying to use the /export handler with a 
search() streaming expression containing an fq which uses collapse parser. 
Below is the format of the complete query:

 [http://localhost:8983/solr/mycollection/stream/?expr=search(mycollection] 
,sort="field1 asc,field2 
asc",fl="fileld1,field2,field3",qt="/export",q="*:*",fq="((field4:1) OR 
(field4:2))",fq="

{!collapse field=id_field sort='field3 desc'}")

 

I made sure that collapse parser here is the problem by removing all other 
filter queries and retaining only collapse filter query. The stacktrace is as 
below: 

 

org.apache.solr.servlet.HttpSolrCall null:java.lang.NullPointerException
 at org.apache.lucene.util.BitSetIterator.(BitSetIterator.java:61)
 at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:243)
 at org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:222)
 at 
org.apache.solr.response.JSONWriter.writeIterator(JSONResponseWriter.java:523)
 at 
org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:180)
 at org.apache.solr.response.JSONWriter$2.put(JSONResponseWriter.java:559)
 at org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:222)
 at org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547)
 at 
org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:198)
 at org.apache.solr.response.JSONWriter$2.put(JSONResponseWriter.java:559)
 at org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:220)
 at org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547)
 at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:218)
 at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2627)
 at 
org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)
 at org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
 at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
 at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
 at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
 at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
 at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
 at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
 at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
 at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
 at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
 at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
 at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
 at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
 at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
 at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
 at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
 at org.eclipse.jetty.server.Server.handle(Server.java:539)
 at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
 at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
 at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
 at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
 at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
 at java.base/java.lang.Thread.run(Thread.java:834)

  was:
A NullPointerException is thrown when trying to use the /export handler with a 
search() streaming expression containing an fq which uses the collapse parser. 
Below is the format of the complete query:

 [http://localhost:8983/solr/mycollection/stream/?expr=search(mycollection] 

[jira] [Commented] (SOLR-13213) Search Components cannot modify "shards" parameter

2019-02-04 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-13213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760290#comment-16760290
 ] 

Tomás Fernández Löbbe commented on SOLR-13213:
--

I'm wondering if this is the right move. Components will now lack all the 
information added by {{prepDistributed}} (which may be a breaking change for 
existing custom components). Also, looking at this code in 
{{SearchHandler.getAndPrepShardHandler}}:
{code:java}
if (rb.isDistrib) {
  shardHandler = shardHandlerFactory.getShardHandler();
  shardHandler.prepDistributed(rb);
  if (!rb.isDistrib) {
    shardHandler = null; // request is not distributed after all and so the shard handler is not needed
  }
}
{code}
Looks like {{prepDistributed}} can change the value of {{rb.isDistrib}}, and in 
this case the components would have gotten a wrong value. I'm not saying we 
shouldn't do the change, but it does look like a breaking one. Maybe the right 
move in this case is to have a custom SearchHandler that implements the logic 
to decide distrib=true/false, and then call {{super()}}?  Or a custom 
{{ShardHandlerFactory}}?
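
That last suggestion can be sketched as a subclass that fixes up the {{distrib}} parameter before delegating. This is an illustrative, untested sketch against Solr's plugin API; {{CustomSearchHandler}} and {{shouldDistribute}} are hypothetical names, and the decision logic is a placeholder:
{code:java}
// Hypothetical sketch only: the class name and shouldDistribute() are invented
// for illustration; SearchHandler, ModifiableSolrParams and CommonParams are
// the real Solr classes.
public class CustomSearchHandler extends SearchHandler {
  @Override
  public void handleRequestBody(SolrQueryRequest req, SolrQueryResponse rsp) throws Exception {
    ModifiableSolrParams params = new ModifiableSolrParams(req.getParams());
    // decide distrib=true/false up front, then let SearchHandler do the rest
    params.set(CommonParams.DISTRIB, Boolean.toString(shouldDistribute(req)));
    req.setParams(params);
    super.handleRequestBody(req, rsp);
  }

  private boolean shouldDistribute(SolrQueryRequest req) {
    return true; // placeholder for the custom decision logic
  }
}
{code}
The alternative, a custom {{ShardHandlerFactory}}, would instead hook in below {{SearchHandler}}, after the components have already run.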

> Search Components cannot modify "shards" parameter
> --
>
> Key: SOLR-13213
> URL: https://issues.apache.org/jira/browse/SOLR-13213
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.x
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When creating a custom search component for a customer, I realised that 
> modifying the "shards" parameter in {{prepare()}} is not possible, since in 
> {{SearchHandler}} the {{ShardHandler}} is initialised based on the "shards" 
> parameter just *before* search components are consulted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13217) collapse parser with /export request handler throws NPE

2019-02-04 Thread Rahul Goswami (JIRA)
Rahul Goswami created SOLR-13217:


 Summary: collapse parser with /export request handler throws NPE
 Key: SOLR-13217
 URL: https://issues.apache.org/jira/browse/SOLR-13217
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 7.2.1
Reporter: Rahul Goswami


A NullPointerException is thrown when trying to use the /export handler with 
a search() streaming expression containing an fq which uses the collapse 
parser. Below is the format of the complete query:

 [http://localhost:8983/solr/mycollection/stream/?expr=search(mycollection] 
,sort="field1 asc,field2 
asc",fl="fileld1,field2,field3",qt="/export",q="*:*",fq="((field4:1) OR 
(field4:2))",fq="{!collapse field=id_field sort='field3 desc'}")

 

I confirmed that the collapse parser is the problem by removing all other 
filter queries and retaining only the collapse filter query. The stack trace is 
as below: 

 

org.apache.solr.servlet.HttpSolrCall null:java.lang.NullPointerException
 at org.apache.lucene.util.BitSetIterator.<init>(BitSetIterator.java:61)
 at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:243)
 at org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:222)
 at 
org.apache.solr.response.JSONWriter.writeIterator(JSONResponseWriter.java:523)
 at 
org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:180)
 at org.apache.solr.response.JSONWriter$2.put(JSONResponseWriter.java:559)
 at org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:222)
 at org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547)
 at 
org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:198)
 at org.apache.solr.response.JSONWriter$2.put(JSONResponseWriter.java:559)
 at org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:220)
 at org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547)
 at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:218)
 at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2627)
 at 
org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)
 at org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
 at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
 at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
 at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
 at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
 at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
 at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
 at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
 at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
 at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
 at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
 at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
 at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
 at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
 at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
 at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
 at org.eclipse.jetty.server.Server.handle(Server.java:539)
 at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
 at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
 at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
 at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
 at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
 at java.base/java.lang.Thread.run(Thread.java:834)



--
This message was sent by Atlassian JIRA

[JENKINS] Lucene-Solr-NightlyTests-8.x - Build # 15 - Still Unstable

2019-02-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.x/15/

1 tests failed.
FAILED:  org.apache.solr.cloud.api.collections.ShardSplitTest.test

Error Message:
SPLITSHARD was not successful even after three tries

Stack Trace:
java.lang.AssertionError: SPLITSHARD was not successful even after three tries
at 
__randomizedtesting.SeedInfo.seed([8038F3970DE17F9E:86CCC4DA31D1266]:0)
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.solr.cloud.api.collections.ShardSplitTest.trySplit(ShardSplitTest.java:946)
at 
org.apache.solr.cloud.api.collections.ShardSplitTest.splitByRouteKeyTest(ShardSplitTest.java:920)
at 
org.apache.solr.cloud.api.collections.ShardSplitTest.test(ShardSplitTest.java:113)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_172) - Build # 23623 - Unstable!

2019-02-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23623/
Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.cloud.TestCloudSearcherWarming.testRepFactor1LeaderStartup

Error Message:
No live SolrServers available to handle this request

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:343)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:213)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1110)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:224)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.deleteAllCollections(MiniSolrCloudCluster.java:517)
at 
org.apache.solr.cloud.TestCloudSearcherWarming.tearDown(TestCloudSearcherWarming.java:78)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[GitHub] mikemccand commented on a change in pull request #562: Don't create a LeafCollector when the Scorer for the leaf is null

2019-02-04 Thread GitBox
mikemccand commented on a change in pull request #562: Don't create a 
LeafCollector when the Scorer for the leaf is null
URL: https://github.com/apache/lucene-solr/pull/562#discussion_r253665853
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/search/IndexSearcher.java
 ##
 @@ -638,21 +638,16 @@ protected void search(List leaves, 
Weight weight, Collector c
 // threaded...?  the Collector could be sync'd?
 // always use single thread:
 for (LeafReaderContext ctx : leaves) { // search each subreader
-  final LeafCollector leafCollector;
-  try {
-leafCollector = collector.getLeafCollector(ctx);
-  } catch (CollectionTerminatedException e) {
-// there is no doc of interest in this reader context
-// continue with the following leaf
-continue;
-  }
   BulkScorer scorer = weight.bulkScorer(ctx);
 
 Review comment:
   Should we also update the javadocs for `Collector.getLeafCollector` to note 
that it will (may?) not be called when the query can determine there will be no 
hits in this segment?
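
For reference, the existing per-segment opt-out works by throwing {{CollectionTerminatedException}} from the collector. A minimal illustrative sketch against Lucene's {{SimpleCollector}} (the class name and the emptiness check are hypothetical):
{code:java}
// Illustrative only: a collector that skips segments it can prove are empty.
public class SkippingCollector extends SimpleCollector {
  @Override
  protected void doSetNextReader(LeafReaderContext context) throws IOException {
    if (hasNoCandidates(context)) {
      // IndexSearcher catches this and moves on to the next leaf
      throw new CollectionTerminatedException();
    }
  }

  @Override
  public void collect(int doc) {
    // no-op for this sketch
  }

  @Override
  public ScoreMode scoreMode() {
    return ScoreMode.COMPLETE_NO_SCORES;
  }

  private boolean hasNoCandidates(LeafReaderContext context) {
    return false; // hypothetical per-segment check
  }
}
{code}
With the change under review, a null {{BulkScorer}} would skip the leaf even earlier, before {{getLeafCollector}} is called at all.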


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5007) TestRecoveryHdfs seems to be leaking a thread occasionally that ends up failing a completely different test.

2019-02-04 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-5007:
---
Attachment: SOLR-5007.patch

> TestRecoveryHdfs seems to be leaking a thread occasionally that ends up 
> failing a completely different test.
> 
>
> Key: SOLR-5007
> URL: https://issues.apache.org/jira/browse/SOLR-5007
> Project: Solr
>  Issue Type: Test
>  Components: Hadoop Integration, hdfs, Tests
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Major
> Attachments: SOLR-5007.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5007) TestRecoveryHdfs seems to be leaking a thread occasionally that ends up failing a completely different test.

2019-02-04 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760202#comment-16760202
 ] 

Kevin Risden commented on SOLR-5007:


I think I found the underlying issue: FileContext.getFileContext doesn't close 
the underlying filesystem that gets created. There is now a rename method on 
the FileSystem class, which does get closed correctly. I am trying to track 
down when the rename method was added to the FileSystem class.

 

I need to run all the HDFS-related tests to make sure nothing is broken, but 
this fixed the IPC thread issue with a few of the tests (TestRecoveryHdfs, 
HdfsDirectoryFactoryTest, HdfsDirectoryTest). 
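
A sketch of the two patterns, assuming Hadoop's filesystem API (the paths and configuration objects are illustrative):
{code:java}
// Leaky pattern: FileContext.getFileContext(conf) creates an internal
// AbstractFileSystem that the caller has no handle to close, which can
// leave IPC client threads alive after the test finishes.
FileContext fc = FileContext.getFileContext(conf);
fc.rename(src, dst, Options.Rename.OVERWRITE);

// Closeable pattern: obtain the FileSystem explicitly and close it
// (e.g. with try-with-resources), using FileSystem.rename instead.
try (FileSystem fs = FileSystem.newInstance(conf)) {
  fs.rename(src, dst);
}
{code}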

> TestRecoveryHdfs seems to be leaking a thread occasionally that ends up 
> failing a completely different test.
> 
>
> Key: SOLR-5007
> URL: https://issues.apache.org/jira/browse/SOLR-5007
> Project: Solr
>  Issue Type: Test
>  Components: Hadoop Integration, hdfs, Tests
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Major
> Attachments: SOLR-5007.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-8635) Lazy loading Lucene FST offheap using mmap

2019-02-04 Thread Ankit Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760118#comment-16760118
 ] 

Ankit Jain edited comment on LUCENE-8635 at 2/4/19 7:35 PM:


I have created a [pull request|https://github.com/apache/lucene-solr/pull/563] 
with the proposed changes. Surprisingly, though, I still see some impact on 
PKLookup performance. This does not make sense to me; it might be my perf run 
setup.

{code:title=wikimedium10m|borderStyle=solid}
Task  QPS baseline  StdDev  QPS candidate  StdDev  Pct diff
PKLookup  117.45  (2.2%)  108.72  (2.3%)  -7.4% ( -11% -  -3%)
OrHighNotMed 1094.23  (2.5%) 1057.88  (2.7%)  -3.3% (  -8% -   1%)
OrHighNotLow 1047.30  (1.7%) 1012.91  (2.5%)  -3.3% (  -7% -   1%)
Fuzzy2   44.10  (2.3%)   42.71  (2.7%)  -3.2% (  -7% -   1%)
OrNotHighLow 1022.67  (2.5%)  992.28  (2.4%)  -3.0% (  -7% -   1%)
BrowseDayOfYearTaxoFacets 7907.19  (2.0%) 7677.99  (2.7%)  -2.9% (  -7% -   1%)
OrNotHighMed  866.37  (1.9%)  843.10  (2.3%)  -2.7% (  -6% -   1%)
LowTerm 2103.58  (3.5%) 2048.98  (3.6%)  -2.6% (  -9% -   4%)
BrowseMonthTaxoFacets 7883.86  (2.0%) 7692.48  (2.1%)  -2.4% (  -6% -   1%)
Fuzzy1   64.44  (1.9%)   62.88  (2.3%)  -2.4% (  -6% -   1%)
OrNotHighHigh  779.27  (2.0%)  761.04  (2.1%)  -2.3% (  -6% -   1%)
Respell   55.60  (2.6%)   54.34  (2.3%)  -2.3% (  -7% -   2%)
OrHighNotHigh  877.28  (2.2%)  858.10  (2.5%)  -2.2% (  -6% -   2%)
BrowseMonthSSDVFacets   14.85  (7.9%)   14.57 (10.7%)  -1.9% ( -18% -  18%)
MedTerm 1984.26  (3.6%) 1947.76  (2.3%)  -1.8% (  -7% -   4%)
AndHighLow  718.71  (1.5%)  706.06  (1.6%)  -1.8% (  -4% -   1%)
OrHighLow  523.40  (2.5%)  515.56  (2.4%)  -1.5% (  -6% -   3%)
HighTerm 1381.10  (2.9%) 1360.80  (2.7%)  -1.5% (  -6% -   4%)
HighTermMonthSort  120.45 (12.3%)  119.00 (16.4%)  -1.2% ( -26% -  31%)
BrowseDayOfYearSSDVFacets   11.55  (9.7%)   11.45 (10.0%)  -0.8% ( -18% -  20%)
AndHighMed  155.15  (2.6%)  154.25  (2.4%)  -0.6% (  -5% -   4%)
OrHighMed   88.00  (2.5%)   87.85  (2.7%)  -0.2% (  -5% -   5%)
LowPhrase   80.53  (1.6%)   80.40  (1.4%)  -0.2% (  -3% -   2%)
AndHighHigh   41.91  (4.2%)   41.86  (2.9%)  -0.1% (  -6% -   7%)
MedPhrase   46.29  (1.4%)   46.33  (1.5%)   0.1% (  -2% -   3%)
IntNRQ  127.54  (0.4%)  127.76  (0.4%)   0.2% (   0% -   1%)
HighTermDayOfYearSort   48.59  (5.1%)   48.71  (6.0%)   0.2% ( -10% -  12%)
LowSloppyPhrase   13.04  (4.0%)   13.08  (4.3%)   0.3% (  -7% -   8%)
MedSloppyPhrase   19.48  (2.3%)   19.54  (2.4%)   0.3% (  -4% -   5%)
OrHighHigh   23.60  (3.0%)   23.68  (2.9%)   0.3% (  -5% -   6%)
HighPhrase   20.25  (2.4%)   20.32  (1.8%)   0.3% (  -3% -   4%)
HighSloppyPhrase    9.29  (3.3%)    9.32  (3.2%)   0.4% (  -5% -   7%)
LowSpanNear   25.70  (3.8%)   25.89  (3.9%)   0.7% (  -6% -   8%)
MedSpanNear   30.46  (4.1%)   30.69  (4.3%)   0.7% (  -7% -   9%)
HighSpanNear   14.41  (4.3%)   14.60  (4.7%)   1.3% (  -7% -  10%)
Wildcard   70.08 (10.3%)   71.09  (6.1%)   1.4% ( -13% -  19%)
BrowseDateTaxoFacets    2.37  (0.2%)    2.41  (0.3%)   1.5% (   0% -   1%)
Prefix3   86.71 (11.4%)   89.04  (6.8%)   2.7% ( -13% -  23%)
{code}


was (Author: akjain):
I have created a [pull request|https://github.com/apache/lucene-solr/pull/563] 
with the proposed changes. Surprisingly, though, I still see some impact on 
PKLookup performance.

{code:title=wikimedium10m|borderStyle=solid}
Task  QPS baseline  StdDev  QPS candidate  StdDev  Pct diff
PKLookup  117.45  (2.2%)  108.72  (2.3%)  -7.4% ( -11% -  -3%)
OrHighNotMed 1094.23  (2.5%) 1057.88  (2.7%)  -3.3% (  -8% -   1%)
OrHighNotLow 1047.30  (1.7%) 

[jira] [Commented] (LUCENE-8635) Lazy loading Lucene FST offheap using mmap

2019-02-04 Thread Ankit Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760118#comment-16760118
 ] 

Ankit Jain commented on LUCENE-8635:


I have created a [pull request|https://github.com/apache/lucene-solr/pull/563] 
with the proposed changes. Surprisingly, though, I still see some impact on 
PKLookup performance.

{code:title=wikimedium10m|borderStyle=solid}
Task  QPS baseline  StdDev  QPS candidate  StdDev  Pct diff
PKLookup  117.45  (2.2%)  108.72  (2.3%)  -7.4% ( -11% -  -3%)
OrHighNotMed 1094.23  (2.5%) 1057.88  (2.7%)  -3.3% (  -8% -   1%)
OrHighNotLow 1047.30  (1.7%) 1012.91  (2.5%)  -3.3% (  -7% -   1%)
Fuzzy2   44.10  (2.3%)   42.71  (2.7%)  -3.2% (  -7% -   1%)
OrNotHighLow 1022.67  (2.5%)  992.28  (2.4%)  -3.0% (  -7% -   1%)
BrowseDayOfYearTaxoFacets 7907.19  (2.0%) 7677.99  (2.7%)  -2.9% (  -7% -   1%)
OrNotHighMed  866.37  (1.9%)  843.10  (2.3%)  -2.7% (  -6% -   1%)
LowTerm 2103.58  (3.5%) 2048.98  (3.6%)  -2.6% (  -9% -   4%)
BrowseMonthTaxoFacets 7883.86  (2.0%) 7692.48  (2.1%)  -2.4% (  -6% -   1%)
Fuzzy1   64.44  (1.9%)   62.88  (2.3%)  -2.4% (  -6% -   1%)
OrNotHighHigh  779.27  (2.0%)  761.04  (2.1%)  -2.3% (  -6% -   1%)
Respell   55.60  (2.6%)   54.34  (2.3%)  -2.3% (  -7% -   2%)
OrHighNotHigh  877.28  (2.2%)  858.10  (2.5%)  -2.2% (  -6% -   2%)
BrowseMonthSSDVFacets   14.85  (7.9%)   14.57 (10.7%)  -1.9% ( -18% -  18%)
MedTerm 1984.26  (3.6%) 1947.76  (2.3%)  -1.8% (  -7% -   4%)
AndHighLow  718.71  (1.5%)  706.06  (1.6%)  -1.8% (  -4% -   1%)
OrHighLow  523.40  (2.5%)  515.56  (2.4%)  -1.5% (  -6% -   3%)
HighTerm 1381.10  (2.9%) 1360.80  (2.7%)  -1.5% (  -6% -   4%)
HighTermMonthSort  120.45 (12.3%)  119.00 (16.4%)  -1.2% ( -26% -  31%)
BrowseDayOfYearSSDVFacets   11.55  (9.7%)   11.45 (10.0%)  -0.8% ( -18% -  20%)
AndHighMed  155.15  (2.6%)  154.25  (2.4%)  -0.6% (  -5% -   4%)
OrHighMed   88.00  (2.5%)   87.85  (2.7%)  -0.2% (  -5% -   5%)
LowPhrase   80.53  (1.6%)   80.40  (1.4%)  -0.2% (  -3% -   2%)
AndHighHigh   41.91  (4.2%)   41.86  (2.9%)  -0.1% (  -6% -   7%)
MedPhrase   46.29  (1.4%)   46.33  (1.5%)   0.1% (  -2% -   3%)
IntNRQ  127.54  (0.4%)  127.76  (0.4%)   0.2% (   0% -   1%)
HighTermDayOfYearSort   48.59  (5.1%)   48.71  (6.0%)   0.2% ( -10% -  12%)
LowSloppyPhrase   13.04  (4.0%)   13.08  (4.3%)   0.3% (  -7% -   8%)
MedSloppyPhrase   19.48  (2.3%)   19.54  (2.4%)   0.3% (  -4% -   5%)
OrHighHigh   23.60  (3.0%)   23.68  (2.9%)   0.3% (  -5% -   6%)
HighPhrase   20.25  (2.4%)   20.32  (1.8%)   0.3% (  -3% -   4%)
HighSloppyPhrase    9.29  (3.3%)    9.32  (3.2%)   0.4% (  -5% -   7%)
LowSpanNear   25.70  (3.8%)   25.89  (3.9%)   0.7% (  -6% -   8%)
MedSpanNear   30.46  (4.1%)   30.69  (4.3%)   0.7% (  -7% -   9%)
HighSpanNear   14.41  (4.3%)   14.60  (4.7%)   1.3% (  -7% -  10%)
Wildcard   70.08 (10.3%)   71.09  (6.1%)   1.4% ( -13% -  19%)
BrowseDateTaxoFacets    2.37  (0.2%)    2.41  (0.3%)   1.5% (   0% -   1%)
Prefix3   86.71 (11.4%)   89.04  (6.8%)   2.7% ( -13% -  23%)
{code}
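
The mechanism relied on here, letting the OS fault pages in on demand instead of loading bytes onto the heap, can be illustrated with plain JDK mmap. This is an analogy only, not Lucene's FST code:
{code:java}
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapLazyRead {
  // Map a file read-only and fetch a single byte at an offset. Pages are
  // faulted in by the OS only when touched, so a large file never has to
  // be copied onto the Java heap up front.
  static byte readAt(Path file, long offset) throws IOException {
    try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
      MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
      return buf.get(Math.toIntExact(offset));
    }
  }

  public static void main(String[] args) throws IOException {
    Path tmp = Files.createTempFile("fst-demo", ".bin");
    Files.write(tmp, new byte[] {10, 20, 30, 40});
    System.out.println(readAt(tmp, 2)); // prints 30
    Files.delete(tmp);
  }
}
{code}
First-touch page faults on a cold mapping are one plausible explanation for a small PKLookup regression in a run like the one above.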

> Lazy loading Lucene FST offheap using mmap
> --
>
> Key: LUCENE-8635
> URL: https://issues.apache.org/jira/browse/LUCENE-8635
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/FSTs
> Environment: I used below setup for es_rally tests:
> single node i3.xlarge running ES 6.5
> es_rally was running on another i3.xlarge instance
>Reporter: Ankit Jain
>Priority: Major
> Attachments: fst-offheap-ra-rev.patch, fst-offheap-rev.patch, 
> offheap.patch, optional_offheap_ra.patch, ra.patch, rally_benchmark.xlsx
>
>
> Currently, FST loads all the 

[JENKINS] Lucene-Solr-8.x-MacOSX (64bit/jdk-9) - Build # 33 - Unstable!

2019-02-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-MacOSX/33/
Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.impl.CloudSolrClientTest

Error Message:
ObjectTracker found 5 object(s) that were not released!!! 
[MockDirectoryWrapper, MockDirectoryWrapper, MockDirectoryWrapper, SolrCore, 
MockDirectoryWrapper] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 
org.apache.solr.core.SolrCore.initSnapshotMetaDataManager(SolrCore.java:516)  
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:967)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:882)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1189)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1099)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:395)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.lambda$handleRequestBody$0(CoreAdminHandler.java:188)
  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
  at java.base/java.lang.Thread.run(Thread.java:844)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:509)  
at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:351) 
 at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:424) 
 at 
org.apache.solr.handler.ReplicationHandler.lambda$setupPolling$13(ReplicationHandler.java:1193)
  at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)
  at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) 
 at 
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:300)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
  at java.base/java.lang.Thread.run(Thread.java:844)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:367)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:746)  at 
org.apache.solr.core.SolrCore.&lt;init&gt;(SolrCore.java:975)  at 
org.apache.solr.core.SolrCore.&lt;init&gt;(SolrCore.java:882)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1189)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1099)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:395)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.lambda$handleRequestBody$0(CoreAdminHandler.java:188)
  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
  at java.base/java.lang.Thread.run(Thread.java:844)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.core.SolrCore  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.core.SolrCore.&lt;init&gt;(SolrCore.java:1062)  at 
org.apache.solr.core.SolrCore.&lt;init&gt;(SolrCore.java:882)  at 

[GitHub] jainankitk opened a new pull request #563: Support for moving FST offheap except PK

2019-02-04 Thread GitBox
jainankitk opened a new pull request #563: Support for moving FST offheap 
except PK
URL: https://github.com/apache/lucene-solr/pull/563
 
 
   The change adds support for initializing the FST off-heap from memory-mapped 
files during index open. To avoid impacting PKLookup performance, this change 
initializes the FST off-heap only if docCount != sumDocFreq (implying the field 
is not a primary key) and the indexInput is an instance of ByteBufferIndexInput 
(implying MMapDirectory is being used). More details can be found in the issue below:
   
   https://issues.apache.org/jira/browse/LUCENE-8635
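A minimal sketch of the decision logic described above. The class and method names here are illustrative assumptions, not the actual code of the pull request; only the two conditions (the docCount/sumDocFreq primary-key heuristic and the memory-mapped-input check) come from the description.

```java
// Hypothetical sketch of the off-heap FST decision described in the PR text.
// The real change lives in Lucene's terms reader; these names are invented
// for illustration only.
public class OffHeapFstDecision {

    /**
     * Heuristic for "field is a primary key": in a PK field every document
     * contributes exactly one term occurrence, so docCount == sumDocFreq.
     */
    static boolean isLikelyPrimaryKey(long docCount, long sumDocFreq) {
        return docCount == sumDocFreq;
    }

    /**
     * Load the FST off-heap only when the field is not PK-like and the index
     * input is memory-mapped (modeled here by a simple boolean flag standing
     * in for the "instanceof ByteBufferIndexInput" check).
     */
    static boolean loadFstOffHeap(long docCount, long sumDocFreq, boolean mmapInput) {
        return !isLikelyPrimaryKey(docCount, sumDocFreq) && mmapInput;
    }

    public static void main(String[] args) {
        // PK-like field on MMapDirectory: stay on-heap to protect PKLookup.
        System.out.println(loadFstOffHeap(1000, 1000, true));
        // Non-PK field on MMapDirectory: eligible for off-heap loading.
        System.out.println(loadFstOffHeap(1000, 5000, true));
        // Non-PK field, but not memory-mapped: stay on-heap.
        System.out.println(loadFstOffHeap(1000, 5000, false));
    }
}
```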


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 448 - Still Unstable

2019-02-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/448/

2 tests failed.
FAILED:  org.apache.solr.cloud.LIROnShardRestartTest.testAllReplicasInLIR

Error Message:
Path must not end with / character

Stack Trace:
java.lang.IllegalArgumentException: Path must not end with / character
at 
__randomizedtesting.SeedInfo.seed([5D803D4699663918:7180780E7E65EFF]:0)
at org.apache.zookeeper.common.PathUtils.validatePath(PathUtils.java:58)
at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1523)
at 
org.apache.solr.common.cloud.SolrZkClient.lambda$getChildren$4(SolrZkClient.java:346)
at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:71)
at 
org.apache.solr.common.cloud.SolrZkClient.getChildren(SolrZkClient.java:346)
at 
org.apache.solr.cloud.LIROnShardRestartTest.testAllReplicasInLIR(LIROnShardRestartTest.java:168)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 

[jira] [Updated] (SOLR-13216) Trouble restoring a collection

2019-02-04 Thread Roy Perkins (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roy Perkins updated SOLR-13216:
---
Description: 
I'm having a weird issue when attempting to restore a collection from our prod 
cluster to our staging cluster.  The restore seems to be moving along normally, 
and then right at the end, the data gets dumped altogether.

Below is the command I use to restore:

{{curl -s 
"[http://localhost:8983/solr/admin/collections?action=RESTORE=slprod-02-04-2019=/mnt/solr_backups/slprod=slprod-02-04-2019=1=1=1000|http://localhost:8983/solr/admin/collections?action=RESTORE=slhv-02-04-2019=/mnt/solr_backups/slhv=slhv-02-04-2019=1=1=1000]"}}
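The `&` parameter separators in the curl command above appear to have been stripped in transit. As a hedged reconstruction, the documented Collections API RESTORE parameters (`action`, `name`, `location`, `collection`) compose into a query string as sketched below, using the slhv values from this report; the trailing `=1=1=1000` fragments are too mangled to attribute to specific parameters, so they are omitted.

```java
// Sketch of how a well-formed RESTORE request URL is assembled from the
// documented Collections API parameters. Values are taken from the report
// above; this is a reconstruction, not the reporter's exact command.
public class RestoreUrlSketch {
    public static void main(String[] args) {
        String base = "http://localhost:8983/solr/admin/collections";
        String query = String.join("&",
                "action=RESTORE",
                "name=slhv-02-04-2019",
                "location=/mnt/solr_backups/slhv",
                "collection=slhv-02-04-2019");
        System.out.println(base + "?" + query);
    }
}
```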

Below are relevant messages in the logs: 

{{2019-02-04 12:51:57.465 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is2_Lucene50_0.tip to restore directory}}
 {{2019-02-04 12:51:57.524 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3.fdt to restore directory}}
 {{2019-02-04 12:51:57.590 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3.fdx to restore directory}}
 {{2019-02-04 12:51:57.642 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3.fnm to restore directory}}
 {{2019-02-04 12:51:57.707 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3.si to restore directory}}
 {{2019-02-04 12:51:57.760 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3_Lucene50_0.doc to restore directory}}
 {{2019-02-04 12:51:57.812 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3_Lucene50_0.tim to restore directory}}
 {{2019-02-04 12:51:57.878 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3_Lucene50_0.tip to restore directory}}
 {{2019-02-04 12:51:57.936 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.fdt to restore directory}}
 {{2019-02-04 12:51:58.003 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.fdx to restore directory}}
 {{2019-02-04 12:51:58.057 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.fnm to restore directory}}
 {{2019-02-04 12:51:58.124 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.nvd to restore directory}}
 {{2019-02-04 12:51:58.191 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.nvm to restore directory}}
 {{2019-02-04 12:51:58.244 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.si to restore directory}}
 {{2019-02-04 12:51:58.298 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4_Lucene50_0.doc to restore directory}}
 {{2019-02-04 12:51:58.350 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4_Lucene50_0.pos to restore directory}}
 {{2019-02-04 12:51:58.402 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4_Lucene50_0.tim to restore directory}}
 {{2019-02-04 12:51:58.467 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4_Lucene50_0.tip to restore directory}}
 {{2019-02-04 12:51:58.520 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 

[jira] [Updated] (SOLR-13216) Trouble restoring a collection

2019-02-04 Thread Roy Perkins (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roy Perkins updated SOLR-13216:
---
Description: 
I'm having a weird issue when attempting to restore a collection from our prod 
cluster to our staging cluster.  The restore seems to be moving along normally, 
and then right at the end, the data gets dumped altogether.

Below is the command I use to restore:

{{curl -s 
"[http://localhost:8983/solr/admin/collections?action=RESTORE=slhv-02-04-2019=/mnt/solr_backups/slhv=slhv-02-04-2019=1=1=1000]"}}

Below are relevant messages in the logs: 

{{2019-02-04 12:51:57.465 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is2_Lucene50_0.tip to restore directory}}
 {{2019-02-04 12:51:57.524 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3.fdt to restore directory}}
 {{2019-02-04 12:51:57.590 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3.fdx to restore directory}}
 {{2019-02-04 12:51:57.642 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3.fnm to restore directory}}
 {{2019-02-04 12:51:57.707 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3.si to restore directory}}
 {{2019-02-04 12:51:57.760 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3_Lucene50_0.doc to restore directory}}
 {{2019-02-04 12:51:57.812 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3_Lucene50_0.tim to restore directory}}
 {{2019-02-04 12:51:57.878 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3_Lucene50_0.tip to restore directory}}
 {{2019-02-04 12:51:57.936 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.fdt to restore directory}}
 {{2019-02-04 12:51:58.003 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.fdx to restore directory}}
 {{2019-02-04 12:51:58.057 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.fnm to restore directory}}
 {{2019-02-04 12:51:58.124 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.nvd to restore directory}}
 {{2019-02-04 12:51:58.191 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.nvm to restore directory}}
 {{2019-02-04 12:51:58.244 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.si to restore directory}}
 {{2019-02-04 12:51:58.298 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4_Lucene50_0.doc to restore directory}}
 {{2019-02-04 12:51:58.350 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4_Lucene50_0.pos to restore directory}}
 {{2019-02-04 12:51:58.402 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4_Lucene50_0.tim to restore directory}}
 {{2019-02-04 12:51:58.467 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4_Lucene50_0.tip to restore directory}}
 {{2019-02-04 12:51:58.520 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
segments_3x02 to restore directory}}
 {{2019-02-04 12:51:58.573 INFO 

[jira] [Updated] (SOLR-13216) Trouble restoring a collection

2019-02-04 Thread Roy Perkins (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roy Perkins updated SOLR-13216:
---
Description: 
I'm having a weird issue when attempting to restore a collection from our prod 
cluster to our staging cluster.  The restore seems to be moving along normally, 
and then right at the end, the data gets dumped altogether.

Below is the command I use to restore:

{{http://localhost:8983/solr/admin/collections?action=RESTORE=slhv-02-04-2019=/mnt/solr_backups/slhv=slhv-02-04-2019=1=1=1000}}

Below are relevant messages in the logs: 

{{2019-02-04 12:51:57.465 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is2_Lucene50_0.tip to restore directory}}
 {{2019-02-04 12:51:57.524 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3.fdt to restore directory}}
 {{2019-02-04 12:51:57.590 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3.fdx to restore directory}}
 {{2019-02-04 12:51:57.642 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3.fnm to restore directory}}
 {{2019-02-04 12:51:57.707 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3.si to restore directory}}
 {{2019-02-04 12:51:57.760 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3_Lucene50_0.doc to restore directory}}
 {{2019-02-04 12:51:57.812 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3_Lucene50_0.tim to restore directory}}
 {{2019-02-04 12:51:57.878 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3_Lucene50_0.tip to restore directory}}
 {{2019-02-04 12:51:57.936 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.fdt to restore directory}}
 {{2019-02-04 12:51:58.003 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.fdx to restore directory}}
 {{2019-02-04 12:51:58.057 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.fnm to restore directory}}
 {{2019-02-04 12:51:58.124 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.nvd to restore directory}}
 {{2019-02-04 12:51:58.191 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.nvm to restore directory}}
 {{2019-02-04 12:51:58.244 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.si to restore directory}}
 {{2019-02-04 12:51:58.298 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4_Lucene50_0.doc to restore directory}}
 {{2019-02-04 12:51:58.350 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4_Lucene50_0.pos to restore directory}}
 {{2019-02-04 12:51:58.402 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4_Lucene50_0.tim to restore directory}}
 {{2019-02-04 12:51:58.467 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4_Lucene50_0.tip to restore directory}}
 {{2019-02-04 12:51:58.520 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
segments_3x02 to restore directory}}
 {{2019-02-04 12:51:58.573 INFO 

[jira] [Updated] (SOLR-13216) Trouble restoring a collection

2019-02-04 Thread Roy Perkins (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roy Perkins updated SOLR-13216:
---
Description: 
I'm having a weird issue when attempting to restore a collection from our prod 
cluster to our staging cluster.  The restore seems to be moving along normally, 
and then right at the end, the data gets dumped altogether.  Below are relevant 
messages in the logs:

 

{{2019-02-04 12:51:57.465 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is2_Lucene50_0.tip to restore directory}}
 {{2019-02-04 12:51:57.524 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3.fdt to restore directory}}
 {{2019-02-04 12:51:57.590 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3.fdx to restore directory}}
 {{2019-02-04 12:51:57.642 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3.fnm to restore directory}}
 {{2019-02-04 12:51:57.707 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3.si to restore directory}}
 {{2019-02-04 12:51:57.760 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3_Lucene50_0.doc to restore directory}}
 {{2019-02-04 12:51:57.812 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3_Lucene50_0.tim to restore directory}}
 {{2019-02-04 12:51:57.878 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3_Lucene50_0.tip to restore directory}}
 {{2019-02-04 12:51:57.936 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.fdt to restore directory}}
 {{2019-02-04 12:51:58.003 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.fdx to restore directory}}
 {{2019-02-04 12:51:58.057 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.fnm to restore directory}}
 {{2019-02-04 12:51:58.124 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.nvd to restore directory}}
 {{2019-02-04 12:51:58.191 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.nvm to restore directory}}
 {{2019-02-04 12:51:58.244 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.si to restore directory}}
 {{2019-02-04 12:51:58.298 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4_Lucene50_0.doc to restore directory}}
 {{2019-02-04 12:51:58.350 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4_Lucene50_0.pos to restore directory}}
 {{2019-02-04 12:51:58.402 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4_Lucene50_0.tim to restore directory}}
 {{2019-02-04 12:51:58.467 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4_Lucene50_0.tip to restore directory}}
 {{2019-02-04 12:51:58.520 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
segments_3x02 to restore directory}}
 {{2019-02-04 12:51:58.573 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.c.SolrCore Updating index 
properties... 

[jira] [Updated] (SOLR-13216) Trouble restoring a collection

2019-02-04 Thread Roy Perkins (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roy Perkins updated SOLR-13216:
---
Description: 
I'm having a weird issue when attempting to restore a collection from our prod 
cluster to our staging cluster.  The restore seems to be moving along normally, 
and then right at the end, the data gets dumped altogether.  Below are relevant 
messages in the logs:

 

{{2019-02-04 12:51:57.465 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is2_Lucene50_0.tip to restore directory}}
 {{2019-02-04 12:51:57.524 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3.fdt to restore directory}}
 {{2019-02-04 12:51:57.590 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3.fdx to restore directory}}
 {{2019-02-04 12:51:57.642 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3.fnm to restore directory}}
 {{2019-02-04 12:51:57.707 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3.si to restore directory}}
 {{2019-02-04 12:51:57.760 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3_Lucene50_0.doc to restore directory}}
 {{2019-02-04 12:51:57.812 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3_Lucene50_0.tim to restore directory}}
 {{2019-02-04 12:51:57.878 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3_Lucene50_0.tip to restore directory}}
 {{2019-02-04 12:51:57.936 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.fdt to restore directory}}
 {{2019-02-04 12:51:58.003 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.fdx to restore directory}}
 {{2019-02-04 12:51:58.057 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.fnm to restore directory}}
 {{2019-02-04 12:51:58.124 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.nvd to restore directory}}
 {{2019-02-04 12:51:58.191 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.nvm to restore directory}}
 {{2019-02-04 12:51:58.244 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.si to restore directory}}
 {{2019-02-04 12:51:58.298 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4_Lucene50_0.doc to restore directory}}
 {{2019-02-04 12:51:58.350 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4_Lucene50_0.pos to restore directory}}
 {{2019-02-04 12:51:58.402 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4_Lucene50_0.tim to restore directory}}
 {{2019-02-04 12:51:58.467 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4_Lucene50_0.tip to restore directory}}
 {{2019-02-04 12:51:58.520 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
segments_3x02 to restore directory}}
 {{2019-02-04 12:51:58.573 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.domain:8983_solr 
100019616170409755642 RESTORECORE) [ ] o.a.s.c.SolrCore Updating index 
properties... 

[jira] [Commented] (SOLR-13210) TriLevelCompositeIdRoutingTest makes no sense -- can never fail

2019-02-04 Thread Shalin Shekhar Mangar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760070#comment-16760070
 ] 

Shalin Shekhar Mangar commented on SOLR-13210:
--

I myself don't know because I committed it years ago (and someone else wrote 
the original test). But looking at the logic of extracting the key (throwing 
away everything after the last `!`), I think the idMap is supposed to have 
`app!user`. The test is making wrong assumptions about the shard distribution.

I think what we need to test here is that:
# Given a fixed number of apps and users, all data for the same {{app!user}} 
prefix go to the same shard in the absence of masks
# Choose a mask such that the {{app}} spans multiple shards but verify that any 
given {{app!user}} prefix goes to a single/same shard only
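A standalone sketch of the key-trimming behaviour at the heart of this bug (hypothetical helper and class names, not the actual test code): trimming everything after the last "!" removes one routing level per call, so calling it on its own output yields a shorter prefix than the one asserted on.

```java
// Hypothetical sketch: trim everything after the LAST '!' of a composite id.
// Applying it twice drops two routing levels, which is the mismatch described
// in this issue (the assert checks "app42/7!user33", the map stores "app42/7").
public class CompositeIdPrefixDemo {
    static String getKey(String id) {
        int idx = id.lastIndexOf('!');
        return idx >= 0 ? id.substring(0, idx) : id;
    }

    public static void main(String[] args) {
        String id = "app42/7!user33!doc1234";
        String once = getKey(id);    // "app42/7!user33"
        String twice = getKey(once); // "app42/7"
        System.out.println(once);
        System.out.println(twice);
    }
}
```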

> TriLevelCompositeIdRoutingTest makes no sense -- can never fail
> ---
>
> Key: SOLR-13210
> URL: https://issues.apache.org/jira/browse/SOLR-13210
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-13210_demonstrate_broken_test.patch
>
>
> I recently tweaked TriLevelCompositeIdRoutingTest to lower the 
> node/shard count on TEST_NIGHTLY because it was constantly causing an OOM.
> While skimming this test I realized that (other than the OOM, or other 
> catastrophic failure in Solr) it was guaranteed to never fail, regardless of 
> what bugs might exist in Solr when routing an update/query:
> * it doesn't sanity check that any docs are returned from any query -- so if 
> commit does nothing and it gets no results from each of the shard queries, it 
> will still pass
> * the {{getKey()}} method -- which throws away anything after the last "!" in 
> a String -- is called redundantly on its own output to populate an {{idMap}} 
> ... but not before the first result is used to do a containsKey assertion on 
> that same {{idMap}}
> ** ie: if {{app42/7!user33!doc1234}} is a uniqueKey value, then 
> {{app42/7!user33}} is what the assert !containsKey checks the Map for, but  
> {{app42/7}} is what gets put in the Map



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13216) Trouble restoring a collection

2019-02-04 Thread Roy Perkins (JIRA)
Roy Perkins created SOLR-13216:
--

 Summary: Trouble restoring a collection
 Key: SOLR-13216
 URL: https://issues.apache.org/jira/browse/SOLR-13216
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Backup/Restore
Affects Versions: 6.6.5
Reporter: Roy Perkins


I'm having a weird issue when attempting to restore a collection from our prod 
cluster to our staging cluster.  The restore seems to be moving along normally, 
and then right at the end, the data gets dumped altogether.  Below are relevant 
messages in the logs:

 

{{2019-02-04 12:51:57.465 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.dc3.homes.com:8983_solr
 100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is2_Lucene50_0.tip to restore directory}}
{{2019-02-04 12:51:57.524 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.dc3.homes.com:8983_solr
 100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3.fdt to restore directory}}
{{2019-02-04 12:51:57.590 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.dc3.homes.com:8983_solr
 100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3.fdx to restore directory}}
{{2019-02-04 12:51:57.642 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.dc3.homes.com:8983_solr
 100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3.fnm to restore directory}}
{{2019-02-04 12:51:57.707 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.dc3.homes.com:8983_solr
 100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3.si to restore directory}}
{{2019-02-04 12:51:57.760 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.dc3.homes.com:8983_solr
 100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3_Lucene50_0.doc to restore directory}}
{{2019-02-04 12:51:57.812 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.dc3.homes.com:8983_solr
 100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3_Lucene50_0.tim to restore directory}}
{{2019-02-04 12:51:57.878 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.dc3.homes.com:8983_solr
 100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is3_Lucene50_0.tip to restore directory}}
{{2019-02-04 12:51:57.936 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.dc3.homes.com:8983_solr
 100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.fdt to restore directory}}
{{2019-02-04 12:51:58.003 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.dc3.homes.com:8983_solr
 100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.fdx to restore directory}}
{{2019-02-04 12:51:58.057 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.dc3.homes.com:8983_solr
 100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.fnm to restore directory}}
{{2019-02-04 12:51:58.124 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.dc3.homes.com:8983_solr
 100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.nvd to restore directory}}
{{2019-02-04 12:51:58.191 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.dc3.homes.com:8983_solr
 100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.nvm to restore directory}}
{{2019-02-04 12:51:58.244 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.dc3.homes.com:8983_solr
 100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4.si to restore directory}}
{{2019-02-04 12:51:58.298 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.dc3.homes.com:8983_solr
 100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4_Lucene50_0.doc to restore directory}}
{{2019-02-04 12:51:58.350 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.dc3.homes.com:8983_solr
 100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4_Lucene50_0.pos to restore directory}}
{{2019-02-04 12:51:58.402 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.dc3.homes.com:8983_solr
 100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4_Lucene50_0.tim to restore directory}}
{{2019-02-04 12:51:58.467 INFO 
(parallelCoreAdminExecutor-5-thread-2-processing-n:solrmcstg11.dc3.homes.com:8983_solr
 100019616170409755642 RESTORECORE) [ ] o.a.s.h.RestoreCore Copying file 
_406is4_Lucene50_0.tip to restore directory}}
{{2019-02-04 12:51:58.520 INFO 

[jira] [Commented] (SOLR-5007) TestRecoveryHdfs seems to be leaking a thread occasionally that ends up failing a completely different test.

2019-02-04 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760043#comment-16760043
 ] 

Kevin Risden commented on SOLR-5007:


[~hossman] - sorry, you are right. I didn't see the commits get moved around. I 
knew BadHdfsThreadsFilter existed but didn't grep for SOLR-5007. I can take a 
look and see if this is actually resolved with SOLR-5007 "fixes/workarounds" 
removed.

> TestRecoveryHdfs seems to be leaking a thread occasionally that ends up 
> failing a completely different test.
> 
>
> Key: SOLR-5007
> URL: https://issues.apache.org/jira/browse/SOLR-5007
> Project: Solr
>  Issue Type: Test
>  Components: Hadoop Integration, hdfs, Tests
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Major
>







[jira] [Comment Edited] (SOLR-5007) TestRecoveryHdfs seems to be leaking a thread occasionally that ends up failing a completely different test.

2019-02-04 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760022#comment-16760022
 ] 

Hoss Man edited comment on SOLR-5007 at 2/4/19 5:07 PM:


[~krisden] - the way to check if this is still an issue would be to revert -the 
previous  "workaround"- SOLR-7289 commit that causes these leaked threads to be 
ignored in HDFS related tests if they still exist.

See {{public class BadHdfsThreadsFilter implements ThreadFilter}} and the logic 
it contains, along with the various comments and linked issues.

We shouldn't be resolving this issue unless/until we can confidently remove the 
logic that "ignores" those leaked threads.

*EDIT* Just realized that the commit mentioned in the comments above wasn't the 
commit that added HdfsThreadLeakTest, so it's not obvious it was a workaround 
for this issue unless you grep the code for "SOLR-5007".


was (Author: hossman):
[~krisden] - the way to check if this is still an issue would be to revert the 
previous  "workaround" commit that causes these leaked threads to be ignored in 
HDFS related tests if they still exist.

See {{public class BadHdfsThreadsFilter implements ThreadFilter}} and the logic 
it contains, along with the various comments and linked issues.

We shouldn't be resolving this issue unless/until we can confidently remove the 
logic that "ignores" those leaked threads.

> TestRecoveryHdfs seems to be leaking a thread occasionally that ends up 
> failing a completely different test.
> 
>
> Key: SOLR-5007
> URL: https://issues.apache.org/jira/browse/SOLR-5007
> Project: Solr
>  Issue Type: Test
>  Components: Hadoop Integration, hdfs, Tests
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Major
>







[jira] [Reopened] (SOLR-5007) TestRecoveryHdfs seems to be leaking a thread occasionally that ends up failing a completely different test.

2019-02-04 Thread Hoss Man (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reopened SOLR-5007:

  Assignee: Kevin Risden  (was: Mark Miller)

[~krisden] - the way to check if this is still an issue would be to revert the 
previous  "workaround" commit that causes these leaked threads to be ignored in 
HDFS related tests if they still exist.

See {{public class BadHdfsThreadsFilter implements ThreadFilter}} and the logic 
it contains, along with the various comments and linked issues.

We shouldn't be resolving this issue unless/until we can confidently remove the 
logic that "ignores" those leaked threads.

> TestRecoveryHdfs seems to be leaking a thread occasionally that ends up 
> failing a completely different test.
> 
>
> Key: SOLR-5007
> URL: https://issues.apache.org/jira/browse/SOLR-5007
> Project: Solr
>  Issue Type: Test
>  Components: Hadoop Integration, hdfs, Tests
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Major
>







[jira] [Commented] (SOLR-12276) Admin UI - Convert from "AngularJS" to "Angular"

2019-02-04 Thread James Dyer (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760014#comment-16760014
 ] 

James Dyer commented on SOLR-12276:
---

[~jdbranham] I am not working on this.  At a quick glance, your project looks 
really nice.

> Admin UI - Convert from "AngularJS" to "Angular"
> 
>
> Key: SOLR-12276
> URL: https://issues.apache.org/jira/browse/SOLR-12276
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: James Dyer
>Priority: Minor
>  Labels: Angular, AngularJS, angular-migration
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> With SOLR-12196 it was noted the current Solr Admin UI runs AngularJS (1.x), 
> which is to be End-of-Life later this year.  Various options were proposed 
> for what to do next.  One option is to keep the existing functionality but 
> migrate to a newer UI framework.  This ticket is to migrate the existing UI 
> to Angular (2+).
> See [this readme 
> file|https://github.com/jdyer1/lucene-solr/tree/feature/angular-conversion-solr-admin/solr/webapp].






[jira] [Commented] (SOLR-13214) non ok status: 414, message:Request-URI Too Long

2019-02-04 Thread Munendra S N (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760009#comment-16760009
 ] 

Munendra S N commented on SOLR-13214:
-

[~tushar.choudhary]

Reason for the 414 status code: 
https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/414
*QueryRequest* supports passing the HTTP method to be used. By default, it is 
*GET*. So, the code would look something like this:
{code:java}
new QueryRequest(solrParams, SolrRequest.METHOD.POST);
{code}

Also, I think this is more like a question than an actual issue (*I may be 
wrong*). If that is the case, it is better to ask in the [Solr-User list 
|http://lucene.apache.org/solr/community.html#solr-user-list-solr-userluceneapacheorg].
 You would get a much quicker reply there too.
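A stdlib-only illustration of why the GET default triggers 414: all parameters end up in the request URI, so a long term list quickly overflows it. The 8 KB budget below is an assumption about typical servlet-container defaults, and the query shape is made up; moving the parameters into the request body with METHOD.POST sidesteps the limit entirely.

```java
// Illustrative only: shows how a GET query string grows with the number of
// terms, far past a ~8 KB URI budget (assumed typical container default).
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class UriLengthDemo {
    static String buildQueryString(int terms) {
        StringBuilder q = new StringBuilder("id:(");
        for (int i = 0; i < terms; i++) {
            if (i > 0) q.append(" OR ");
            q.append(i);
        }
        q.append(")");
        // this encoded string would be appended to the URL on a GET request
        return "q=" + URLEncoder.encode(q.toString(), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        int len = buildQueryString(2000).length();
        System.out.println(len);          // far above an 8 KB URI budget
        System.out.println(len > 8192);   // prints true
    }
}
```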

> non ok status: 414, message:Request-URI Too Long
> 
>
> Key: SOLR-13214
> URL: https://issues.apache.org/jira/browse/SOLR-13214
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Reporter: Tushar Choudhary
>Priority: Blocker
>  Labels: windows
>
> Getting error from solr 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at "URL" :non ok status: 414, message:Request-URI Too Long
> Can anyone let me know why i am getting this exception and possible solution 
> to overcome this problem.we are using solrCloud and zookeeper.
>  






[jira] [Comment Edited] (SOLR-13214) non ok status: 414, message:Request-URI Too Long

2019-02-04 Thread Munendra S N (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760009#comment-16760009
 ] 

Munendra S N edited comment on SOLR-13214 at 2/4/19 4:41 PM:
-

[~tushar.choudhary]

Reason for the 414 status code: 
[https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/414]
 *QueryRequest* supports passing the HTTP method to be used. By default, it is 
*GET*. So, the code would look something like this:
{code:java}
new QueryRequest(solrParams, SolrRequest.METHOD.POST);
{code}
Also, I think this is more like a question than an actual issue (*I may be 
wrong*). If that is the case, it is better to ask in the [Solr-User list 
|http://lucene.apache.org/solr/community.html#solr-user-list-solr-userluceneapacheorg].
 You would get a much quicker reply there too.


was (Author: munendrasn):
[~tushar.choudhary]

Reason for 414 status Code - 
https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/414
*QueryRequest* supports passing HTTP method to be used. By default, it is *GET*
So, code would look something like this
{code:java}
new QueryRequest(solrParams, SolrRequest.METHOD.POST);
{code}

Also, I think this is more like question than an actual issue (*I may be 
wrong*). If that is the case it is better to case ask in [Solr-User list 
|http://lucene.apache.org/solr/community.html#solr-user-list-solr-userluceneapacheorg].
 You would get much quicker reply too.

> non ok status: 414, message:Request-URI Too Long
> 
>
> Key: SOLR-13214
> URL: https://issues.apache.org/jira/browse/SOLR-13214
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Reporter: Tushar Choudhary
>Priority: Blocker
>  Labels: windows
>
> Getting error from solr 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at "URL" :non ok status: 414, message:Request-URI Too Long
> Can anyone let me know why i am getting this exception and possible solution 
> to overcome this problem.we are using solrCloud and zookeeper.
>  






[jira] [Updated] (SOLR-12330) Referencing non existing parameter in JSON Facet "filter" (and/or other NPEs) either reported too little and even might be ignored

2019-02-04 Thread Munendra S N (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-12330:

Attachment: SOLR-12330.patch

> Referencing non existing parameter in JSON Facet "filter" (and/or other NPEs) 
> either reported too little and even might be ignored 
> ---
>
> Key: SOLR-12330
> URL: https://issues.apache.org/jira/browse/SOLR-12330
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 7.3
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-12330-combined.patch, SOLR-12330.patch, 
> SOLR-12330.patch, SOLR-12330.patch, SOLR-12330.patch, SOLR-12330.patch, 
> SOLR-12330.patch
>
>
> Just encountered such weird behaviour; will recheck and follow up. 
>  {{"filter":["\\{!v=$bogus}"]}} responds back with just an NPE, which makes 
> it impossible to guess the reason.
> -It might be even worse, since- {{"filter":[\\{"param":"bogus"}]}} seems to 
> be just silently ignored. Turns out it's ok, see SOLR-9682






[jira] [Commented] (SOLR-12330) Referencing non existing parameter in JSON Facet "filter" (and/or other NPEs) either reported too little and even might be ignored

2019-02-04 Thread Munendra S N (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16759948#comment-16759948
 ] 

Munendra S N commented on SOLR-12330:
-

 [^SOLR-12330.patch] 
[~mkhludnev]
I have made the changes. I also added the error location wherever possible.

> Referencing non existing parameter in JSON Facet "filter" (and/or other NPEs) 
> either reported too little and even might be ignored 
> ---
>
> Key: SOLR-12330
> URL: https://issues.apache.org/jira/browse/SOLR-12330
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 7.3
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-12330-combined.patch, SOLR-12330.patch, 
> SOLR-12330.patch, SOLR-12330.patch, SOLR-12330.patch, SOLR-12330.patch, 
> SOLR-12330.patch
>
>
> Just encountered such weird behaviour; will recheck and follow up. 
>  {{"filter":["\\{!v=$bogus}"]}} responds back with just an NPE, which makes 
> it impossible to guess the reason.
> -It might be even worse, since- {{"filter":[\\{"param":"bogus"}]}} seems to 
> be just silently ignored. Turns out it's ok, see SOLR-9682






[jira] [Commented] (SOLR-11763) Upgrade Guava to 25.1-jre

2019-02-04 Thread Markus Jelsma (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16759947#comment-16759947
 ] 

Markus Jelsma commented on SOLR-11763:
--

The patch applies cleanly to all files on master and 7.6 except 
lucene/ivy-versions.properties. I tried master and 7.6.

> Upgrade Guava to 25.1-jre
> -
>
> Key: SOLR-11763
> URL: https://issues.apache.org/jira/browse/SOLR-11763
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Markus Jelsma
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.x, master (9.0)
>
> Attachments: SOLR-11763.patch, SOLR-11763.patch, SOLR-11763.patch, 
> SOLR-11763.patch, SOLR-11763.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Our code is running into version conflicts with Solr's old Guava dependency. 
> This fixes it.






[jira] [Commented] (SOLR-11763) Upgrade Guava to 25.1-jre

2019-02-04 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16759938#comment-16759938
 ] 

Kevin Risden commented on SOLR-11763:
-

[~markus17] thanks for the quick check. I'll have to take a look. Not sure why 
the patch would fail to apply. I don't think this will apply to the 7.x lines; I 
was looking at 8.x and master.

> Upgrade Guava to 25.1-jre
> -
>
> Key: SOLR-11763
> URL: https://issues.apache.org/jira/browse/SOLR-11763
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Markus Jelsma
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.x, master (9.0)
>
> Attachments: SOLR-11763.patch, SOLR-11763.patch, SOLR-11763.patch, 
> SOLR-11763.patch, SOLR-11763.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Our code is running into version conflicts with Solr's old Guava dependency. 
> This fixes it.






[jira] [Commented] (SOLR-11763) Upgrade Guava to 25.1-jre

2019-02-04 Thread Markus Jelsma (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16759934#comment-16759934
 ] 

Markus Jelsma commented on SOLR-11763:
--

Hello [~krisden], the patch fails for 7.6 and master:

checking file lucene/ivy-versions.properties
Hunk #1 FAILED at 24.
Hunk #2 succeeded at 33 (offset 1 line).
Hunk #4 succeeded at 115 with fuzz 2 (offset -2 lines).
Hunk #5 succeeded at 238 with fuzz 2 (offset 15 lines).
1 out of 5 hunks FAILED


> Upgrade Guava to 25.1-jre
> -
>
> Key: SOLR-11763
> URL: https://issues.apache.org/jira/browse/SOLR-11763
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Markus Jelsma
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.x, master (9.0)
>
> Attachments: SOLR-11763.patch, SOLR-11763.patch, SOLR-11763.patch, 
> SOLR-11763.patch, SOLR-11763.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Our code is running into version conflicts with Solr's old Guava dependency. 
> This fixes it.






[jira] [Commented] (SOLR-11763) Upgrade Guava to 25.1-jre

2019-02-04 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16759928#comment-16759928
 ] 

Kevin Risden commented on SOLR-11763:
-

[~markus17] [~elyograg] [~varunthacker] [~thetaphi] - Any thoughts on the 
latest patch? 

> Upgrade Guava to 25.1-jre
> -
>
> Key: SOLR-11763
> URL: https://issues.apache.org/jira/browse/SOLR-11763
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Markus Jelsma
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.x, master (9.0)
>
> Attachments: SOLR-11763.patch, SOLR-11763.patch, SOLR-11763.patch, 
> SOLR-11763.patch, SOLR-11763.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Our code is running into version conflicts with Solr's old Guava dependency. 
> This fixes it.






[GitHub] iverase commented on a change in pull request #556: LUCENE-8673: Use radix partitioning when merging dimensional points

2019-02-04 Thread GitBox
iverase commented on a change in pull request #556: LUCENE-8673: Use radix 
partitioning when merging dimensional points
URL: https://github.com/apache/lucene-solr/pull/556#discussion_r253495525
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/util/bkd/BKDRadixSelector.java
 ##
 @@ -0,0 +1,433 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.util.bkd;
+
+import java.io.IOException;
+import java.util.Arrays;
+
+import org.apache.lucene.store.Directory;
+import org.apache.lucene.util.BytesRef;
+import org.apache.lucene.util.FutureArrays;
+import org.apache.lucene.util.IntroSelector;
+
+/**
+ *
+ * Offline Radix selector for BKD tree.
+ *
+ *  @lucene.internal
+ * */
+public final class BKDRadixSelector {
+  //size of the histogram
+  private static final int HISTOGRAM_SIZE = 256;
+  // we store one histogram per recursion level
+  private final long[][] histogram;
+  //bytes we are sorting
+  private final int bytesPerDim;
+  // number of bytes to be sorted: bytesPerDim + Integer.BYTES
+  private final int bytesSorted;
+  //data dimensions size
+  private final int packedBytesLength;
+  //flag to when we are moving to sort on heap
+  private final int maxPointsSortedOffHeap;
+  //reusable buffer
+  private final byte[] offlineBuffer;
+  //holder for partition points
+  private final int[] partitionBucket;
+  //holder for partition bytes
+  private final byte[] partitionBytes;
+  //re-usable on-heap selector
+  private final HeapSelector heapSelector;
+  // scratch object to move bytes around
+  private final BytesRef bytesRef1 = new BytesRef();
+  // scratch object to move bytes around
+  private final BytesRef bytesRef2 = new BytesRef();
+  //Directory to create new Offline writer
+  private final Directory tempDir;
+  // prefix for temp files
+  private final String tempFileNamePrefix;
+
+
+
+  /**
+   * Sole constructor.
+   */
+  public BKDRadixSelector(int numDim, int bytesPerDim, int 
maxPointsSortedOffHeap, Directory tempDir, String tempFileNamePrefix) {
+this.bytesPerDim = bytesPerDim;
+this.packedBytesLength = numDim * bytesPerDim;
+this.bytesSorted = bytesPerDim + Integer.BYTES;
+this.maxPointsSortedOffHeap = maxPointsSortedOffHeap;
+this.offlineBuffer = new byte[maxPointsSortedOffHeap * (packedBytesLength 
+ Integer.BYTES)];
+this.partitionBucket = new int[bytesSorted];
+this.partitionBytes =  new byte[bytesSorted];
+this.histogram = new long[bytesSorted][HISTOGRAM_SIZE];
+this.bytesRef1.length = numDim * bytesPerDim;
+this.heapSelector = new HeapSelector(numDim, bytesPerDim);
+this.tempDir = tempDir;
+this.tempFileNamePrefix = tempFileNamePrefix;
+  }
+
+  /**
+   * Method to partition the input data. It returns the value of the dimension where
+   * the split happens.
+   */
+  public byte[] select(PointWriter points, PointWriter left, PointWriter right, long from, long to, long partitionPoint, int dim) throws IOException {
+    checkArgs(from, to, partitionPoint);
+
+    //If we are on heap then we just select on heap
+    if (points instanceof HeapPointWriter) {
+      return heapSelect((HeapPointWriter) points, left, right, dim, Math.toIntExact(from), Math.toIntExact(to), Math.toIntExact(partitionPoint), 0);
+    }
+
+    //reset histogram
+    for (int i = 0; i < bytesSorted; i++) {
+      Arrays.fill(histogram[i], 0);
+    }
+    OfflinePointWriter offlinePointWriter = (OfflinePointWriter) points;
+
+    //find common prefix; it already sets histogram values if needed
+    int commonPrefix = findCommonPrefix(offlinePointWriter, from, to, dim);
+
+    //if all bytes are equal we just partition the data
+    if (commonPrefix == bytesSorted) {
+      return partition(offlinePointWriter, left, right, from, to, partitionPoint, dim, null, commonPrefix - 1, partitionPoint);
+    }
+    //let's rock'n'roll
+    return buildHistogramAndPartition(offlinePointWriter, null, left, right, from, to, partitionPoint, 0, commonPrefix, dim, 0, 0);
+  }
+
+  void checkArgs(long from, long to, long middle) {
+    if (middle < from) {
+      throw new IllegalArgumentException("middle must be >= from");
+    }
+    if (middle >= to) {
+      throw new 

[jira] [Commented] (SOLR-13075) Harden SaslZkACLProviderTest.

2019-02-04 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16759865#comment-16759865
 ] 

Kevin Risden commented on SOLR-13075:
-

SOLR-7183 and SOLR-8544 are two related JIRAs about SaslZkACLProviderTest 
failing.

> Harden SaslZkACLProviderTest.
> -
>
> Key: SOLR-13075
> URL: https://issues.apache.org/jira/browse/SOLR-13075
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Erick Erickson
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-8.x-Linux (64bit/jdk-11) - Build # 130 - Unstable!

2019-02-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/130/
Java: 64bit/jdk-11 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggesterTest.testRandomNRT

Error Message:
Captured an uncaught exception in thread: Thread[id=84, name=Thread-66, 
state=RUNNABLE, group=TGRP-AnalyzingInfixSuggesterTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=84, name=Thread-66, state=RUNNABLE, 
group=TGRP-AnalyzingInfixSuggesterTest]
at 
__randomizedtesting.SeedInfo.seed([1980F4ABA01927BE:BDAEFA16F8C6FB02]:0)
Caused by: java.lang.AssertionError
at __randomizedtesting.SeedInfo.seed([1980F4ABA01927BE]:0)
at 
java.base/java.util.HashMap$TreeNode.moveRootToFront(HashMap.java:1896)
at java.base/java.util.HashMap$TreeNode.putTreeVal(HashMap.java:2061)
at java.base/java.util.HashMap.putVal(HashMap.java:633)
at java.base/java.util.HashMap.putIfAbsent(HashMap.java:1057)
at 
org.apache.lucene.search.LRUQueryCache.putIfAbsent(LRUQueryCache.java:302)
at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:832)
at 
org.apache.lucene.search.BooleanWeight.optionalBulkScorer(BooleanWeight.java:197)
at 
org.apache.lucene.search.BooleanWeight.booleanScorer(BooleanWeight.java:254)
at 
org.apache.lucene.search.BooleanWeight.bulkScorer(BooleanWeight.java:328)
at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:834)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:649)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:443)
at 
org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester.lookup(AnalyzingInfixSuggester.java:660)
at 
org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester.lookup(AnalyzingInfixSuggester.java:468)
at 
org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggesterTest$LookupThread.run(AnalyzingInfixSuggesterTest.java:533)




Build Log:
[...truncated 10871 lines...]
   [junit4] Suite: 
org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggesterTest
   [junit4]   2> ?? ??,  ?:??:?? ? 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
   [junit4]   2> WARNING: Uncaught exception in thread: 
Thread[Thread-66,5,TGRP-AnalyzingInfixSuggesterTest]
   [junit4]   2> java.lang.AssertionError
   [junit4]   2>at 
__randomizedtesting.SeedInfo.seed([1980F4ABA01927BE]:0)
   [junit4]   2>at 
java.base/java.util.HashMap$TreeNode.moveRootToFront(HashMap.java:1896)
   [junit4]   2>at 
java.base/java.util.HashMap$TreeNode.putTreeVal(HashMap.java:2061)
   [junit4]   2>at java.base/java.util.HashMap.putVal(HashMap.java:633)
   [junit4]   2>at 
java.base/java.util.HashMap.putIfAbsent(HashMap.java:1057)
   [junit4]   2>at 
org.apache.lucene.search.LRUQueryCache.putIfAbsent(LRUQueryCache.java:302)
   [junit4]   2>at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:832)
   [junit4]   2>at 
org.apache.lucene.search.BooleanWeight.optionalBulkScorer(BooleanWeight.java:197)
   [junit4]   2>at 
org.apache.lucene.search.BooleanWeight.booleanScorer(BooleanWeight.java:254)
   [junit4]   2>at 
org.apache.lucene.search.BooleanWeight.bulkScorer(BooleanWeight.java:328)
   [junit4]   2>at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:834)
   [junit4]   2>at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:649)
   [junit4]   2>at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:443)
   [junit4]   2>at 
org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester.lookup(AnalyzingInfixSuggester.java:660)
   [junit4]   2>at 
org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester.lookup(AnalyzingInfixSuggester.java:468)
   [junit4]   2>at 
org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggesterTest$LookupThread.run(AnalyzingInfixSuggesterTest.java:533)
   [junit4]   2> 
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=AnalyzingInfixSuggesterTest -Dtests.method=testRandomNRT 
-Dtests.seed=1980F4ABA01927BE -Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.locale=ar-OM -Dtests.timezone=Canada/Eastern -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
   [junit4] ERROR   27.6s J0 | AnalyzingInfixSuggesterTest.testRandomNRT <<<
   [junit4]> Throwable #1: 
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=84, name=Thread-66, state=RUNNABLE, 
group=TGRP-AnalyzingInfixSuggesterTest]
   [junit4]>at 

Re: [GitHub] msokolov opened a new pull request #562: Don't create a LeafCollector when the Scorer for the leaf is null

2019-02-04 Thread Michael Sokolov
This PR proposes a small change a co-worker found. We can avoid creating a
leaf collector for a leaf that matches no terms, which we can tell because the
scorer for it is null. One test was relying on the exact sequence of
collectors, enforcing that every one was created with no gaps in their
sequence, so this PR also cleans up that test, making the same assertions
without the need for that assumption.

-Mike Sokolov
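For readers following along, the shape of the change can be sketched with plain-JDK stand-ins (the real types involved are Lucene's Weight, BulkScorer and LeafCollector; everything below is a hypothetical illustration, not the actual patch):

```java
import java.util.ArrayList;
import java.util.List;

public class SkipNullScorerSketch {
    // Stand-in for IndexSearcher's per-leaf loop: hasMatches == false
    // models weight.bulkScorer(ctx) returning null for a segment.
    static List<String> collectLeaves(List<String> leafNames, List<Boolean> hasMatches) {
        List<String> collectors = new ArrayList<>();
        for (int i = 0; i < leafNames.size(); i++) {
            if (!hasMatches.get(i)) {
                continue; // scorer is null: skip creating a LeafCollector
            }
            collectors.add("collector-for-" + leafNames.get(i));
        }
        return collectors;
    }

    public static void main(String[] args) {
        List<String> leaves = List.of("seg0", "seg1", "seg2");
        List<Boolean> matches = List.of(true, false, true);
        // seg1 never gets a collector because its "scorer" is null
        System.out.println(collectLeaves(leaves, matches));
    }
}
```

The point of the change is the `continue`: a test asserting one collector per leaf, in order, would now see gaps, hence the test cleanup.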

On Mon, Feb 4, 2019 at 8:12 AM GitBox  wrote:

> msokolov opened a new pull request #562: Don't create a LeafCollector when
> the Scorer for the leaf is null
> URL: https://github.com/apache/lucene-solr/pull/562
>
>
>
>
> 
> This is an automated message from the Apache Git Service.
> To respond to the message, please log on GitHub and use the
> URL above to go to the specific comment.
>
> For queries about this service, please contact Infrastructure at:
> us...@infra.apache.org
>
>
> With regards,
> Apache Git Services
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[GitHub] msokolov opened a new pull request #562: Don't create a LeafCollector when the Scorer for the leaf is null

2019-02-04 Thread GitBox
msokolov opened a new pull request #562: Don't create a LeafCollector when the 
Scorer for the leaf is null
URL: https://github.com/apache/lucene-solr/pull/562
 
 
   





[GitHub] moshebla commented on issue #549: WIP:SOLR-13129

2019-02-04 Thread GitBox
moshebla commented on issue #549: WIP:SOLR-13129
URL: https://github.com/apache/lucene-solr/pull/549#issuecomment-460238334
 
 
   I tried to address all requested changes.





[jira] [Commented] (LUCENE-8655) No possibility to access to the underlying "valueSource" of a FunctionScoreQuery

2019-02-04 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16759796#comment-16759796
 ] 

Lucene/Solr QA commented on LUCENE-8655:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  0m 17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  0m 17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  0m 17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} queries in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}  5m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | LUCENE-8655 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12957454/LUCENE-8655.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP 
Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-LUCENE-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 49dc7a9 |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 20 2018 |
| Default Java | 1.8.0_191 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/161/testReport/ |
| modules | C: lucene lucene/queries U: lucene |
| Console output | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/161/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> No possibility to access to the underlying "valueSource" of a 
> FunctionScoreQuery 
> -
>
> Key: LUCENE-8655
> URL: https://issues.apache.org/jira/browse/LUCENE-8655
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.6
>Reporter: Gérald Quaire
>Priority: Major
>  Labels: patch
> Attachments: LUCENE-8655.patch, LUCENE-8655.patch
>
>
> After LUCENE-8099, "BoostedQuery" is deprecated in favour of 
> "FunctionScoreQuery". With BoostedQuery it was possible to access the 
> underlying "valueSource", but that is not the case with the class 
> "FunctionScoreQuery", which only has a getter for the wrapped query.
> For the development of specific parsers, it is necessary to access the 
> valueSource of a "FunctionScoreQuery". I suggest adding a new getter to the 
> class "FunctionScoreQuery", like below:
> {code:java}
>  /**
>    * @return the wrapped Query
>    */
>   public Query getWrappedQuery() {
>     return in;
>   }
>  /**
>    * @return a source of scores
>    */
>   public DoubleValuesSource getValueSource() {
>     return source;
>   }
> {code}
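To illustrate why a custom parser wants both accessors, here is a stand-in sketch (the class and types below are hypothetical; the real signatures would return Lucene's `Query` and `DoubleValuesSource`):

```java
public class GetterSketch {
    // Hypothetical stand-in for FunctionScoreQuery with the proposed getters.
    static final class FunctionScoredQuery {
        private final String in;      // stands in for the wrapped Query
        private final String source;  // stands in for the DoubleValuesSource

        FunctionScoredQuery(String in, String source) {
            this.in = in;
            this.source = source;
        }

        String getWrappedQuery() { return in; }

        String getValueSource() { return source; }
    }

    // A parser can now reach both halves of a function-scored query
    // instead of only the wrapped query.
    static String describe(FunctionScoredQuery q) {
        return q.getWrappedQuery() + " scored by " + q.getValueSource();
    }

    public static void main(String[] args) {
        FunctionScoredQuery q = new FunctionScoredQuery("field:value", "log(popularity)");
        System.out.println(describe(q));
    }
}
```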






[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1250 - Failure

2019-02-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1250/

No tests ran.

Build Log:
[...truncated 23440 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2479 links (2021 relative) to 3245 anchors in 248 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-9.0.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml


[jira] [Comment Edited] (SOLR-12743) Memory leak introduced in Solr 7.3.0

2019-02-04 Thread Markus Jelsma (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16759790#comment-16759790
 ] 

Markus Jelsma edited comment on SOLR-12743 at 2/4/19 11:37 AM:
---

Hello all,

Because i can only reproduce it on production, i only have a limited number of 
tries per day, it takes over an hour to test a minor change and more when i 
need to revert. Here are some new notes:

* it doesn't "appear" to be caused by the metrics part, i took out everything 
inside initializeMetrics(), but the leak persisted;
* i swapped FastLRU for LFU cache, otherwise same settings, the node ran OOM 
within minutes even before the commit got issued;
* no idea what happened, but because Solr can run OOM for no clear reason, 
restarted and tried again, *this time the otherwise leaking reference is 
collected as it should*!

So i finally see a "stable" 7.6 with LFUCache instead of FastLRUCache. To be 
clear, FastLRU does work without leaking, but only with a zero autoWarmCount.

I have no idea what is going on with the warming, the warming code is almost 
identical and i can't see how a SolrIndexSearcher instance would leak with 
FastLRU, but not with LFU. The CacheRegenerator is not leaking the reference, 
nor the calling code in SolrCore seems to be the problem.

I'll keep this single node on 7.6 for now and keep an eye on it.

Thanks!



was (Author: markus17):
Hello all,

Because i can only reproduce it on production, i only have a limited number of 
tries per day, it takes over an hour to test a minor change and more when i 
need to revert. Here are some new notes:

* it doesn't "appear" to be caused by the metrics part, i took out everything 
inside initializeMetrics(), but the leak persisted;
* i swapped FastLRU for LFU cache, otherwise same settings, the node ran OOM 
within minutes even before the commit got issued;
* no idea what happened, but because Solr can run OOM for no clear reason, 
restarted and tried again, this time the otherwise leaking reference is 
collected as it should!

So i finally see a "stable" 7.6 with LFUCache instead of FastLRUCache. To be 
clear, FastLRU does work without leaking, but only with a zero autoWarmCount.

I have no idea what is going on with the warming, the warming code is almost 
identical and i can't see how a SolrIndexSearcher instance would leak with 
FastLRU, but not with LFU. The CacheRegenerator is not leaking the reference, 
nor the calling code in SolrCore seems to be the problem.

I'll keep this single node on 7.6 for now and keep an eye on it.

Thanks!


> Memory leak introduced in Solr 7.3.0
> 
>
> Key: SOLR-12743
> URL: https://issues.apache.org/jira/browse/SOLR-12743
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3, 7.3.1, 7.4
>Reporter: Tomás Fernández Löbbe
>Priority: Critical
> Attachments: SOLR-12743.patch
>
>
> Reported initially by [~markus17]([1], [2]), but other users have had the 
> same issue [3]. Some of the key parts:
> {noformat}
> Some facts:
> * problem started after upgrading from 7.2.1 to 7.3.0;
> * it occurs only in our main text search collection, all other collections 
> are unaffected;
> * despite what i said earlier, it is so far unreproducible outside 
> production, even when mimicking production as good as we can;
> * SortedIntDocSet instances and ConcurrentLRUCache$CacheEntry instances are 
> both leaked on commit;
> * filterCache is enabled using FastLRUCache;
> * filter queries are simple field:value using strings, and three filter query 
> for time range using [NOW/DAY TO NOW+1DAY/DAY] syntax for 'today', 'last 
> week' and 'last month', but rarely used;
> * reloading the core manually frees OldGen;
> * custom URP's don't cause the problem, disabling them doesn't solve it;
> * the collection uses custom extensions for QueryComponent and 
> QueryElevationComponent, ExtendedDismaxQParser and MoreLikeThisQParser, a 
> whole bunch of TokenFilters, and several DocTransformers and due it being 
> only reproducible on production, i really cannot switch these back to 
> Solr/Lucene versions;
> * useFilterForSortedQuery is/was not defined in schema so it was default 
> (true?), SOLR-11769 could be the culprit, i disabled it just now only for the 
> node running 7.4.0, rest of collection runs 7.2.1;
> {noformat}
> {noformat}
> You were right, it was leaking exactly one SolrIndexSearcher instance on each 
> commit. 
> {noformat}
> And from Björn Häuser ([3]):
> {noformat}
> Problem Suspect 1
> 91 instances of "org.apache.solr.search.SolrIndexSearcher", loaded by 
> "org.eclipse.jetty.webapp.WebAppClassLoader @ 0x6807d1048" occupy 
> 1.981.148.336 (38,26%) bytes. 
> Biggest instances:
>         • 

[jira] [Commented] (SOLR-12743) Memory leak introduced in Solr 7.3.0

2019-02-04 Thread Markus Jelsma (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16759790#comment-16759790
 ] 

Markus Jelsma commented on SOLR-12743:
--

Hello all,

Because i can only reproduce it on production, i only have a limited number of 
tries per day, it takes over an hour to test a minor change and more when i 
need to revert. Here are some new notes:

* it doesn't "appear" to be caused by the metrics part, i took out everything 
inside initializeMetrics(), but the leak persisted;
* i swapped FastLRU for LFU cache, otherwise same settings, the node ran OOM 
within minutes even before the commit got issued;
* no idea what happened, but because Solr can run OOM for no clear reason, 
restarted and tried again, this time the otherwise leaking reference is 
collected as it should!

So i finally see a "stable" 7.6 with LFUCache instead of FastLRUCache. To be 
clear, FastLRU does work without leaking, but only with a zero autoWarmCount.

I have no idea what is going on with the warming, the warming code is almost 
identical and i can't see how a SolrIndexSearcher instance would leak with 
FastLRU, but not with LFU. The CacheRegenerator is not leaking the reference, 
nor the calling code in SolrCore seems to be the problem.

I'll keep this single node on 7.6 for now and keep an eye on it.

Thanks!
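For reference, the zero-autoWarmCount FastLRUCache setup described above corresponds to a solrconfig.xml entry roughly like this (the size values are illustrative, not the reporter's actual configuration):

```xml
<!-- filterCache with warming disabled: autowarmCount="0" is the
     configuration under which FastLRUCache did not leak searchers -->
<filterCache class="solr.FastLRUCache"
             size="512"
             initialSize="512"
             autowarmCount="0"/>
```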


> Memory leak introduced in Solr 7.3.0
> 
>
> Key: SOLR-12743
> URL: https://issues.apache.org/jira/browse/SOLR-12743
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3, 7.3.1, 7.4
>Reporter: Tomás Fernández Löbbe
>Priority: Critical
> Attachments: SOLR-12743.patch
>
>
> Reported initially by [~markus17]([1], [2]), but other users have had the 
> same issue [3]. Some of the key parts:
> {noformat}
> Some facts:
> * problem started after upgrading from 7.2.1 to 7.3.0;
> * it occurs only in our main text search collection, all other collections 
> are unaffected;
> * despite what i said earlier, it is so far unreproducible outside 
> production, even when mimicking production as good as we can;
> * SortedIntDocSet instances and ConcurrentLRUCache$CacheEntry instances are 
> both leaked on commit;
> * filterCache is enabled using FastLRUCache;
> * filter queries are simple field:value using strings, and three filter query 
> for time range using [NOW/DAY TO NOW+1DAY/DAY] syntax for 'today', 'last 
> week' and 'last month', but rarely used;
> * reloading the core manually frees OldGen;
> * custom URP's don't cause the problem, disabling them doesn't solve it;
> * the collection uses custom extensions for QueryComponent and 
> QueryElevationComponent, ExtendedDismaxQParser and MoreLikeThisQParser, a 
> whole bunch of TokenFilters, and several DocTransformers and due it being 
> only reproducible on production, i really cannot switch these back to 
> Solr/Lucene versions;
> * useFilterForSortedQuery is/was not defined in schema so it was default 
> (true?), SOLR-11769 could be the culprit, i disabled it just now only for the 
> node running 7.4.0, rest of collection runs 7.2.1;
> {noformat}
> {noformat}
> You were right, it was leaking exactly one SolrIndexSearcher instance on each 
> commit. 
> {noformat}
> And from Björn Häuser ([3]):
> {noformat}
> Problem Suspect 1
> 91 instances of "org.apache.solr.search.SolrIndexSearcher", loaded by 
> "org.eclipse.jetty.webapp.WebAppClassLoader @ 0x6807d1048" occupy 
> 1.981.148.336 (38,26%) bytes. 
> Biggest instances:
>         • org.apache.solr.search.SolrIndexSearcher @ 0x6ffd47ea8 - 70.087.272 
> (1,35%) bytes. 
>         • org.apache.solr.search.SolrIndexSearcher @ 0x79ea9c040 - 65.678.264 
> (1,27%) bytes. 
>         • org.apache.solr.search.SolrIndexSearcher @ 0x6855ad680 - 63.050.600 
> (1,22%) bytes. 
> Problem Suspect 2
> 223 instances of "org.apache.solr.util.ConcurrentLRUCache", loaded by 
> "org.eclipse.jetty.webapp.WebAppClassLoader @ 0x6807d1048" occupy 
> 1.373.110.208 (26,52%) bytes. 
> {noformat}
> More details in the email threads.
> [1] 
> [http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201804.mbox/%3Czarafa.5ae201c6.2f85.218a781d795b07b1%40mail1.ams.nl.openindex.io%3E]
>  [2] 
> [http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201806.mbox/%3Czarafa.5b351537.7b8c.647ddc93059f68eb%40mail1.ams.nl.openindex.io%3E]
>  [3] 
> [http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201809.mbox/%3c7b5e78c6-8cf6-42ee-8d28-872230ded...@gmail.com%3E]






[jira] [Created] (SOLR-13215) Upgrade dropwizard metrics to 4.0.5

2019-02-04 Thread Henrik (JIRA)
Henrik created SOLR-13215:
-

 Summary: Upgrade dropwizard metrics to 4.0.5
 Key: SOLR-13215
 URL: https://issues.apache.org/jira/browse/SOLR-13215
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: metrics
Affects Versions: 8.x
Reporter: Henrik


This removes the ganglia reporter which is now missing from the metrics library.

See [https://github.com/dropwizard/metrics/issues/1319]

 

Pull request in: [https://github.com/apache/lucene-solr/pull/561]






[GitHub] henrik242 opened a new pull request #561: Upgrade dropwizard metrics to 4.0.5.

2019-02-04 Thread GitBox
henrik242 opened a new pull request #561: Upgrade dropwizard metrics to 4.0.5.
URL: https://github.com/apache/lucene-solr/pull/561
 
 
   This removes the ganglia reporter which is now missing from the metrics 
library.
   
   See https://github.com/dropwizard/metrics/issues/1319





[jira] [Commented] (SOLR-6741) IPv6 Field Type

2019-02-04 Thread Dale Richardson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-6741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16759743#comment-16759743
 ] 

Dale Richardson commented on SOLR-6741:
---

Could somebody please assign this to me  - I do not appear to have the relevant 
access to do so.

> IPv6 Field Type
> ---
>
> Key: SOLR-6741
> URL: https://issues.apache.org/jira/browse/SOLR-6741
> Project: Solr
>  Issue Type: Improvement
>Reporter: Lloyd Ramey
>Priority: Major
> Attachments: SOLR-6741.patch
>
>
> It would be nice if Solr had a field type which could be used to index IPv6 
> data and supported efficient range queries. 






[GitHub] iverase commented on a change in pull request #556: LUCENE-8673: Use radix partitioning when merging dimensional points

2019-02-04 Thread GitBox
iverase commented on a change in pull request #556: LUCENE-8673: Use radix 
partitioning when merging dimensional points
URL: https://github.com/apache/lucene-solr/pull/556#discussion_r253401456
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/util/bkd/BKDRadixSelector.java
 ##
 @@ -0,0 +1,433 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.util.bkd;
+
+import java.io.IOException;
+import java.util.Arrays;
+
+import org.apache.lucene.store.Directory;
+import org.apache.lucene.util.BytesRef;
+import org.apache.lucene.util.FutureArrays;
+import org.apache.lucene.util.IntroSelector;
+
+/**
+ *
+ * Offline Radix selector for BKD tree.
+ *
+ *  @lucene.internal
+ * */
+public final class BKDRadixSelector {
+  //size of the histogram
+  private static final int HISTOGRAM_SIZE = 256;
+  // we store one histogram per recursion level
+  private final long[][] histogram;
+  //bytes we are sorting
+  private final int bytesPerDim;
+  // number of bytes to be sorted: bytesPerDim + Integer.BYTES
+  private final int bytesSorted;
+  //data dimensions size
+  private final int packedBytesLength;
+  //flag to when we are moving to sort on heap
+  private final int maxPointsSortedOffHeap;
+  //reusable buffer
+  private final byte[] offlineBuffer;
+  //holder for partition points
+  private final int[] partitionBucket;
+  //holder for partition bytes
+  private final byte[] partitionBytes;
+  //re-usable on-heap selector
+  private final HeapSelector heapSelector;
+  // scratch object to move bytes around
+  private final BytesRef bytesRef1 = new BytesRef();
+  // scratch object to move bytes around
+  private final BytesRef bytesRef2 = new BytesRef();
+  //Directory to create new Offline writer
+  private final Directory tempDir;
+  // prefix for temp files
+  private final String tempFileNamePrefix;
+
+
+
+  /**
+   * Sole constructor.
+   */
+  public BKDRadixSelector(int numDim, int bytesPerDim, int 
maxPointsSortedOffHeap, Directory tempDir, String tempFileNamePrefix) {
+this.bytesPerDim = bytesPerDim;
+this.packedBytesLength = numDim * bytesPerDim;
+this.bytesSorted = bytesPerDim + Integer.BYTES;
+this.maxPointsSortedOffHeap = maxPointsSortedOffHeap;
+this.offlineBuffer = new byte[maxPointsSortedOffHeap * (packedBytesLength 
+ Integer.BYTES)];
+this.partitionBucket = new int[bytesSorted];
+this.partitionBytes =  new byte[bytesSorted];
+this.histogram = new long[bytesSorted][HISTOGRAM_SIZE];
+this.bytesRef1.length = numDim * bytesPerDim;
+this.heapSelector = new HeapSelector(numDim, bytesPerDim);
+this.tempDir = tempDir;
+this.tempFileNamePrefix = tempFileNamePrefix;
+  }
+
+  /**
+   * Method to partition the input data. It returns the value of the dimension 
where
+   * the split happens.
+   */
+  public byte[] select(PointWriter points, PointWriter left, PointWriter 
right, long from, long to, long partitionPoint, int dim) throws IOException {
+checkArgs(from, to, partitionPoint);
+
+//If we are on heap then we just select on heap
+if (points instanceof HeapPointWriter) {
+  return heapSelect((HeapPointWriter) points, left, right, dim, 
Math.toIntExact(from), Math.toIntExact(to),  Math.toIntExact(partitionPoint), 
0);
+}
+
+//reset histogram
+for (int i = 0; i < bytesSorted; i++) {
+  Arrays.fill(histogram[i], 0);
+}
+OfflinePointWriter offlinePointWriter = (OfflinePointWriter) points;
+
+//find common prefix, it does already set histogram values if needed
+int commonPrefix = findCommonPrefix(offlinePointWriter, from, to, dim);
+
+//if all equals we just partition the data
+if (commonPrefix ==  bytesSorted) {
+  return partition(offlinePointWriter, left, right, from, to, 
partitionPoint, dim, null, commonPrefix - 1, partitionPoint);
+}
+//let's rock'n'roll
+return buildHistogramAndPartition(offlinePointWriter, null, left, right, 
from, to, partitionPoint, 0, commonPrefix, dim,0, 0);
+  }
+
+  void checkArgs(long from, long to, long middle) {
+if (middle < from) {
+  throw new IllegalArgumentException("middle must be >= from");
+}
+if (middle >= to) {
+  throw new 

[GitHub] iverase commented on a change in pull request #556: LUCENE-8673: Use radix partitioning when merging dimensional points

2019-02-04 Thread GitBox
iverase commented on a change in pull request #556: LUCENE-8673: Use radix 
partitioning when merging dimensional points
URL: https://github.com/apache/lucene-solr/pull/556#discussion_r253400942
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/util/bkd/BKDRadixSelector.java
 ##
 @@ -0,0 +1,433 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.util.bkd;
+
+import java.io.IOException;
+import java.util.Arrays;
+
+import org.apache.lucene.store.Directory;
+import org.apache.lucene.util.BytesRef;
+import org.apache.lucene.util.FutureArrays;
+import org.apache.lucene.util.IntroSelector;
+
+/**
+ *
+ * Offline Radix selector for BKD tree.
+ *
+ *  @lucene.internal
+ * */
+public final class BKDRadixSelector {
+  //size of the histogram
+  private static final int HISTOGRAM_SIZE = 256;
+  // we store one histogram per recursion level
+  private final long[][] histogram;
+  //bytes we are sorting
+  private final int bytesPerDim;
+  // number of bytes to be sorted: bytesPerDim + Integer.BYTES
+  private final int bytesSorted;
+  //data dimensions size
+  private final int packedBytesLength;
+  //flag to when we are moving to sort on heap
+  private final int maxPointsSortedOffHeap;
+  //reusable buffer
+  private final byte[] offlineBuffer;
+  //holder for partition points
+  private final int[] partitionBucket;
+  //holder for partition bytes
+  private final byte[] partitionBytes;
+  //re-usable on-heap selector
+  private final HeapSelector heapSelector;
+  // scratch object to move bytes around
+  private final BytesRef bytesRef1 = new BytesRef();
+  // scratch object to move bytes around
+  private final BytesRef bytesRef2 = new BytesRef();
+  //Directory to create new Offline writer
+  private final Directory tempDir;
+  // prefix for temp files
+  private final String tempFileNamePrefix;
+
+
+
+  /**
+   * Sole constructor.
+   */
+  public BKDRadixSelector(int numDim, int bytesPerDim, int 
maxPointsSortedOffHeap, Directory tempDir, String tempFileNamePrefix) {
+this.bytesPerDim = bytesPerDim;
+this.packedBytesLength = numDim * bytesPerDim;
+this.bytesSorted = bytesPerDim + Integer.BYTES;
+this.maxPointsSortedOffHeap = maxPointsSortedOffHeap;
+this.offlineBuffer = new byte[maxPointsSortedOffHeap * (packedBytesLength 
+ Integer.BYTES)];
+this.partitionBucket = new int[bytesSorted];
+this.partitionBytes =  new byte[bytesSorted];
+this.histogram = new long[bytesSorted][HISTOGRAM_SIZE];
+this.bytesRef1.length = numDim * bytesPerDim;
+this.heapSelector = new HeapSelector(numDim, bytesPerDim);
+this.tempDir = tempDir;
+this.tempFileNamePrefix = tempFileNamePrefix;
+  }
+
+  /**
+   * Method to partition the input data. It returns the value of the dimension 
where
+   * the split happens.
+   */
+  public byte[] select(PointWriter points, PointWriter left, PointWriter 
right, long from, long to, long partitionPoint, int dim) throws IOException {
+checkArgs(from, to, partitionPoint);
+
+//If we are on heap then we just select on heap
+if (points instanceof HeapPointWriter) {
+  return heapSelect((HeapPointWriter) points, left, right, dim, 
Math.toIntExact(from), Math.toIntExact(to),  Math.toIntExact(partitionPoint), 
0);
+}
+
+//reset histogram
+for (int i = 0; i < bytesSorted; i++) {
+  Arrays.fill(histogram[i], 0);
+}
+OfflinePointWriter offlinePointWriter = (OfflinePointWriter) points;
+
+//find common prefix, it does already set histogram values if needed
+int commonPrefix = findCommonPrefix(offlinePointWriter, from, to, dim);
+
+//if all equals we just partition the data
+if (commonPrefix ==  bytesSorted) {
+  return partition(offlinePointWriter, left, right, from, to, 
partitionPoint, dim, null, commonPrefix - 1, partitionPoint);
+}
+//let's rock'n'roll
+return buildHistogramAndPartition(offlinePointWriter, null, left, right, 
from, to, partitionPoint, 0, commonPrefix, dim,0, 0);
+  }
+
+  void checkArgs(long from, long to, long middle) {
+if (middle < from) {
+  throw new IllegalArgumentException("middle must be >= from");
+}
+if (middle >= to) {
+  throw new 

[GitHub] iverase commented on a change in pull request #556: LUCENE-8673: Use radix partitioning when merging dimensional points

2019-02-04 Thread GitBox
iverase commented on a change in pull request #556: LUCENE-8673: Use radix 
partitioning when merging dimensional points
URL: https://github.com/apache/lucene-solr/pull/556#discussion_r253400460
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/util/bkd/OfflinePointReader.java
 ##
 @@ -74,55 +69,68 @@ public OfflinePointReader(Directory tempDir, String 
tempFileName, int packedByte
   // at another level of the BKDWriter recursion
   in = tempDir.openInput(tempFileName, IOContext.READONCE);
 }
+
 name = tempFileName;
 
 long seekFP = start * bytesPerDoc;
 in.seek(seekFP);
 countLeft = length;
-packedValue = new byte[packedBytesLength];
-this.longOrds = longOrds;
+if (reusableBuffer != null) {
 
 Review comment:
  I am actually wondering whether we should make those reader constructors 
protected. They should always be constructed from the writers.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13214) non ok status: 414, message:Request-URI Too Long

2019-02-04 Thread Tushar Choudhary (JIRA)
Tushar Choudhary created SOLR-13214:
---

 Summary: non ok status: 414, message:Request-URI Too Long
 Key: SOLR-13214
 URL: https://issues.apache.org/jira/browse/SOLR-13214
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: clients - java
Reporter: Tushar Choudhary


Getting an error from Solr:

org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at "URL" :non ok status: 414, message:Request-URI Too Long

Can anyone let me know why I am getting this exception and a possible solution to 
overcome this problem? We are using SolrCloud and ZooKeeper.
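A 414 usually means the request was sent as a GET and the query string (e.g. a filter on thousands of IDs) exceeded the servlet container's request-line/header limit; Jetty's default is 8 KB. A stdlib-only illustration (not Solr code) of how quickly a GET query string blows past that limit:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.StringJoiner;

public class UriLengthDemo {
  /** Build the query string for a hypothetical filter on numIds document IDs. */
  static String buildQueryString(int numIds) {
    StringJoiner ids = new StringJoiner(",");
    for (int i = 0; i < numIds; i++) {
      ids.add("doc-" + i);
    }
    String fq = "id:(" + ids + ")";
    // URL-encoding expands commas and parentheses to %XX triples
    return "q=*:*&fq=" + URLEncoder.encode(fq, StandardCharsets.UTF_8);
  }

  public static void main(String[] args) {
    int jettyDefaultLimit = 8192; // Jetty's default requestHeaderSize in bytes
    String qs = buildQueryString(2000);
    System.out.println("query string length " + qs.length()
        + ", over the " + jettyDefaultLimit + "-byte default: "
        + (qs.length() > jettyDefaultLimit));
  }
}
```

The usual remedies are to send the request as a POST (in SolrJ, the `query` overload that takes `SolrRequest.METHOD.POST`) or, if GET must be kept, to raise the container limit (recent Solr versions expose a `solr.jetty.request.header.size` property for this).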

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] iverase commented on a change in pull request #556: LUCENE-8673: Use radix partitioning when merging dimensional points

2019-02-04 Thread GitBox
iverase commented on a change in pull request #556: LUCENE-8673: Use radix 
partitioning when merging dimensional points
URL: https://github.com/apache/lucene-solr/pull/556#discussion_r253399133
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/util/bkd/PointWriter.java
 ##
 @@ -19,24 +19,30 @@
 
 import java.io.Closeable;
 import java.io.IOException;
-import java.util.List;
+
+import org.apache.lucene.util.BytesRef;
 
 /** Appends many points, and then at the end provides a {@link PointReader} to 
iterate
  *  those points.  This abstracts away whether we write to disk, or use simple 
arrays
  *  in heap.
  *
- *  @lucene.internal */
-public interface PointWriter extends Closeable {
-  /** Add a new point */
-  void append(byte[] packedValue, long ord, int docID) throws IOException;
+ *  @lucene.internal
+ *  */
+public interface PointWriter extends Closeable {
+  /** Add a new point from byte array*/
+  void append(byte[] packedValue, int docID) throws IOException;
 
 Review comment:
  This method is used when we spill the incoming points offline in the BKD 
writer. We can wrap the incoming byte[] into a BytesRef there.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13213) Search Components cannot modify "shards" parameter

2019-02-04 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-13213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16759704#comment-16759704
 ] 

Jan Høydahl commented on SOLR-13213:


Are you aware of any other potential side effects of running the {{prepare}} phase 
before initialising the ShardHandler? If not, I'm going to run some more tests and 
commit to master on Wednesday. This change will also allow search components to 
modify {{shards.tolerant}}, {{_route_}}, {{shards.qt}}, etc. in the prepare stage.

> Search Components cannot modify "shards" parameter
> --
>
> Key: SOLR-13213
> URL: https://issues.apache.org/jira/browse/SOLR-13213
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.x
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When creating a custom search component for a customer, I realised that 
> modifying "shards" parameter in {{prepare()}} is not possible since in 
> {{SearchHandler}}, the {{ShardHandler}} is initialised based on "shards" 
> parameter just *before* search components are consulted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] jpountz commented on a change in pull request #556: LUCENE-8673: Use radix partitioning when merging dimensional points

2019-02-04 Thread GitBox
jpountz commented on a change in pull request #556: LUCENE-8673: Use radix 
partitioning when merging dimensional points
URL: https://github.com/apache/lucene-solr/pull/556#discussion_r253378113
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/util/bkd/BKDRadixSelector.java
 ##
 @@ -0,0 +1,433 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.util.bkd;
+
+import java.io.IOException;
+import java.util.Arrays;
+
+import org.apache.lucene.store.Directory;
+import org.apache.lucene.util.BytesRef;
+import org.apache.lucene.util.FutureArrays;
+import org.apache.lucene.util.IntroSelector;
+
+/**
+ *
+ * Offline Radix selector for BKD tree.
+ *
+ *  @lucene.internal
+ * */
+public final class BKDRadixSelector {
+  //size of the histogram
+  private static final int HISTOGRAM_SIZE = 256;
+  // we store one histogram per recursion level
+  private final long[][] histogram;
+  //bytes we are sorting
+  private final int bytesPerDim;
+  // number of bytes to be sorted: bytesPerDim + Integer.BYTES
+  private final int bytesSorted;
+  //data dimensions size
+  private final int packedBytesLength;
+  //flag to when we are moving to sort on heap
+  private final int maxPointsSortedOffHeap;
+  //reusable buffer
+  private final byte[] offlineBuffer;
+  //holder for partition points
+  private final int[] partitionBucket;
+  //holder for partition bytes
+  private final byte[] partitionBytes;
+  //re-usable on-heap selector
+  private final HeapSelector heapSelector;
+  // scratch object to move bytes around
+  private final BytesRef bytesRef1 = new BytesRef();
+  // scratch object to move bytes around
+  private final BytesRef bytesRef2 = new BytesRef();
+  //Directory to create new Offline writer
+  private final Directory tempDir;
+  // prefix for temp files
+  private final String tempFileNamePrefix;
+
+
+
+  /**
+   * Sole constructor.
+   */
+  public BKDRadixSelector(int numDim, int bytesPerDim, int 
maxPointsSortedOffHeap, Directory tempDir, String tempFileNamePrefix) {
+this.bytesPerDim = bytesPerDim;
+this.packedBytesLength = numDim * bytesPerDim;
+this.bytesSorted = bytesPerDim + Integer.BYTES;
+this.maxPointsSortedOffHeap = maxPointsSortedOffHeap;
+this.offlineBuffer = new byte[maxPointsSortedOffHeap * (packedBytesLength 
+ Integer.BYTES)];
+this.partitionBucket = new int[bytesSorted];
+this.partitionBytes =  new byte[bytesSorted];
+this.histogram = new long[bytesSorted][HISTOGRAM_SIZE];
+this.bytesRef1.length = numDim * bytesPerDim;
+this.heapSelector = new HeapSelector(numDim, bytesPerDim);
+this.tempDir = tempDir;
+this.tempFileNamePrefix = tempFileNamePrefix;
+  }
+
+  /**
+   * Method to partition the input data. It returns the value of the dimension 
where
+   * the split happens.
+   */
+  public byte[] select(PointWriter points, PointWriter left, PointWriter 
right, long from, long to, long partitionPoint, int dim) throws IOException {
+checkArgs(from, to, partitionPoint);
+
+//If we are on heap then we just select on heap
+if (points instanceof HeapPointWriter) {
+  return heapSelect((HeapPointWriter) points, left, right, dim, 
Math.toIntExact(from), Math.toIntExact(to),  Math.toIntExact(partitionPoint), 
0);
+}
+
+//reset histogram
+for (int i = 0; i < bytesSorted; i++) {
+  Arrays.fill(histogram[i], 0);
+}
+OfflinePointWriter offlinePointWriter = (OfflinePointWriter) points;
+
+//find common prefix, it does already set histogram values if needed
+int commonPrefix = findCommonPrefix(offlinePointWriter, from, to, dim);
+
+//if all equals we just partition the data
+if (commonPrefix ==  bytesSorted) {
+  return partition(offlinePointWriter, left, right, from, to, 
partitionPoint, dim, null, commonPrefix - 1, partitionPoint);
+}
+//let's rock'n'roll
+return buildHistogramAndPartition(offlinePointWriter, null, left, right, 
from, to, partitionPoint, 0, commonPrefix, dim,0, 0);
+  }
+
+  void checkArgs(long from, long to, long middle) {
+if (middle < from) {
+  throw new IllegalArgumentException("middle must be >= from");
+}
+if (middle >= to) {
+  throw new 

[GitHub] jpountz commented on a change in pull request #556: LUCENE-8673: Use radix partitioning when merging dimensional points

2019-02-04 Thread GitBox
jpountz commented on a change in pull request #556: LUCENE-8673: Use radix 
partitioning when merging dimensional points
URL: https://github.com/apache/lucene-solr/pull/556#discussion_r253378270
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/util/bkd/BKDRadixSelector.java
 ##
 @@ -0,0 +1,433 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.util.bkd;
+
+import java.io.IOException;
+import java.util.Arrays;
+
+import org.apache.lucene.store.Directory;
+import org.apache.lucene.util.BytesRef;
+import org.apache.lucene.util.FutureArrays;
+import org.apache.lucene.util.IntroSelector;
+
+/**
+ *
+ * Offline Radix selector for BKD tree.
+ *
+ *  @lucene.internal
+ * */
+public final class BKDRadixSelector {
+  //size of the histogram
+  private static final int HISTOGRAM_SIZE = 256;
+  // we store one histogram per recursion level
+  private final long[][] histogram;
+  //bytes we are sorting
+  private final int bytesPerDim;
+  // number of bytes to be sorted: bytesPerDim + Integer.BYTES
+  private final int bytesSorted;
+  //data dimensions size
+  private final int packedBytesLength;
+  //flag to when we are moving to sort on heap
+  private final int maxPointsSortedOffHeap;
+  //reusable buffer
+  private final byte[] offlineBuffer;
+  //holder for partition points
+  private final int[] partitionBucket;
+  //holder for partition bytes
+  private final byte[] partitionBytes;
+  //re-usable on-heap selector
+  private final HeapSelector heapSelector;
+  // scratch object to move bytes around
+  private final BytesRef bytesRef1 = new BytesRef();
+  // scratch object to move bytes around
+  private final BytesRef bytesRef2 = new BytesRef();
+  //Directory to create new Offline writer
+  private final Directory tempDir;
+  // prefix for temp files
+  private final String tempFileNamePrefix;
+
+
+
+  /**
+   * Sole constructor.
+   */
+  public BKDRadixSelector(int numDim, int bytesPerDim, int 
maxPointsSortedOffHeap, Directory tempDir, String tempFileNamePrefix) {
+this.bytesPerDim = bytesPerDim;
+this.packedBytesLength = numDim * bytesPerDim;
+this.bytesSorted = bytesPerDim + Integer.BYTES;
+this.maxPointsSortedOffHeap = maxPointsSortedOffHeap;
+this.offlineBuffer = new byte[maxPointsSortedOffHeap * (packedBytesLength 
+ Integer.BYTES)];
+this.partitionBucket = new int[bytesSorted];
+this.partitionBytes =  new byte[bytesSorted];
+this.histogram = new long[bytesSorted][HISTOGRAM_SIZE];
+this.bytesRef1.length = numDim * bytesPerDim;
+this.heapSelector = new HeapSelector(numDim, bytesPerDim);
+this.tempDir = tempDir;
+this.tempFileNamePrefix = tempFileNamePrefix;
+  }
+
+  /**
+   * Method to partition the input data. It returns the value of the dimension 
where
+   * the split happens.
+   */
+  public byte[] select(PointWriter points, PointWriter left, PointWriter 
right, long from, long to, long partitionPoint, int dim) throws IOException {
+checkArgs(from, to, partitionPoint);
+
+//If we are on heap then we just select on heap
+if (points instanceof HeapPointWriter) {
+  return heapSelect((HeapPointWriter) points, left, right, dim, 
Math.toIntExact(from), Math.toIntExact(to),  Math.toIntExact(partitionPoint), 
0);
+}
+
+//reset histogram
+for (int i = 0; i < bytesSorted; i++) {
+  Arrays.fill(histogram[i], 0);
+}
+OfflinePointWriter offlinePointWriter = (OfflinePointWriter) points;
+
+//find common prefix, it does already set histogram values if needed
+int commonPrefix = findCommonPrefix(offlinePointWriter, from, to, dim);
+
+//if all equals we just partition the data
+if (commonPrefix ==  bytesSorted) {
+  return partition(offlinePointWriter, left, right, from, to, 
partitionPoint, dim, null, commonPrefix - 1, partitionPoint);
+}
+//let's rock'n'roll
+return buildHistogramAndPartition(offlinePointWriter, null, left, right, 
from, to, partitionPoint, 0, commonPrefix, dim,0, 0);
+  }
+
+  void checkArgs(long from, long to, long middle) {
+if (middle < from) {
+  throw new IllegalArgumentException("middle must be >= from");
+}
+if (middle >= to) {
+  throw new 

[GitHub] jpountz commented on a change in pull request #556: LUCENE-8673: Use radix partitioning when merging dimensional points

2019-02-04 Thread GitBox
jpountz commented on a change in pull request #556: LUCENE-8673: Use radix 
partitioning when merging dimensional points
URL: https://github.com/apache/lucene-solr/pull/556#discussion_r253373809
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/util/bkd/BKDRadixSelector.java
 ##
 @@ -0,0 +1,433 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.util.bkd;
+
+import java.io.IOException;
+import java.util.Arrays;
+
+import org.apache.lucene.store.Directory;
+import org.apache.lucene.util.BytesRef;
+import org.apache.lucene.util.FutureArrays;
+import org.apache.lucene.util.IntroSelector;
+
+/**
+ *
+ * Offline Radix selector for BKD tree.
+ *
+ *  @lucene.internal
+ * */
+public final class BKDRadixSelector {
+  //size of the histogram
+  private static final int HISTOGRAM_SIZE = 256;
+  // we store one histogram per recursion level
+  private final long[][] histogram;
+  //bytes we are sorting
+  private final int bytesPerDim;
+  // number of bytes to be sorted: bytesPerDim + Integer.BYTES
+  private final int bytesSorted;
+  //data dimensions size
+  private final int packedBytesLength;
+  //flag to when we are moving to sort on heap
+  private final int maxPointsSortedOffHeap;
+  //reusable buffer
+  private final byte[] offlineBuffer;
+  //holder for partition points
+  private final int[] partitionBucket;
+  //holder for partition bytes
+  private final byte[] partitionBytes;
+  //re-usable on-heap selector
+  private final HeapSelector heapSelector;
+  // scratch object to move bytes around
+  private final BytesRef bytesRef1 = new BytesRef();
+  // scratch object to move bytes around
+  private final BytesRef bytesRef2 = new BytesRef();
+  //Directory to create new Offline writer
+  private final Directory tempDir;
+  // prefix for temp files
+  private final String tempFileNamePrefix;
+
+
+
+  /**
+   * Sole constructor.
+   */
+  public BKDRadixSelector(int numDim, int bytesPerDim, int 
maxPointsSortedOffHeap, Directory tempDir, String tempFileNamePrefix) {
+this.bytesPerDim = bytesPerDim;
+this.packedBytesLength = numDim * bytesPerDim;
+this.bytesSorted = bytesPerDim + Integer.BYTES;
+this.maxPointsSortedOffHeap = maxPointsSortedOffHeap;
+this.offlineBuffer = new byte[maxPointsSortedOffHeap * (packedBytesLength 
+ Integer.BYTES)];
 
 Review comment:
   My gut feeling is that we don't need to spend so much memory on this buffer 
for good performance and could instead make it around 8KB all the time 
(non-configurable) so that on-heap selection can use about 2x more memory.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] jpountz commented on a change in pull request #556: LUCENE-8673: Use radix partitioning when merging dimensional points

2019-02-04 Thread GitBox
jpountz commented on a change in pull request #556: LUCENE-8673: Use radix 
partitioning when merging dimensional points
URL: https://github.com/apache/lucene-solr/pull/556#discussion_r253381960
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/util/bkd/BKDRadixSelector.java
 ##
 @@ -0,0 +1,433 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.util.bkd;
+
+import java.io.IOException;
+import java.util.Arrays;
+
+import org.apache.lucene.store.Directory;
+import org.apache.lucene.util.BytesRef;
+import org.apache.lucene.util.FutureArrays;
+import org.apache.lucene.util.IntroSelector;
+
+/**
+ *
+ * Offline Radix selector for BKD tree.
+ *
+ *  @lucene.internal
+ * */
+public final class BKDRadixSelector {
+  //size of the histogram
+  private static final int HISTOGRAM_SIZE = 256;
+  // we store one histogram per recursion level
+  private final long[][] histogram;
+  //bytes we are sorting
+  private final int bytesPerDim;
+  // number of bytes to be sorted: bytesPerDim + Integer.BYTES
+  private final int bytesSorted;
+  //data dimensions size
+  private final int packedBytesLength;
+  //flag to when we are moving to sort on heap
+  private final int maxPointsSortedOffHeap;
 
 Review comment:
   s/Off/On/ ?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
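
The patch under review finds the partition point with per-byte 256-bucket histograms instead of fully sorting the points. A minimal on-heap sketch of that idea in plain Java (an illustration only; the real BKDRadixSelector additionally handles offline spilling, common prefixes, and doc-ID tie-breaking):

```java
import java.util.ArrayList;
import java.util.List;

public class RadixSelect {
  /**
   * MSB radix select: return the k-th smallest (0-based) fixed-width byte[]
   * value by counting bucket sizes one byte position at a time, recursing
   * only into the single bucket that contains position k.
   */
  static byte[] select(List<byte[]> values, int k, int byteAt) {
    if (values.size() == 1 || byteAt == values.get(0).length) {
      return values.get(0); // single candidate, or all remaining values equal
    }
    long[] histogram = new long[256];
    for (byte[] v : values) {
      histogram[v[byteAt] & 0xFF]++;
    }
    // walk buckets until we reach the one containing the k-th value
    int bucket = 0;
    long skipped = 0;
    while (skipped + histogram[bucket] <= k) {
      skipped += histogram[bucket];
      bucket++;
    }
    List<byte[]> inBucket = new ArrayList<>();
    for (byte[] v : values) {
      if ((v[byteAt] & 0xFF) == bucket) {
        inBucket.add(v);
      }
    }
    // recurse on the next byte position within the chosen bucket
    return select(inBucket, (int) (k - skipped), byteAt + 1);
  }
}
```

Each level touches every remaining value once, so the cost is linear in the data rather than the O(n log n) of a full sort, which is why radix partitioning pays off when merging large point sets.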



[GitHub] jpountz commented on a change in pull request #556: LUCENE-8673: Use radix partitioning when merging dimensional points

2019-02-04 Thread GitBox
jpountz commented on a change in pull request #556: LUCENE-8673: Use radix 
partitioning when merging dimensional points
URL: https://github.com/apache/lucene-solr/pull/556#discussion_r253380758
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/util/bkd/PointWriter.java
 ##
 @@ -19,24 +19,30 @@
 
 import java.io.Closeable;
 import java.io.IOException;
-import java.util.List;
+
+import org.apache.lucene.util.BytesRef;
 
 /** Appends many points, and then at the end provides a {@link PointReader} to 
iterate
  *  those points.  This abstracts away whether we write to disk, or use simple 
arrays
  *  in heap.
  *
- *  @lucene.internal */
-public interface PointWriter extends Closeable {
-  /** Add a new point */
-  void append(byte[] packedValue, long ord, int docID) throws IOException;
+ *  @lucene.internal
+ *  */
+public interface PointWriter extends Closeable {
+  /** Add a new point from byte array*/
+  void append(byte[] packedValue, int docID) throws IOException;
 
 Review comment:
  do we still need this one, i.e. could callers always call the method that 
takes a BytesRef?





[GitHub] jpountz commented on a change in pull request #556: LUCENE-8673: Use radix partitioning when merging dimensional points

2019-02-04 Thread GitBox
jpountz commented on a change in pull request #556: LUCENE-8673: Use radix 
partitioning when merging dimensional points
URL: https://github.com/apache/lucene-solr/pull/556#discussion_r253376398
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/util/bkd/BKDRadixSelector.java
 ##
 @@ -0,0 +1,433 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.util.bkd;
+
+import java.io.IOException;
+import java.util.Arrays;
+
+import org.apache.lucene.store.Directory;
+import org.apache.lucene.util.BytesRef;
+import org.apache.lucene.util.FutureArrays;
+import org.apache.lucene.util.IntroSelector;
+
+/**
+ *
+ * Offline Radix selector for BKD tree.
+ *
+ *  @lucene.internal
+ * */
+public final class BKDRadixSelector {
+  //size of the histogram
+  private static final int HISTOGRAM_SIZE = 256;
+  // we store one histogram per recursion level
+  private final long[][] histogram;
+  //bytes we are sorting
+  private final int bytesPerDim;
+  // number of bytes to be sorted: bytesPerDim + Integer.BYTES
+  private final int bytesSorted;
+  //data dimensions size
+  private final int packedBytesLength;
+  //flag to when we are moving to sort on heap
+  private final int maxPointsSortedOffHeap;
+  //reusable buffer
+  private final byte[] offlineBuffer;
+  //holder for partition points
+  private final int[] partitionBucket;
+  //holder for partition bytes
+  private final byte[] partitionBytes;
+  //re-usable on-heap selector
+  private final HeapSelector heapSelector;
+  // scratch object to move bytes around
+  private final BytesRef bytesRef1 = new BytesRef();
+  // scratch object to move bytes around
+  private final BytesRef bytesRef2 = new BytesRef();
+  //Directory to create new Offline writer
+  private final Directory tempDir;
+  // prefix for temp files
+  private final String tempFileNamePrefix;
+
+
+
+  /**
+   * Sole constructor.
+   */
+  public BKDRadixSelector(int numDim, int bytesPerDim, int 
maxPointsSortedOffHeap, Directory tempDir, String tempFileNamePrefix) {
+this.bytesPerDim = bytesPerDim;
+this.packedBytesLength = numDim * bytesPerDim;
+this.bytesSorted = bytesPerDim + Integer.BYTES;
+this.maxPointsSortedOffHeap = maxPointsSortedOffHeap;
+this.offlineBuffer = new byte[maxPointsSortedOffHeap * (packedBytesLength 
+ Integer.BYTES)];
+this.partitionBucket = new int[bytesSorted];
+this.partitionBytes =  new byte[bytesSorted];
+this.histogram = new long[bytesSorted][HISTOGRAM_SIZE];
+this.bytesRef1.length = numDim * bytesPerDim;
+this.heapSelector = new HeapSelector(numDim, bytesPerDim);
+this.tempDir = tempDir;
+this.tempFileNamePrefix = tempFileNamePrefix;
+  }
+
+  /**
+   * Method to partition the input data. It returns the value of the dimension 
where
+   * the split happens.
+   */
+  public byte[] select(PointWriter points, PointWriter left, PointWriter 
right, long from, long to, long partitionPoint, int dim) throws IOException {
+checkArgs(from, to, partitionPoint);
+
+//If we are on heap then we just select on heap
+if (points instanceof HeapPointWriter) {
+  return heapSelect((HeapPointWriter) points, left, right, dim, 
Math.toIntExact(from), Math.toIntExact(to),  Math.toIntExact(partitionPoint), 
0);
+}
+
+//reset histogram
+for (int i = 0; i < bytesSorted; i++) {
+  Arrays.fill(histogram[i], 0);
+}
+OfflinePointWriter offlinePointWriter = (OfflinePointWriter) points;
+
+//find common prefix, it does already set histogram values if needed
+int commonPrefix = findCommonPrefix(offlinePointWriter, from, to, dim);
+
+//if all equals we just partition the data
+if (commonPrefix ==  bytesSorted) {
+  return partition(offlinePointWriter, left, right, from, to, 
partitionPoint, dim, null, commonPrefix - 1, partitionPoint);
+}
+//let's rock'n'roll
+return buildHistogramAndPartition(offlinePointWriter, null, left, right, 
from, to, partitionPoint, 0, commonPrefix, dim,0, 0);
+  }
+
+  void checkArgs(long from, long to, long middle) {
+if (middle < from) {
+  throw new IllegalArgumentException("middle must be >= from");
+}
+if (middle >= to) {
+  throw new 

[GitHub] jpountz commented on a change in pull request #556: LUCENE-8673: Use radix partitioning when merging dimensional points

2019-02-04 Thread GitBox
jpountz commented on a change in pull request #556: LUCENE-8673: Use radix 
partitioning when merging dimensional points
URL: https://github.com/apache/lucene-solr/pull/556#discussion_r253380207
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/util/bkd/PointWriter.java
 ##
 @@ -19,24 +19,30 @@
 
 import java.io.Closeable;
 import java.io.IOException;
-import java.util.List;
+
+import org.apache.lucene.util.BytesRef;
 
 /** Appends many points, and then at the end provides a {@link PointReader} to 
iterate
  *  those points.  This abstracts away whether we write to disk, or use simple 
arrays
  *  in heap.
  *
- *  @lucene.internal */
-public interface PointWriter extends Closeable {
-  /** Add a new point */
-  void append(byte[] packedValue, long ord, int docID) throws IOException;
+ *  @lucene.internal
+ *  */
+public interface PointWriter extends Closeable {
 
 Review comment:
   I don't think we need generics here; making the getReader signature return 
an OfflinePointReader in OfflinePointWriter should be enough?


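The generics question above can be answered with Java's covariant return types: an overriding method may narrow its declared return type. A sketch with simplified stand-in interfaces, not the real Lucene signatures:

```java
interface PointReader {
    boolean next();
}

class OfflinePointReader implements PointReader {
    @Override
    public boolean next() {
        return false;  // stub
    }

    long filePointer() {
        return 0L;     // hypothetical offline-specific API
    }
}

interface PointWriter {
    PointReader getReader(long start);
}

public class OfflinePointWriter implements PointWriter {
    // Covariant override: narrows the return type, so callers that hold an
    // OfflinePointWriter get an OfflinePointReader with no generics involved.
    @Override
    public OfflinePointReader getReader(long start) {
        return new OfflinePointReader();
    }

    public static void main(String[] args) {
        OfflinePointReader reader = new OfflinePointWriter().getReader(0L);
        System.out.println(reader.filePointer()); // no cast needed; prints 0
    }
}
```

Callers going through the PointWriter interface still see a PointReader, so nothing else has to change.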



[GitHub] jpountz commented on a change in pull request #556: LUCENE-8673: Use radix partitioning when merging dimensional points

2019-02-04 Thread GitBox
jpountz commented on a change in pull request #556: LUCENE-8673: Use radix 
partitioning when merging dimensional points
URL: https://github.com/apache/lucene-solr/pull/556#discussion_r253373968
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/util/bkd/OfflinePointReader.java
 ##
 @@ -74,55 +69,68 @@ public OfflinePointReader(Directory tempDir, String 
tempFileName, int packedByte
   // at another level of the BKDWriter recursion
   in = tempDir.openInput(tempFileName, IOContext.READONCE);
 }
+
 name = tempFileName;
 
 long seekFP = start * bytesPerDoc;
 in.seek(seekFP);
 countLeft = length;
-packedValue = new byte[packedBytesLength];
-this.longOrds = longOrds;
+if (reusableBuffer != null) {
 
 Review comment:
   Then, if we do that, maybe we can remove the maxPointInHeap ctor argument 
and compute it from this buffer size?


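The suggestion above, deriving the heap capacity from the reusable buffer instead of passing it separately, is a one-line computation. A sketch with an assumed record layout (packed point value plus a 4-byte docID per record):

```java
public class BufferCapacityExample {
    // How many whole records fit in the caller-supplied buffer.
    static int maxPointsInHeap(byte[] reusableBuffer, int bytesPerDoc) {
        return reusableBuffer.length / bytesPerDoc;  // whole records only
    }

    public static void main(String[] args) {
        int packedBytesLength = 2 * Integer.BYTES;           // e.g. 2 dims * 4 bytes
        int bytesPerDoc = packedBytesLength + Integer.BYTES; // value + docID = 12
        System.out.println(maxPointsInHeap(new byte[1024], bytesPerDoc)); // 85
    }
}
```

Deriving the value this way removes one constructor argument and makes it impossible for the stated capacity and the actual buffer size to disagree.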



[GitHub] jpountz commented on a change in pull request #556: LUCENE-8673: Use radix partitioning when merging dimensional points

2019-02-04 Thread GitBox
jpountz commented on a change in pull request #556: LUCENE-8673: Use radix 
partitioning when merging dimensional points
URL: https://github.com/apache/lucene-solr/pull/556#discussion_r253376318
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/util/bkd/BKDRadixSelector.java
 ##
 @@ -0,0 +1,433 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.util.bkd;
+
+import java.io.IOException;
+import java.util.Arrays;
+
+import org.apache.lucene.store.Directory;
+import org.apache.lucene.util.BytesRef;
+import org.apache.lucene.util.FutureArrays;
+import org.apache.lucene.util.IntroSelector;
+
+/**
+ *
+ * Offline Radix selector for BKD tree.
+ *
+ *  @lucene.internal
+ * */
+public final class BKDRadixSelector {
+  //size of the histogram
+  private static final int HISTOGRAM_SIZE = 256;
+  // we store one histogram per recursion level
+  private final long[][] histogram;
+  //bytes we are sorting
+  private final int bytesPerDim;
+  // number of bytes to be sorted: bytesPerDim + Integer.BYTES
+  private final int bytesSorted;
+  //data dimensions size
+  private final int packedBytesLength;
+  //flag to when we are moving to sort on heap
+  private final int maxPointsSortedOffHeap;
+  //reusable buffer
+  private final byte[] offlineBuffer;
+  //holder for partition points
+  private final int[] partitionBucket;
+  //holder for partition bytes
+  private final byte[] partitionBytes;
+  //re-usable on-heap selector
+  private final HeapSelector heapSelector;
+  // scratch object to move bytes around
+  private final BytesRef bytesRef1 = new BytesRef();
+  // scratch object to move bytes around
+  private final BytesRef bytesRef2 = new BytesRef();
+  //Directory to create new Offline writer
+  private final Directory tempDir;
+  // prefix for temp files
+  private final String tempFileNamePrefix;
+
+
+
+  /**
+   * Sole constructor.
+   */
+  public BKDRadixSelector(int numDim, int bytesPerDim, int 
maxPointsSortedOffHeap, Directory tempDir, String tempFileNamePrefix) {
+this.bytesPerDim = bytesPerDim;
+this.packedBytesLength = numDim * bytesPerDim;
+this.bytesSorted = bytesPerDim + Integer.BYTES;
+this.maxPointsSortedOffHeap = maxPointsSortedOffHeap;
+this.offlineBuffer = new byte[maxPointsSortedOffHeap * (packedBytesLength 
+ Integer.BYTES)];
+this.partitionBucket = new int[bytesSorted];
+this.partitionBytes =  new byte[bytesSorted];
+this.histogram = new long[bytesSorted][HISTOGRAM_SIZE];
+this.bytesRef1.length = numDim * bytesPerDim;
+this.heapSelector = new HeapSelector(numDim, bytesPerDim);
+this.tempDir = tempDir;
+this.tempFileNamePrefix = tempFileNamePrefix;
+  }
+
+  /**
+   * Method to partition the input data. It returns the value of the dimension 
where
+   * the split happens.
+   */
+  public byte[] select(PointWriter points, PointWriter left, PointWriter 
right, long from, long to, long partitionPoint, int dim) throws IOException {
+checkArgs(from, to, partitionPoint);
+
+//If we are on heap then we just select on heap
+if (points instanceof HeapPointWriter) {
+  return heapSelect((HeapPointWriter) points, left, right, dim, 
Math.toIntExact(from), Math.toIntExact(to),  Math.toIntExact(partitionPoint), 
0);
+}
+
+//reset histogram
+for (int i = 0; i < bytesSorted; i++) {
+  Arrays.fill(histogram[i], 0);
+}
+OfflinePointWriter offlinePointWriter = (OfflinePointWriter) points;
+
+//find common prefix, it does already set histogram values if needed
+int commonPrefix = findCommonPrefix(offlinePointWriter, from, to, dim);
+
+//if all equals we just partition the data
+if (commonPrefix ==  bytesSorted) {
+  return partition(offlinePointWriter, left, right, from, to, 
partitionPoint, dim, null, commonPrefix - 1, partitionPoint);
+}
+//let's rock'n'roll
+return buildHistogramAndPartition(offlinePointWriter, null, left, right, 
from, to, partitionPoint, 0, commonPrefix, dim,0, 0);
+  }
+
+  void checkArgs(long from, long to, long middle) {
+if (middle < from) {
+  throw new IllegalArgumentException("middle must be >= from");
+}
+if (middle >= to) {
+  throw new 

[GitHub] jpountz commented on a change in pull request #556: LUCENE-8673: Use radix partitioning when merging dimensional points

2019-02-04 Thread GitBox
jpountz commented on a change in pull request #556: LUCENE-8673: Use radix 
partitioning when merging dimensional points
URL: https://github.com/apache/lucene-solr/pull/556#discussion_r253373228
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/util/bkd/OfflinePointReader.java
 ##
 @@ -74,55 +69,68 @@ public OfflinePointReader(Directory tempDir, String 
tempFileName, int packedByte
   // at another level of the BKDWriter recursion
   in = tempDir.openInput(tempFileName, IOContext.READONCE);
 }
+
 name = tempFileName;
 
 long seekFP = start * bytesPerDoc;
 in.seek(seekFP);
 countLeft = length;
-packedValue = new byte[packedBytesLength];
-this.longOrds = longOrds;
+if (reusableBuffer != null) {
+  assert reusableBuffer.length >= this.maxPointOnHeap * bytesPerDoc;
+  this.onHeapBuffer = reusableBuffer;
+} else {
+  this.onHeapBuffer = new byte[this.maxPointOnHeap * bytesPerDoc];
+}
   }
 
   @Override
   public boolean next() throws IOException {
-if (countLeft >= 0) {
-  if (countLeft == 0) {
-return false;
+if (this.pointsInBuffer == 0) {
+  if (countLeft >= 0) {
+if (countLeft == 0) {
+  return false;
+}
   }
-  countLeft--;
-}
-try {
-  in.readBytes(packedValue, 0, packedValue.length);
-} catch (EOFException eofe) {
-  assert countLeft == -1;
-  return false;
-}
-docID = in.readInt();
-if (singleValuePerDoc == false) {
-  if (longOrds) {
-ord = in.readLong();
-  } else {
-ord = in.readInt();
+  try {
+if (countLeft > maxPointOnHeap) {
+  in.readBytes(onHeapBuffer, 0, maxPointOnHeap * bytesPerDoc);
+  pointsInBuffer = maxPointOnHeap - 1;
+  countLeft -= maxPointOnHeap;
+} else {
+  in.readBytes(onHeapBuffer, 0, (int) countLeft * bytesPerDoc);
+  pointsInBuffer = Math.toIntExact(countLeft - 1);
+  countLeft = 0;
+}
+this.offset = 0;
+  } catch (EOFException eofe) {
+assert countLeft == -1;
+return false;
   }
 } else {
-  ord = docID;
+  this.pointsInBuffer--;
+  this.offset += bytesPerDoc;
 }
 return true;
   }
 
   @Override
-  public byte[] packedValue() {
-return packedValue;
+  public void packedValue(BytesRef bytesRef) {
+bytesRef.bytes = onHeapBuffer;
+bytesRef.offset = offset;
+bytesRef.length = packedValueLength;
   }
 
-  @Override
-  public long ord() {
-return ord;
+  protected void docValue(BytesRef bytesRef) {
 
 Review comment:
   Based on the naming, I thought it would only be the 4 bytes that represent 
the docID; maybe give it a more explicit name?


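The rewritten next() in the diff above amortizes I/O by pulling many records per read and then serving them from an on-heap buffer. A self-contained sketch of the same pattern over a plain InputStream (an assumption for brevity; the real reader uses Lucene's IndexInput):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;

// Each record is bytesPerDoc bytes: the packed point value followed by a
// 4-byte big-endian docID.
public class BufferedPointReader {
    private final DataInputStream in;
    private final byte[] buffer;      // holds up to maxPoints records at once
    private final int bytesPerDoc;
    private long countLeft;           // records not yet read from the stream
    private int pointsInBuffer;       // buffered records after the current one
    private int offset;               // byte offset of the current record

    BufferedPointReader(InputStream in, int bytesPerDoc, int maxPoints, long count) {
        this.in = new DataInputStream(in);
        this.bytesPerDoc = bytesPerDoc;
        this.buffer = new byte[maxPoints * bytesPerDoc];
        this.countLeft = count;
    }

    boolean next() {
        if (pointsInBuffer == 0) {
            if (countLeft == 0) {
                return false;               // stream exhausted
            }
            int maxPoints = buffer.length / bytesPerDoc;
            int toRead = (int) Math.min(maxPoints, countLeft);
            try {
                in.readFully(buffer, 0, toRead * bytesPerDoc); // one bulk read
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
            pointsInBuffer = toRead - 1;    // current record is consumed now
            countLeft -= toRead;
            offset = 0;
        } else {
            pointsInBuffer--;               // advance within the buffer
            offset += bytesPerDoc;
        }
        return true;
    }

    int docID() {
        // docID lives in the last 4 bytes of the record, big-endian
        int p = offset + bytesPerDoc - Integer.BYTES;
        return ((buffer[p] & 0xFF) << 24) | ((buffer[p + 1] & 0xFF) << 16)
             | ((buffer[p + 2] & 0xFF) << 8) | (buffer[p + 3] & 0xFF);
    }

    public static void main(String[] args) {
        byte[] data = new byte[16];
        data[7] = 1;
        data[15] = 2;  // two 8-byte records, docIDs 1 and 2
        BufferedPointReader r = new BufferedPointReader(
            new ByteArrayInputStream(data), 8, 1, 2);
        while (r.next()) {
            System.out.println(r.docID()); // prints 1 then 2
        }
    }
}
```

The bookkeeping mirrors the diff: a bulk read refills the buffer and already "consumes" the first record, so subsequent calls just bump the offset until the buffer drains.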



[GitHub] jpountz commented on a change in pull request #556: LUCENE-8673: Use radix partitioning when merging dimensional points

2019-02-04 Thread GitBox
jpountz commented on a change in pull request #556: LUCENE-8673: Use radix 
partitioning when merging dimensional points
URL: https://github.com/apache/lucene-solr/pull/556#discussion_r253372453
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/util/bkd/OfflinePointReader.java
 ##
 @@ -74,55 +69,68 @@ public OfflinePointReader(Directory tempDir, String 
tempFileName, int packedByte
   // at another level of the BKDWriter recursion
   in = tempDir.openInput(tempFileName, IOContext.READONCE);
 }
+
 name = tempFileName;
 
 long seekFP = start * bytesPerDoc;
 in.seek(seekFP);
 countLeft = length;
-packedValue = new byte[packedBytesLength];
-this.longOrds = longOrds;
+if (reusableBuffer != null) {
 
 Review comment:
   Could we require a non-null buffer instead?


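The non-null suggestion above amounts to failing fast in the constructor rather than silently allocating a fallback, which keeps ownership of the reusable buffer with the caller. A hedged sketch with hypothetical names:

```java
import java.util.Objects;

public class RequireBufferExample {
    private final byte[] onHeapBuffer;

    RequireBufferExample(byte[] reusableBuffer, int maxPoints, int bytesPerDoc) {
        // Fail fast: reject null and undersized buffers up front instead of
        // branching on null later and allocating a private fallback.
        Objects.requireNonNull(reusableBuffer, "reusableBuffer must not be null");
        if (reusableBuffer.length < maxPoints * bytesPerDoc) {
            throw new IllegalArgumentException("reusableBuffer too small");
        }
        this.onHeapBuffer = reusableBuffer;
    }

    byte[] buffer() {
        return onHeapBuffer;
    }

    public static void main(String[] args) {
        RequireBufferExample r = new RequireBufferExample(new byte[16], 2, 8);
        System.out.println(r.buffer().length); // prints 16
    }
}
```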



[GitHub] jpountz commented on a change in pull request #556: LUCENE-8673: Use radix partitioning when merging dimensional points

2019-02-04 Thread GitBox
jpountz commented on a change in pull request #556: LUCENE-8673: Use radix 
partitioning when merging dimensional points
URL: https://github.com/apache/lucene-solr/pull/556#discussion_r253369643
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/util/bkd/BKDRadixSelector.java
 ##
 @@ -0,0 +1,433 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.util.bkd;
+
+import java.io.IOException;
+import java.util.Arrays;
+
+import org.apache.lucene.store.Directory;
+import org.apache.lucene.util.BytesRef;
+import org.apache.lucene.util.FutureArrays;
+import org.apache.lucene.util.IntroSelector;
+
+/**
+ *
+ * Offline Radix selector for BKD tree.
+ *
+ *  @lucene.internal
+ * */
+public final class BKDRadixSelector {
+  //size of the histogram
+  private static final int HISTOGRAM_SIZE = 256;
+  // we store one histogram per recursion level
+  private final long[][] histogram;
+  //bytes we are sorting
 
 Review comment:
   This description better applies to bytesSorted than to bytesPerDim?





[jira] [Updated] (LUCENE-8679) Test failure in LatLonShape

2019-02-04 Thread Ignacio Vera (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Vera updated LUCENE-8679:
-
Fix Version/s: 8.0

> Test failure in LatLonShape
> ---
>
> Key: LUCENE-8679
> URL: https://issues.apache.org/jira/browse/LUCENE-8679
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ignacio Vera
>Priority: Major
> Fix For: 8.0, 7.7, master (9.0), 8.x
>
> Attachments: LUCENE-8679.patch, LUCENE-8679.patch
>
>
> Error and reproducible seed:
>  
> {code:java}
> [junit4] Suite: org.apache.lucene.document.TestLatLonShape
>    [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestLatLonShape 
> -Dtests.method=testRandomPolygonEncoding -Dtests.seed=E92F1FD44199EFBE 
> -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=no-NO 
> -Dtests.timezone=America/North_Dakota/Center -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>    [junit4] FAILURE 0.04s J2 | TestLatLonShape.testRandomPolygonEncoding <<<
>    [junit4]    > Throwable #1: java.lang.AssertionError
>    [junit4]    >        at 
> __randomizedtesting.SeedInfo.seed([E92F1FD44199EFBE:2C3EDB8695100930]:0)
>    [junit4]    >        at 
> org.apache.lucene.document.TestLatLonShape.verifyEncoding(TestLatLonShape.java:774)
>    [junit4]    >        at 
> org.apache.lucene.document.TestLatLonShape.testRandomPolygonEncoding(TestLatLonShape.java:726)
>    [junit4]    >        at java.lang.Thread.run(Thread.java:748)
>    [junit4]   2> NOTE: leaving temporary files on disk at: 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build/sandbox/test/J2/temp/lucene.document.TestLatLonShape_E92F1FD44199EFBE-001
>    [junit4]   2> NOTE: test params are: codec=Asserting(Lucene80): {}, 
> docValues:{}, maxPointsInLeafNode=1441, maxMBSortInHeap=7.577899936070286, 
> sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@419db2df),
>  locale=no-NO, timezone=America/North_Dakota/Center
>    [junit4]   2> NOTE: Linux 4.4.0-137-generic amd64/Oracle Corporation 
> 1.8.0_191 (64-bit)/cpus=4,threads=1,free=168572480,total=309854208
>    [junit4]   2> NOTE: All tests run in this JVM: [TestIntervals, 
> TestLatLonLineShapeQueries, TestLatLonShape]
>    [junit4] Completed [10/27 (1!)] on J2 in 14.55s, 25 tests, 1 failure, 1 
> skipped <<< FAILURES!{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8679) Test failure in LatLonShape

2019-02-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16759695#comment-16759695
 ] 

ASF subversion and git services commented on LUCENE-8679:
-

Commit 8c831daf4eb41153c25ddb152501ab5bae3ea3d5 in lucene-solr's branch 
refs/heads/branch_7_7 from Ignacio Vera
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=8c831da ]

LUCENE-8679: return WITHIN in EdgeTree#relateTriangle only when polygon and 
triangle share one edge








[jira] [Commented] (LUCENE-8679) Test failure in LatLonShape

2019-02-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16759693#comment-16759693
 ] 

ASF subversion and git services commented on LUCENE-8679:
-

Commit f3c585ba28e0fef902a6be1742660ee0ebeca35e in lucene-solr's branch 
refs/heads/branch_8_0 from Ignacio Vera
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=f3c585b ]

LUCENE-8679: return WITHIN in EdgeTree#relateTriangle only when polygon and 
triangle share one edge




