[JENKINS] Lucene-Solr-8.x-Solaris (64bit/jdk1.8.0) - Build # 4 - Unstable!

2019-01-11 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Solaris/4/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestCloudRecovery2.test

Error Message:
Error from server at http://127.0.0.1:44362/solr: 3 Async exceptions during 
distributed update:  java.net.ConnectException: Connection refused 
java.net.ConnectException: Connection refused java.net.ConnectException: 
Connection refused

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:44362/solr: 3 Async exceptions during 
distributed update: 
java.net.ConnectException: Connection refused
java.net.ConnectException: Connection refused
java.net.ConnectException: Connection refused
at 
__randomizedtesting.SeedInfo.seed([FD138BE537932327:7547B43F996F4EDF]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:650)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:256)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:245)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:237)
at 
org.apache.solr.cloud.TestCloudRecovery2.test(TestCloudRecovery2.java:67)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
   

[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 430 - Still Unstable

2019-01-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/430/

2 tests failed.
FAILED:  org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:39788/kiilu/forceleader_test_collection

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: 
http://127.0.0.1:39788/kiilu/forceleader_test_collection
at 
__randomizedtesting.SeedInfo.seed([D7C3616BE0F39CD1:315455ABD97165B0]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:484)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:414)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1110)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:504)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:479)
at 
org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader(ForceLeaderTest.java:294)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1063)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1035)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Comment Edited] (LUCENE-8635) Lazy loading Lucene FST offheap using mmap

2019-01-11 Thread Ankit Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740855#comment-16740855
 ] 

Ankit Jain edited comment on LUCENE-8635 at 1/12/19 4:58 AM:
-

The Excel sheet is big, so pasting it here might not help much. You have a good 
point about moving FSTs off-heap in the default codec, as we can always preload 
the mmap'd file during index open, as demonstrated 
[here|https://www.elastic.co/guide/en/elasticsearch/reference/master/_pre_loading_data_into_the_file_system_cache.html]
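
For plain Lucene (outside Elasticsearch), the equivalent knob is, as far as I can 
tell, MMapDirectory's preload flag, which the linked Elasticsearch setting builds 
on. A minimal sketch (the index path below is illustrative only):

{code:java}
import java.nio.file.Paths;

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.store.MMapDirectory;

public class PreloadExample {
  public static void main(String[] args) throws Exception {
    // Illustrative path; point this at a real index directory.
    MMapDirectory dir = new MMapDirectory(Paths.get("/tmp/index"));
    // Touch all pages of newly mapped files up front, so the first
    // terms/FST access does not hit cold disk.
    dir.setPreload(true);
    try (DirectoryReader reader = DirectoryReader.open(dir)) {
      System.out.println("maxDoc=" + reader.maxDoc());
    }
    dir.close();
  }
}
{code}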

 

I ran the default Lucene test suite and a couple of tests seem to fail, though 
they don't appear to have anything to do with my change:

 

   [junit4] Tests with failures [seed: 1D3ADDF6AE377902]:

   [junit4]   - 
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest.testInactiveShardCleanup

   [junit4]   - 
org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.testTrigger

   [junit4] Execution time total: 1 hour 12 minutes 40 seconds

   [junit4] Tests summary: 833 suites (7 ignored), 4024 tests, 2 failures, 286 
ignored (153 assumptions)

 

UPDATE: The tests passed after retrying individually. 

 


was (Author: akjain):
The Excel sheet is big, so pasting it here might not help much. You have a good 
point about moving FSTs off-heap in the default codec, as we can always preload 
the mmap'd file during index open, as demonstrated 
[here|https://www.elastic.co/guide/en/elasticsearch/reference/master/_pre_loading_data_into_the_file_system_cache.html]

 

I ran the default Lucene test suite and a couple of tests seem to fail, though 
they don't appear to have anything to do with my change:

 

   [junit4] Tests with failures [seed: 1D3ADDF6AE377902]:

   [junit4]   - 
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest.testInactiveShardCleanup

   [junit4]   - 
org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.testTrigger

   [junit4]

   [junit4]

   [junit4] JVM J0:     1.40 ..  4359.18 =  4357.78s

   [junit4] JVM J1:     1.40 ..  4359.35 =  4357.95s

   [junit4] JVM J2:     1.40 ..  4359.30 =  4357.90s

   [junit4] Execution time total: 1 hour 12 minutes 40 seconds

   [junit4] Tests summary: 833 suites (7 ignored), 4024 tests, 2 failures, 286 
ignored (153 assumptions)

 

Details for failing tests

 

NOTE: reproduce with: ant test  -Dtestcase=ScheduledTriggerTest 
-Dtests.method=testTrigger -Dtests.seed=1D3ADDF6AE377902 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=mr-IN -Dtests.timezone=America/St_Lucia 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

   [junit4] FAILURE 9.03s J2 | ScheduledTriggerTest.testTrigger <<<

   [junit4]    > Throwable #1: java.lang.AssertionError: expected:<3> but 
was:<2>

   [junit4]    >        at 
__randomizedtesting.SeedInfo.seed([1D3ADDF6AE377902:7EF1EB7437F80A2F]:0)

   [junit4]    >        at 
org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.scheduledTriggerTest(ScheduledTriggerTest.java:113)

   [junit4]    >        at 
org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.testTrigger(ScheduledTriggerTest.java:66)

   [junit4]    >        at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

   [junit4]    >        at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

   [junit4]    >        at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

   [junit4]    >        at 
java.base/java.lang.reflect.Method.invoke(Method.java:564)

   [junit4]    >        at java.base/java.lang.Thread.run(Thread.java:844)

 

NOTE: reproduce with: ant test  -Dtestcase=ScheduledMaintenanceTriggerTest 
-Dtests.method=testInactiveShardCleanup -Dtests.seed=1D3ADDF6AE377902 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=ha 
-Dtests.timezone=America/Nome -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

   [junit4] FAILURE 2.01s J0 | 
ScheduledMaintenanceTriggerTest.testInactiveShardCleanup <<<

at __randomizedtesting.SeedInfo.seed([1D3ADDF6AE377902:161D84CF745E09]:0)

   [junit4]    >        at 
org.apache.solr.cloud.CloudTestUtils.waitForState(CloudTestUtils.java:70)

   [junit4]    >        at 
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest.testInactiveShardCleanup(ScheduledMaintenanceTriggerTest.java:167)

   [junit4]    >        at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

   [junit4]    >        at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

   [junit4]    >        at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

   [junit4]    >        at 
java.base/java.lang.reflect.Method.invoke(Method.java:564)

   [junit4]    >        at java.base/java.lang.Thread.run(Thread.java:844)

   [junit4]    > Caused by: 

[jira] [Commented] (LUCENE-8635) Lazy loading Lucene FST offheap using mmap

2019-01-11 Thread Ankit Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740997#comment-16740997
 ] 

Ankit Jain commented on LUCENE-8635:


Thanks for the tip, Erick. I ran the failing tests individually and they passed!

> Lazy loading Lucene FST offheap using mmap
> --
>
> Key: LUCENE-8635
> URL: https://issues.apache.org/jira/browse/LUCENE-8635
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/FSTs
> Environment: I used below setup for es_rally tests:
> single node i3.xlarge running ES 6.5
> es_rally was running on another i3.xlarge instance
>Reporter: Ankit Jain
>Priority: Major
> Attachments: offheap.patch, rally_benchmark.xlsx
>
>
> Currently, the FST loads all terms into heap memory during index open. This 
> causes frequent JVM OOM issues if the terms data gets big. A better way 
> would be to lazily load the FST using mmap, which ensures that only the 
> required terms get loaded into memory.
>  
> Lucene can expose an API for providing a list of fields whose terms should be 
> loaded off-heap. I'm planning to take the following approach:
>  # Add a boolean property fstOffHeap in FieldInfo
>  # Pass the list of off-heap fields to Lucene during index open (ALL can be a 
> special keyword for loading all fields off-heap)
>  # Initialize the fstOffHeap property during Lucene index open
>  # FieldReader invokes the default FST constructor or the off-heap constructor 
> based on the fstOffHeap field
>  
> I created a patch (that loads all fields off-heap), ran some benchmarks with 
> es_rally, and the results look good.
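
A rough sketch of what steps 2-4 above could look like; all class and method 
names below are illustrative stand-ins, not the actual Lucene code:

{code:java}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

/** Illustrative sketch only -- not the real Lucene classes or API. */
public class FstLoadingSketch {

  /** Stand-in for the per-field flag proposed in step 1. */
  static final class FieldInfoSketch {
    final String name;
    final boolean fstOffHeap;
    FieldInfoSketch(String name, boolean fstOffHeap) {
      this.name = name;
      this.fstOffHeap = fstOffHeap;
    }
  }

  /** Mirrors step 4: the reader picks on-heap vs. off-heap loading per field. */
  static String openTerms(FieldInfoSketch fi) {
    if (fi.fstOffHeap) {
      // Off-heap: leave the FST bytes in the mmap'd file and read them lazily.
      return "off-heap (lazy, mmap-backed) terms for field '" + fi.name + "'";
    }
    // Default: copy the FST bytes onto the Java heap when the index is opened.
    return "on-heap terms for field '" + fi.name + "'";
  }

  public static void main(String[] args) {
    // Step 2: the caller names the fields whose terms should stay off-heap.
    Set<String> offHeapFields = new HashSet<>(Arrays.asList("_id", "body"));
    for (String field : Arrays.asList("_id", "body", "title")) {
      // Step 3: set the flag while opening the index.
      FieldInfoSketch fi = new FieldInfoSketch(field, offHeapFields.contains(field));
      System.out.println(openTerms(fi));
    }
  }
}
{code}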



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 2255 - Unstable!

2019-01-11 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/2255/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.metrics.reporters.SolrJmxReporterCloudTest

Error Message:
Error starting up MiniSolrCloudCluster

Stack Trace:
java.lang.Exception: Error starting up MiniSolrCloudCluster
at __randomizedtesting.SeedInfo.seed([EF8FF10BD562E710]:0)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.checkForExceptions(MiniSolrCloudCluster.java:619)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.(MiniSolrCloudCluster.java:275)
at 
org.apache.solr.cloud.SolrCloudTestCase$Builder.build(SolrCloudTestCase.java:206)
at 
org.apache.solr.cloud.SolrCloudTestCase$Builder.configure(SolrCloudTestCase.java:198)
at 
org.apache.solr.metrics.reporters.SolrJmxReporterCloudTest.setupCluster(SolrJmxReporterCloudTest.java:58)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:878)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Suppressed: java.lang.RuntimeException: Jetty/Solr unresponsive
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:493)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:451)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.startJettySolrRunner(MiniSolrCloudCluster.java:434)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.lambda$new$0(MiniSolrCloudCluster.java:269)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
... 1 more


FAILED:  
junit.framework.TestSuite.org.apache.solr.metrics.reporters.SolrJmxReporterCloudTest

Error Message:
10 threads leaked from SUITE scope at 
org.apache.solr.metrics.reporters.SolrJmxReporterCloudTest: 1) 
Thread[id=55416, name=qtp740833930-55416, state=RUNNABLE, 
group=TGRP-SolrJmxReporterCloudTest] at 
sun.nio.ch.DevPollArrayWrapper.poll0(Native Method) at 
sun.nio.ch.DevPollArrayWrapper.poll(DevPollArrayWrapper.java:223) at 
sun.nio.ch.DevPollSelectorImpl.doSelect(DevPollSelectorImpl.java:98) at 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at 

Re: Welcome Nick Knize to the PMC

2019-01-11 Thread Anshum Gupta
Congratulations and welcome, Nick!

On Wed, Jan 9, 2019 at 7:12 AM Adrien Grand  wrote:

> I am pleased to announce that Nick Knize has accepted the PMC's
> invitation to join.
>
> Welcome Nick!
>
> --
> Adrien
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>

-- 
Anshum Gupta


[JENKINS] Lucene-Solr-Tests-8.x - Build # 1 - Unstable

2019-01-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-8.x/1/

3 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimExtremeIndexing.testScaleUp

Error Message:
{numFound=98880,start=0,docs=[]} expected:<10> but was:<98880>

Stack Trace:
java.lang.AssertionError: {numFound=98880,start=0,docs=[]} 
expected:<10> but was:<98880>
at 
__randomizedtesting.SeedInfo.seed([C7158A17B77797DB:E64BCCB5BB59497A]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at 
org.apache.solr.cloud.autoscaling.sim.TestSimExtremeIndexing.testScaleUp(TestSimExtremeIndexing.java:133)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.client.solrj.TestLBHttp2SolrClient.testTwoServers

Error Message:
Timeout occured while waiting response from server at: 

[jira] [Commented] (LUCENE-8635) Lazy loading Lucene FST offheap using mmap

2019-01-11 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740932#comment-16740932
 ] 

Erick Erickson commented on LUCENE-8635:


Ankit:

 

The autoscaling tests have been failing intermittently for a while. If you 
can run those tests independently and they succeed, I wouldn't worry about 
them.

"Run those tests independently" in this case just means executing the "reproduce 
with" line verbatim; cut and paste it, e.g.:

 

ant test  -Dtestcase=ScheduledMaintenanceTriggerTest 
-Dtests.method=testInactiveShardCleanup -Dtests.seed=1D3ADDF6AE377902 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=ha 
-Dtests.timezone=America/Nome -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

 

Best,

Erick

> Lazy loading Lucene FST offheap using mmap
> --
>
> Key: LUCENE-8635
> URL: https://issues.apache.org/jira/browse/LUCENE-8635
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/FSTs
> Environment: I used below setup for es_rally tests:
> single node i3.xlarge running ES 6.5
> es_rally was running on another i3.xlarge instance
>Reporter: Ankit Jain
>Priority: Major
> Attachments: offheap.patch, rally_benchmark.xlsx
>
>
> Currently, the FST loads all terms into heap memory during index open. This 
> causes frequent JVM OOM issues if the terms data gets big. A better way 
> would be to lazily load the FST using mmap, which ensures that only the 
> required terms get loaded into memory.
>  
> Lucene can expose an API for providing a list of fields whose terms should be 
> loaded off-heap. I'm planning to take the following approach:
>  # Add a boolean property fstOffHeap in FieldInfo
>  # Pass the list of off-heap fields to Lucene during index open (ALL can be a 
> special keyword for loading all fields off-heap)
>  # Initialize the fstOffHeap property during Lucene index open
>  # FieldReader invokes the default FST constructor or the off-heap constructor 
> based on the fstOffHeap field
>  
> I created a patch (that loads all fields off-heap), ran some benchmarks with 
> es_rally, and the results look good.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-8635) Lazy loading Lucene FST offheap using mmap

2019-01-11 Thread Ankit Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740855#comment-16740855
 ] 

Ankit Jain edited comment on LUCENE-8635 at 1/12/19 12:08 AM:
--

The Excel sheet is big, so pasting it here might not help much. You have a good 
point about moving FSTs off-heap in the default codec, as we can always preload 
the mmap'd file during index open, as demonstrated 
[here|https://www.elastic.co/guide/en/elasticsearch/reference/master/_pre_loading_data_into_the_file_system_cache.html]

 

I ran the default Lucene test suite and a couple of tests seem to fail, though 
they don't appear to have anything to do with my change:

 

   [junit4] Tests with failures [seed: 1D3ADDF6AE377902]:

   [junit4]   - 
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest.testInactiveShardCleanup

   [junit4]   - 
org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.testTrigger

   [junit4]

   [junit4]

   [junit4] JVM J0:     1.40 ..  4359.18 =  4357.78s

   [junit4] JVM J1:     1.40 ..  4359.35 =  4357.95s

   [junit4] JVM J2:     1.40 ..  4359.30 =  4357.90s

   [junit4] Execution time total: 1 hour 12 minutes 40 seconds

   [junit4] Tests summary: 833 suites (7 ignored), 4024 tests, 2 failures, 286 
ignored (153 assumptions)

 

Details for failing tests

 

NOTE: reproduce with: ant test  -Dtestcase=ScheduledTriggerTest 
-Dtests.method=testTrigger -Dtests.seed=1D3ADDF6AE377902 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=mr-IN -Dtests.timezone=America/St_Lucia 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

   [junit4] FAILURE 9.03s J2 | ScheduledTriggerTest.testTrigger <<<

   [junit4]    > Throwable #1: java.lang.AssertionError: expected:<3> but 
was:<2>

   [junit4]    >        at 
__randomizedtesting.SeedInfo.seed([1D3ADDF6AE377902:7EF1EB7437F80A2F]:0)

   [junit4]    >        at 
org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.scheduledTriggerTest(ScheduledTriggerTest.java:113)

   [junit4]    >        at 
org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.testTrigger(ScheduledTriggerTest.java:66)

   [junit4]    >        at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

   [junit4]    >        at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

   [junit4]    >        at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

   [junit4]    >        at 
java.base/java.lang.reflect.Method.invoke(Method.java:564)

   [junit4]    >        at java.base/java.lang.Thread.run(Thread.java:844)

 

NOTE: reproduce with: ant test  -Dtestcase=ScheduledMaintenanceTriggerTest 
-Dtests.method=testInactiveShardCleanup -Dtests.seed=1D3ADDF6AE377902 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=ha 
-Dtests.timezone=America/Nome -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

   [junit4] FAILURE 2.01s J0 | 
ScheduledMaintenanceTriggerTest.testInactiveShardCleanup <<<

at __randomizedtesting.SeedInfo.seed([1D3ADDF6AE377902:161D84CF745E09]:0)

   [junit4]    >        at 
org.apache.solr.cloud.CloudTestUtils.waitForState(CloudTestUtils.java:70)

   [junit4]    >        at 
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest.testInactiveShardCleanup(ScheduledMaintenanceTriggerTest.java:167)

   [junit4]    >        at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

   [junit4]    >        at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

   [junit4]    >        at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

   [junit4]    >        at 
java.base/java.lang.reflect.Method.invoke(Method.java:564)

   [junit4]    >        at java.base/java.lang.Thread.run(Thread.java:844)

   [junit4]    > Caused by: java.util.concurrent.TimeoutException: last state: 
DocCollection(ScheduledMaintenanceTriggerTest_collection1//clusterstate.json/6)={

 


was (Author: akjain):
The Excel sheet is pretty big, so I'm not sure pasting it here is a good idea. You 
have a good point about moving FSTs off-heap in the default codec, as we can 
always preload the mmap'd file during index open, as demonstrated 
[here|https://www.elastic.co/guide/en/elasticsearch/reference/master/_pre_loading_data_into_the_file_system_cache.html]

 

 

I ran the test suite and a couple of tests seem to fail, though they don't appear 
to have anything to do with my change:

 

   [junit4] Tests with failures [seed: 1D3ADDF6AE377902]:

   [junit4]   - 
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest.testInactiveShardCleanup

   [junit4]   - 
org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.testTrigger

   [junit4]

   [junit4]

   [junit4] JVM J0:     1.40 ..  4359.18 =  4357.78s

   [junit4] JVM J1:     1.40 ..  

[jira] [Comment Edited] (LUCENE-8635) Lazy loading Lucene FST offheap using mmap

2019-01-11 Thread Ankit Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740855#comment-16740855
 ] 

Ankit Jain edited comment on LUCENE-8635 at 1/12/19 12:07 AM:
--

The Excel sheet is pretty big, so I'm not sure pasting it here is a good idea. You 
have a good point about moving FSTs off-heap in the default codec, as we can 
always preload the mmap'd file during index open, as demonstrated 
[here|https://www.elastic.co/guide/en/elasticsearch/reference/master/_pre_loading_data_into_the_file_system_cache.html]

 

 

I ran the test suite and a couple of tests seem to fail, though they don't appear 
to have anything to do with my change:

 

   [junit4] Tests with failures [seed: 1D3ADDF6AE377902]:

   [junit4]   - 
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest.testInactiveShardCleanup

   [junit4]   - 
org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.testTrigger

   [junit4]

   [junit4]

   [junit4] JVM J0:     1.40 ..  4359.18 =  4357.78s

   [junit4] JVM J1:     1.40 ..  4359.35 =  4357.95s

   [junit4] JVM J2:     1.40 ..  4359.30 =  4357.90s

   [junit4] Execution time total: 1 hour 12 minutes 40 seconds

   [junit4] Tests summary: 833 suites (7 ignored), 4024 tests, 2 failures, 286 
ignored (153 assumptions)

 

Details for failing tests

 

NOTE: reproduce with: ant test  -Dtestcase=ScheduledTriggerTest 
-Dtests.method=testTrigger -Dtests.seed=1D3ADDF6AE377902 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=mr-IN -Dtests.timezone=America/St_Lucia 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

   [junit4] FAILURE 9.03s J2 | ScheduledTriggerTest.testTrigger <<<

   [junit4]    > Throwable #1: java.lang.AssertionError: expected:<3> but 
was:<2>

   [junit4]    >        at 
__randomizedtesting.SeedInfo.seed([1D3ADDF6AE377902:7EF1EB7437F80A2F]:0)

   [junit4]    >        at 
org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.scheduledTriggerTest(ScheduledTriggerTest.java:113)

   [junit4]    >        at 
org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.testTrigger(ScheduledTriggerTest.java:66)

   [junit4]    >        at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

   [junit4]    >        at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

   [junit4]    >        at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

   [junit4]    >        at 
java.base/java.lang.reflect.Method.invoke(Method.java:564)

   [junit4]    >        at java.base/java.lang.Thread.run(Thread.java:844)

 

NOTE: reproduce with: ant test  -Dtestcase=ScheduledMaintenanceTriggerTest 
-Dtests.method=testInactiveShardCleanup -Dtests.seed=1D3ADDF6AE377902 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=ha 
-Dtests.timezone=America/Nome -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

   [junit4] FAILURE 2.01s J0 | 
ScheduledMaintenanceTriggerTest.testInactiveShardCleanup <<<

at __randomizedtesting.SeedInfo.seed([1D3ADDF6AE377902:161D84CF745E09]:0)

   [junit4]    >        at 
org.apache.solr.cloud.CloudTestUtils.waitForState(CloudTestUtils.java:70)

   [junit4]    >        at 
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest.testInactiveShardCleanup(ScheduledMaintenanceTriggerTest.java:167)

   [junit4]    >        at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

   [junit4]    >        at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

   [junit4]    >        at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

   [junit4]    >        at 
java.base/java.lang.reflect.Method.invoke(Method.java:564)

   [junit4]    >        at java.base/java.lang.Thread.run(Thread.java:844)

   [junit4]    > Caused by: java.util.concurrent.TimeoutException: last state: 
DocCollection(ScheduledMaintenanceTriggerTest_collection1//clusterstate.json/6)={

 


was (Author: akjain):
I ran the test suite and a couple of tests seem to fail, though they don't appear 
to have anything to do with my change:

 

   [junit4] Tests with failures [seed: 1D3ADDF6AE377902]:

   [junit4]   - 
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest.testInactiveShardCleanup

   [junit4]   - 
org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.testTrigger

   [junit4]

   [junit4]

   [junit4] JVM J0:     1.40 ..  4359.18 =  4357.78s

   [junit4] JVM J1:     1.40 ..  4359.35 =  4357.95s

   [junit4] JVM J2:     1.40 ..  4359.30 =  4357.90s

   [junit4] Execution time total: 1 hour 12 minutes 40 seconds

   [junit4] Tests summary: 833 suites (7 ignored), 4024 tests, 2 failures, 286 
ignored (153 assumptions)

 

Details for failing tests

 

 NOTE: reproduce with: ant test  

[ANNOUNCE] Apache PyLucene 7.6.0

2019-01-11 Thread Andi Vajda



I am pleased to announce the availability of Apache PyLucene 7.6.0.

Apache PyLucene, a subproject of Apache Lucene, is a Python extension for
accessing Apache Lucene Core. Its goal is to allow you to use Lucene's text
indexing and searching capabilities from Python. It is API compatible with
the latest version of Lucene 7.x Core, 7.6.0.

For changes in this release, please review:
http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_7_6_0/CHANGES
http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_7_6_0/jcc/CHANGES
http://lucene.apache.org/core/7_6_0/changes/Changes.html

Apache PyLucene is available from the following download page:
http://www.apache.org/dyn/closer.cgi/lucene/pylucene/pylucene-7.6.0-src.tar.gz

When downloading from a mirror site, please remember to verify the downloads
using signatures found on the Apache site:
https://dist.apache.org/repos/dist/release/lucene/pylucene/KEYS

For more information on Apache PyLucene, visit the project home page:
  http://lucene.apache.org/pylucene

Andi..


Re: [VOTE] Release PyLucene 7.6.0 (rc1) (fwd)

2019-01-11 Thread Andi Vajda



On Fri, 11 Jan 2019, Adrien Grand wrote:


+1


Thank you, Adrien, this vote has now passed !

Andi..



On Fri, Jan 11, 2019 at 10:34 AM Andi Vajda  wrote:



  Dear Lucene PMC,

As per Apache release rules, three PMC votes are necessary to make an
official release. The PyLucene 7.6.0 release vote needs one more PMC vote
to be effective.

Please, consider voting for releasing the PyLucene 7.6.0 rc1 candidate
announced below.

Thank you !

Andi..

-- Forwarded message --
Date: Fri, 4 Jan 2019 13:59:31 -0800 (PST)
From: Andi Vajda 
To: pylucene-dev@lucene.apache.org
Cc: gene...@lucene.apache.org
Subject: [VOTE] Release PyLucene 7.6.0 (rc1)


The PyLucene 7.6.0 (rc1) release tracking the recent release of
Apache Lucene 7.6.0 is ready.

A release candidate is available from:
   https://dist.apache.org/repos/dist/dev/lucene/pylucene/7.6.0-rc1/

PyLucene 7.6.0 is built with JCC 3.4 included in these release artifacts.

JCC 3.4 supports Python 3.3+ (in addition to Python 2.3+).
PyLucene may be built with Python 2 or Python 3.

Please vote to release these artifacts as PyLucene 7.6.0.
Anyone interested in this release can and should vote !

Thanks !

Andi..

ps: the KEYS file for PyLucene release signing is at:
https://dist.apache.org/repos/dist/release/lucene/pylucene/KEYS
https://dist.apache.org/repos/dist/dev/lucene/pylucene/KEYS

pps: here is my +1




--
Adrien



[JENKINS] Lucene-Solr-8.x-Linux (64bit/jdk-10.0.1) - Build # 17 - Still Unstable!

2019-01-11 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/17/
Java: 64bit/jdk-10.0.1 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.OverseerRolesTest.testOverseerRole

Error Message:
Timed out waiting for overseer state change

Stack Trace:
java.lang.AssertionError: Timed out waiting for overseer state change
at 
__randomizedtesting.SeedInfo.seed([C70CCF665804FFEE:26C732F263B7C93F]:0)
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.solr.cloud.OverseerRolesTest.waitForNewOverseer(OverseerRolesTest.java:63)
at 
org.apache.solr.cloud.OverseerRolesTest.testOverseerRole(OverseerRolesTest.java:145)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 13037 lines...]
   [junit4] Suite: org.apache.solr.cloud.OverseerRolesTest
   [junit4]   2> Creating dataDir: 

Re: Welcome Nick Knize to the PMC

2019-01-11 Thread Tomás Fernández Löbbe
Welcome Nick!

On Fri, Jan 11, 2019 at 11:51 AM Joel Bernstein  wrote:

> Welcome Nick!
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
>
> On Fri, Jan 11, 2019 at 2:24 PM Michael McCandless <
> luc...@mikemccandless.com> wrote:
>
>> Welcome Nick!!
>>
>> Mike
>>
>> On Wed, Jan 9, 2019 at 10:12 AM Adrien Grand  wrote:
>>
>>> I am pleased to announce that Nick Knize has accepted the PMC's
>>> invitation to join.
>>>
>>> Welcome Nick!
>>>
>>> --
>>> Adrien
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>> --
>> Mike McCandless
>>
>> http://blog.mikemccandless.com
>>
>


[jira] [Commented] (LUCENE-8635) Lazy loading Lucene FST offheap using mmap

2019-01-11 Thread Ankit Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740855#comment-16740855
 ] 

Ankit Jain commented on LUCENE-8635:


I ran the test suite and a couple of tests seem to fail, though they don't appear 
to have anything to do with my change:

 

   [junit4] Tests with failures [seed: 1D3ADDF6AE377902]:

   [junit4]   - 
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest.testInactiveShardCleanup

   [junit4]   - 
org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.testTrigger

   [junit4]

   [junit4]

   [junit4] JVM J0:     1.40 ..  4359.18 =  4357.78s

   [junit4] JVM J1:     1.40 ..  4359.35 =  4357.95s

   [junit4] JVM J2:     1.40 ..  4359.30 =  4357.90s

   [junit4] Execution time total: 1 hour 12 minutes 40 seconds

   [junit4] Tests summary: 833 suites (7 ignored), 4024 tests, 2 failures, 286 
ignored (153 assumptions)

 

Details for failing tests

 

 NOTE: reproduce with: ant test  -Dtestcase=ScheduledTriggerTest 
-Dtests.method=testTrigger -Dtests.seed=1D3ADDF6AE377902 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=mr-IN -Dtests.timezone=America/St_Lucia 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

   [junit4] FAILURE 9.03s J2 | ScheduledTriggerTest.testTrigger <<<

   [junit4]    > Throwable #1: java.lang.AssertionError: expected:<3> but 
was:<2>

   [junit4]    >        at 
__randomizedtesting.SeedInfo.seed([1D3ADDF6AE377902:7EF1EB7437F80A2F]:0)

   [junit4]    >        at 
org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.scheduledTriggerTest(ScheduledTriggerTest.java:113)

   [junit4]    >        at 
org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.testTrigger(ScheduledTriggerTest.java:66)

   [junit4]    >        at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

   [junit4]    >        at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

   [junit4]    >        at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

   [junit4]    >        at 
java.base/java.lang.reflect.Method.invoke(Method.java:564)

   [junit4]    >        at java.base/java.lang.Thread.run(Thread.java:844)

 

NOTE: reproduce with: ant test  -Dtestcase=ScheduledMaintenanceTriggerTest 
-Dtests.method=testInactiveShardCleanup -Dtests.seed=1D3ADDF6AE377902 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=ha 
-Dtests.timezone=America/Nome -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

   [junit4] FAILURE 2.01s J0 | 
ScheduledMaintenanceTriggerTest.testInactiveShardCleanup <<<

at __randomizedtesting.SeedInfo.seed([1D3ADDF6AE377902:161D84CF745E09]:0)

   [junit4]    >        at 
org.apache.solr.cloud.CloudTestUtils.waitForState(CloudTestUtils.java:70)

   [junit4]    >        at 
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest.testInactiveShardCleanup(ScheduledMaintenanceTriggerTest.java:167)

   [junit4]    >        at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

   [junit4]    >        at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

   [junit4]    >        at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

   [junit4]    >        at 
java.base/java.lang.reflect.Method.invoke(Method.java:564)

   [junit4]    >        at java.base/java.lang.Thread.run(Thread.java:844)

   [junit4]    > Caused by: java.util.concurrent.TimeoutException: last state: 
DocCollection(ScheduledMaintenanceTriggerTest_collection1//clusterstate.json/6)={

 

> Lazy loading Lucene FST offheap using mmap
> --
>
> Key: LUCENE-8635
> URL: https://issues.apache.org/jira/browse/LUCENE-8635
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/FSTs
> Environment: I used below setup for es_rally tests:
> single node i3.xlarge running ES 6.5
> es_rally was running on another i3.xlarge instance
>Reporter: Ankit Jain
>Priority: Major
> Attachments: offheap.patch, rally_benchmark.xlsx
>
>
> Currently, the FST loads all terms into heap memory during index open. This 
> causes frequent JVM OOM issues if the terms data gets big. A better way 
> would be to lazily load the FST using mmap, which ensures that only the 
> required terms get loaded into memory.
>  
> Lucene can expose an API for providing a list of fields whose terms should be 
> loaded off-heap. I'm planning to take the following approach:
>  # Add a boolean property fstOffHeap in FieldInfo
>  # Pass the list of off-heap fields to Lucene during index open (ALL can be a 
> special keyword for loading all fields off-heap)
>  # Initialize the fstOffHeap property during Lucene index open
>  # 

[jira] [Commented] (LUCENE-8585) Create jump-tables for DocValues at index-time

2019-01-11 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740841#comment-16740841
 ] 

Adrien Grand commented on LUCENE-8585:
--

Thanks, Toke, for all the iterations. There are still some comments with minor 
suggestions that would be good to fold in, but overall it looks good to me.

bq. there are already block-spanning tests in place for the lucene 80 codec, so 
this is "just" about coverage

Do these tests also exercise the logic that only uses the jump tables when the 
target doc ID is sufficiently far ahead? Also, can we remove the commented-out 
tests in the base test case?

> Create jump-tables for DocValues at index-time
> --
>
> Key: LUCENE-8585
> URL: https://issues.apache.org/jira/browse/LUCENE-8585
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Affects Versions: 8.0
>Reporter: Toke Eskildsen
>Priority: Minor
>  Labels: performance
> Attachments: LUCENE-8585.patch, LUCENE-8585.patch, 
> make_patch_lucene8585.sh
>
>  Time Spent: 9h 40m
>  Remaining Estimate: 0h
>
> As noted in LUCENE-7589, lookup of DocValues should use jump-tables to avoid 
> long iterative walks. This is implemented in LUCENE-8374 at search-time 
> (first request for DocValues from a field in a segment), with the benefit of 
> working without changes to existing Lucene 7 indexes and the downside of 
> introducing a startup time penalty and a memory overhead.
> As discussed in LUCENE-8374, the codec should be updated to create these 
> jump-tables at index time. This eliminates the segment-open time & memory 
> penalties, with the potential downside of increasing index-time for DocValues.
> The three elements of LUCENE-8374 should be transferable to index-time 
> without much alteration of the core structures:
>  * {{IndexedDISI}} block offset and index skips: A {{long}} (64 bits) for 
> every 65536 documents, containing the offset of the block in 33 bits and the 
> index (number of set bits) up to the block in 31 bits.
>  It can be built sequentially and should be stored as a simple sequence of 
> consecutive longs for caching of lookups.
>  As it is fairly small, relative to document count, it might be better to 
> simply memory cache it?
>  * {{IndexedDISI}} DENSE (> 4095, < 65536 set bits) blocks: A {{short}} (16 
> bits) for every 8 {{longs}} (512 bits) for a total of 256 bytes/DENSE_block. 
> Each {{short}} represents the number of set bits up to right before the 
> corresponding sub-block of 512 docIDs.
>  The {{shorts}} can be computed sequentially or when the DENSE block is 
> flushed (probably the easiest). They should be stored as a simple sequence of 
> consecutive shorts for caching of lookups, one logically independent sequence 
> for each DENSE block. The logical position would be one sequence at the start 
> of every DENSE block.
>  Whether it is best to read all the 16 {{shorts}} up front when a DENSE block 
> is accessed or whether it is best to only read any individual {{short}} when 
> needed is not clear at this point.
>  * Variable Bits Per Value: A {{long}} (64 bits) for every 16384 numeric 
> values. Each {{long}} holds the offset to the corresponding block of values.
>  The offsets can be computed sequentially and should be stored as a simple 
> sequence of consecutive {{longs}} for caching of lookups.
> The vBPV-offsets have the largest space overhead of the 3 jump-tables, and a 
> lot of the 64 bits in each long are not used for most indexes. They could be 
> represented as a simple {{PackedInts}} sequence or {{MonotonicLongValues}}, 
> with the downsides of a potential lookup-time overhead and the need for doing 
> the compression after all offsets have been determined.
> I have no experience with the codec-parts responsible for creating 
> index-structures. I'm quite willing to take a stab at this, although I 
> probably won't do much about it before January 2019. Should anyone else wish 
> to adopt this JIRA-issue or co-work on it, I'll be happy to share.
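
As a concrete illustration of the first jump-table described above (one {{long}} 
per 65536 documents, 33 bits of block offset plus 31 bits of cumulative index), 
the packing could look roughly like this; the exact bit layout is an assumption 
for illustration, not necessarily what the patch does:

{code:java}
/** Illustrative packing of a jump-table entry: 33 bits of block offset in the
 *  high bits and 31 bits of cumulative index (set-bit count) in the low bits.
 *  The concrete layout is an assumption for illustration only. */
public class JumpTableEntrySketch {

  private static final int INDEX_BITS = 31;               // index fits in 31 bits (< 2^31 docs)
  private static final long INDEX_MASK = (1L << INDEX_BITS) - 1;

  /** Pack a block offset (< 2^33) and the number of set bits before the block. */
  static long pack(long blockOffset, int indexBeforeBlock) {
    if (blockOffset >= (1L << 33) || indexBeforeBlock < 0) {
      throw new IllegalArgumentException("out of range");
    }
    return (blockOffset << INDEX_BITS) | (indexBeforeBlock & INDEX_MASK);
  }

  static long offset(long entry) {
    return entry >>> INDEX_BITS;         // recover the 33-bit block offset
  }

  static int indexBeforeBlock(long entry) {
    return (int) (entry & INDEX_MASK);   // recover the 31-bit cumulative index
  }

  public static void main(String[] args) {
    long entry = pack(123_456_789L, 42_000);
    System.out.println("offset=" + offset(entry) + " index=" + indexBeforeBlock(entry));
  }
}
{code}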



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] jpountz commented on a change in pull request #525: LUCENE-8585: Index-time jump-tables for DocValues

2019-01-11 Thread GitBox
jpountz commented on a change in pull request #525: LUCENE-8585: Index-time 
jump-tables for DocValues
URL: https://github.com/apache/lucene-solr/pull/525#discussion_r247271502
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/codecs/lucene80/IndexedDISI.java
 ##
 @@ -0,0 +1,626 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.codecs.lucene80;
+
+import java.io.DataInput;
+import java.io.IOException;
+
+import org.apache.lucene.search.DocIdSetIterator;
+import org.apache.lucene.store.IndexInput;
+import org.apache.lucene.store.IndexOutput;
+import org.apache.lucene.store.RandomAccessInput;
+import org.apache.lucene.util.ArrayUtil;
+import org.apache.lucene.util.BitSetIterator;
+import org.apache.lucene.util.FixedBitSet;
+import org.apache.lucene.util.RoaringDocIdSet;
+
+/**
+ * Disk-based implementation of a {@link DocIdSetIterator} which can return
+ * the index of the current document, i.e. the ordinal of the current document
+ * among the list of documents that this iterator can return. This is useful
+ * to implement sparse doc values by only having to encode values for documents
+ * that actually have a value.
+ * Implementation-wise, this {@link DocIdSetIterator} is inspired by
+ * {@link RoaringDocIdSet roaring bitmaps} and encodes ranges of {@code 65536}
+ * documents independently and picks between 3 encodings depending on the
+ * density of the range:
+ *   {@code ALL} if the range contains 65536 documents exactly,
+ *   {@code DENSE} if the range contains 4096 documents or more; in that
+ *   case documents are stored in a bit set,
+ *   {@code SPARSE} otherwise, and the lower 16 bits of the doc IDs are
+ *   stored in a {@link DataInput#readShort() short}.
+ * 
+ * Only ranges that contain at least one value are encoded.
+ * This implementation uses 6 bytes per document in the worst-case, which 
happens
+ * in the case that all ranges contain exactly one document.
+ *
+ * 
+ * To avoid O(n) lookup time complexity, with n being the number of documents, 
two lookup
+ * tables are used: A lookup table for block offset and index, and a rank 
structure
+ * for DENSE block index lookups.
+ *
+ * The lookup table is an array of {@code int}-pairs, with a pair for each 
block. It allows for
+ * direct jumping to the block, as opposed to iteration from the current 
position and forward
+ * one block at a time.
+ *
+ * Each int-pair entry consists of 2 logical parts:
+ *
+ * The first 32 bit int holds the index (number of set bits in the blocks) up 
to just before the
+ * wanted block. The maximum number of set bits is the maximum number of 
documents, which is < 2^31.
+ *
+ * The next int holds the offset in bytes into the underlying slice. As there 
is a maximum of 2^16
+ * blocks, it follows that the maximum size of any block must not exceed 2^15 
bytes to avoid
+ * overflow (2^16 bytes if the int is treated as unsigned). This is currently 
the case, with the
+ * largest block being DENSE and using 2^13 + 36 bytes.
+ *
+ * The cache overhead is numDocs/1024 bytes.
+ *
+ * Note: There are 4 types of blocks: ALL, DENSE, SPARSE and non-existing (0 
set bits).
+ * In the case of non-existing blocks, the entry in the lookup table has index 
equal to the
+ * previous entry and offset equal to the next non-empty block.
+ *
+ * The block lookup table is stored at the end of the total block structure.
+ *
+ *
+ * The rank structure for DENSE blocks is an array of byte-pairs with an entry 
for each
+ * sub-block (default 512 bits) out of the 65536 bits in the outer DENSE block.
+ *
+ * Each rank-entry states the number of set bits within the block up to the bit
+ * before the bit positioned at the start of the sub-block.
+ * Note that the rank entry of the first sub-block is always 0 and that the last
+ * entry can at most be 65536-2 = 65534 and thus will always fit into a byte-pair
+ * of 16 bits.
+ *
+ * The rank structure for a given DENSE block is stored at the beginning of 
the DENSE block.
+ * This ensures locality and keeps logistics simple.
+ *
+ * @lucene.internal
+ */
+final class IndexedDISI extends DocIdSetIterator {
+
+  // jump-table time/space trade-offs to consider:
+  // 
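
As a reading aid for the javadoc above, a minimal sketch of consulting the block lookup table, assuming a flat sequence of int-pairs (index, then byte offset) addressed by block number; the exact on-disk layout in this patch may differ.

{code:java}
// Sketch only, not the patch's code. One (index, offset) int-pair per block.
import java.io.IOException;
import org.apache.lucene.store.RandomAccessInput;

final class BlockLookupSketch {
  private final RandomAccessInput jumpTable;

  BlockLookupSketch(RandomAccessInput jumpTable) {
    this.jumpTable = jumpTable;
  }

  /** Number of set bits before the block containing {@code target}. */
  int indexBeforeBlock(int target) throws IOException {
    int block = target >>> 16;                // 65536 documents per block
    return jumpTable.readInt(block * 8L);     // first int of the pair
  }

  /** Byte offset of the block containing {@code target} in the underlying slice. */
  int blockOffset(int target) throws IOException {
    int block = target >>> 16;
    return jumpTable.readInt(block * 8L + 4); // second int of the pair
  }
}
{code}

With such a table, advancing to an arbitrary target becomes a direct seek to its block plus a scan inside that block, instead of iterating forward one block at a time.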

[GitHub] jpountz commented on a change in pull request #525: LUCENE-8585: Index-time jump-tables for DocValues

2019-01-11 Thread GitBox
jpountz commented on a change in pull request #525: LUCENE-8585: Index-time 
jump-tables for DocValues
URL: https://github.com/apache/lucene-solr/pull/525#discussion_r247271312
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/codecs/lucene80/IndexedDISI.java
 ##
 @@ -0,0 +1,626 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.codecs.lucene80;
+
+import java.io.DataInput;
+import java.io.IOException;
+
+import org.apache.lucene.search.DocIdSetIterator;
+import org.apache.lucene.store.IndexInput;
+import org.apache.lucene.store.IndexOutput;
+import org.apache.lucene.store.RandomAccessInput;
+import org.apache.lucene.util.ArrayUtil;
+import org.apache.lucene.util.BitSetIterator;
+import org.apache.lucene.util.FixedBitSet;
+import org.apache.lucene.util.RoaringDocIdSet;
+
+/**
+ * Disk-based implementation of a {@link DocIdSetIterator} which can return
+ * the index of the current document, i.e. the ordinal of the current document
+ * among the list of documents that this iterator can return. This is useful
+ * to implement sparse doc values by only having to encode values for documents
+ * that actually have a value.
+ * Implementation-wise, this {@link DocIdSetIterator} is inspired by
+ * {@link RoaringDocIdSet roaring bitmaps} and encodes ranges of {@code 65536}
+ * documents independently and picks between 3 encodings depending on the
+ * density of the range:
+ *   {@code ALL} if the range contains 65536 documents exactly,
+ *   {@code DENSE} if the range contains 4096 documents or more; in that
+ *   case documents are stored in a bit set,
+ *   {@code SPARSE} otherwise, and the lower 16 bits of the doc IDs are
+ *   stored in a {@link DataInput#readShort() short}.
+ * 
+ * Only ranges that contain at least one value are encoded.
+ * This implementation uses 6 bytes per document in the worst-case, which 
happens
+ * in the case that all ranges contain exactly one document.
+ *
+ * 
+ * To avoid O(n) lookup time complexity, with n being the number of documents, 
two lookup
+ * tables are used: A lookup table for block offset and index, and a rank 
structure
+ * for DENSE block index lookups.
+ *
+ * The lookup table is an array of {@code int}-pairs, with a pair for each 
block. It allows for
+ * direct jumping to the block, as opposed to iteration from the current 
position and forward
+ * one block at a time.
+ *
+ * Each int-pair entry consists of 2 logical parts:
+ *
+ * The first 32 bit int holds the index (number of set bits in the blocks) up 
to just before the
+ * wanted block. The maximum number of set bits is the maximum number of 
documents, which is < 2^31.
+ *
+ * The next int holds the offset in bytes into the underlying slice. As there 
is a maximum of 2^16
+ * blocks, it follows that the maximum size of any block must not exceed 2^15 
bytes to avoid
+ * overflow (2^16 bytes if the int is treated as unsigned). This is currently 
the case, with the
+ * largest block being DENSE and using 2^13 + 36 bytes.
+ *
+ * The cache overhead is numDocs/1024 bytes.
+ *
+ * Note: There are 4 types of blocks: ALL, DENSE, SPARSE and non-existing (0 
set bits).
+ * In the case of non-existing blocks, the entry in the lookup table has index 
equal to the
+ * previous entry and offset equal to the next non-empty block.
+ *
+ * The block lookup table is stored at the end of the total block structure.
+ *
+ *
+ * The rank structure for DENSE blocks is an array of byte-pairs with an entry 
for each
+ * sub-block (default 512 bits) out of the 65536 bits in the outer DENSE block.
+ *
+ * Each rank-entry states the number of set bits within the block up to the bit
+ * before the bit positioned at the start of the sub-block.
+ * Note that the rank entry of the first sub-block is always 0 and that the last
+ * entry can at most be 65536-2 = 65534 and thus will always fit into a byte-pair
+ * of 16 bits.
+ *
+ * The rank structure for a given DENSE block is stored at the beginning of 
the DENSE block.
+ * This ensures locality and keeps logistics simple.
+ *
+ * @lucene.internal
+ */
+final class IndexedDISI extends DocIdSetIterator {
+
+  // jump-table time/space trade-offs to consider:
+  // 
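
To make the DENSE rank description concrete, here is a minimal sketch of an in-block index lookup, assuming one 16-bit rank entry per 512-bit sub-block as described in the javadoc; names and layout are illustrative, not the patch's code.

{code:java}
// Sketch only. bits: the 1024 longs (65536 bits) of one DENSE block.
// rank: 128 unsigned 16-bit entries, rank[i] = set bits before sub-block i.
final class DenseRankSketch {
  static int indexWithinBlock(long[] bits, char[] rank, int docInBlock) {
    int subBlock = docInBlock >>> 9;            // 512 bits per sub-block
    int index = rank[subBlock];                 // skip all preceding sub-blocks
    int targetWord = docInBlock >>> 6;          // long holding the target bit
    for (int word = subBlock << 3; word < targetWord; word++) {
      index += Long.bitCount(bits[word]);       // whole longs after the rank entry
    }
    long mask = (1L << (docInBlock & 63)) - 1;  // bits strictly before the target
    index += Long.bitCount(bits[targetWord] & mask);
    return index;
  }
}
{code}

The loop touches at most 7 longs, so the cost of a lookup is bounded regardless of how far into the block the target falls.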

[jira] [Commented] (LUCENE-8634) LatLonShape: Query with the same polygon that is indexed might not match

2019-01-11 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740808#comment-16740808
 ] 

Lucene/Solr QA commented on LUCENE-8634:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  0m 16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  0m 16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  0m 16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
41s{color} | {color:green} sandbox in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | LUCENE-8634 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12954596/LUCENE-8634.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP 
Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-LUCENE-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / dcc9ffe |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 20 2018 |
| Default Java | 1.8.0_191 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/150/testReport/ |
| modules | C: lucene/sandbox U: lucene/sandbox |
| Console output | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/150/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> LatLonShape: Query with the same polygon that is indexed might not match
> 
>
> Key: LUCENE-8634
> URL: https://issues.apache.org/jira/browse/LUCENE-8634
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/sandbox
>Affects Versions: 8.0, 7.7, master (9.0)
>Reporter: Ignacio Vera
>Priority: Major
> Attachments: LUCENE-8634.patch, LUCENE-8634.patch
>
>
> If a polygon with a degenerate dimension is indexed and then an intersects 
> query is performed with the same polygon, it might result in an empty result. 
> For example, this polygon with a degenerate longitude range:
> POLYGON((1.401298464324817E-45 22.0, 1.401298464324817E-45 69.0, 
> 4.8202184588118395E-40 69.0, 4.8202184588118395E-40 22.0, 
> 1.401298464324817E-45 22.0))
>  
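
A hedged reproduction sketch of the scenario described above: index the degenerate polygon with LatLonShape and query with the same polygon. It assumes the 8.x sandbox API (createIndexableFields, newPolygonQuery with QueryRelation); it is an illustration, not the attached test.

{code:java}
import java.nio.file.Files;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.LatLonShape;
import org.apache.lucene.geo.Polygon;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class DegeneratePolygonRepro {
  public static void main(String[] args) throws Exception {
    double[] lats = {22.0, 69.0, 69.0, 22.0, 22.0};
    double[] lons = {1.401298464324817E-45, 1.401298464324817E-45,
                     4.8202184588118395E-40, 4.8202184588118395E-40,
                     1.401298464324817E-45};
    Polygon polygon = new Polygon(lats, lons);

    try (Directory dir = FSDirectory.open(Files.createTempDirectory("lucene8634"));
         IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig())) {
      Document doc = new Document();
      for (Field field : LatLonShape.createIndexableFields("shape", polygon)) {
        doc.add(field); // tessellated triangles of the polygon
      }
      writer.addDocument(doc);
      writer.commit();

      try (DirectoryReader reader = DirectoryReader.open(dir)) {
        Query query = LatLonShape.newPolygonQuery(
            "shape", LatLonShape.QueryRelation.INTERSECTS, polygon);
        System.out.println("hits=" + new IndexSearcher(reader).count(query));
        // Expected 1; per this issue the degenerate longitude range may yield 0.
      }
    }
  }
}
{code}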



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-NightlyTests-master - Build # 45 - Still Failing

2019-01-11 Thread Apache Jenkins Server
Build: 
https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-master/45/

4 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=7696, name=Thread-2047, 
state=RUNNABLE, group=TGRP-FullSolrCloudDistribCmdsTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=7696, name=Thread-2047, state=RUNNABLE, 
group=TGRP-FullSolrCloudDistribCmdsTest]
at 
__randomizedtesting.SeedInfo.seed([CE7B6F721C0D523A:462F50A8B2F13FC2]:0)
Caused by: java.lang.RuntimeException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:35806//control_collection
at __randomizedtesting.SeedInfo.seed([CE7B6F721C0D523A]:0)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest$1IndexThread.run(FullSolrCloudDistribCmdsTest.java:638)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occured 
while waiting response from server at: 
http://127.0.0.1:35806//control_collection
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:661)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:256)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:245)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:224)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest$1IndexThread.run(FullSolrCloudDistribCmdsTest.java:636)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at 
org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
at 
org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153)
at 
org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:282)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
at 
org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
at 
org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:165)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
at 
org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
at 
org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at 
org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at 
org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:549)
... 5 more


FAILED:  org.apache.solr.cloud.RestartWhileUpdatingTest.test

Error Message:
There are still nodes recoverying - waited for 320 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 320 
seconds
at 
__randomizedtesting.SeedInfo.seed([CE7B6F721C0D523A:462F50A8B2F13FC2]:0)
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:195)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:1038)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1595)
at 
org.apache.solr.cloud.RestartWhileUpdatingTest.test(RestartWhileUpdatingTest.java:144)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at 

Re: [NOTICE] Mandatory migration of git repositories to gitbox.apache.org

2019-01-11 Thread David Smiley
On Fri, Jan 11, 2019 at 3:14 PM Steve Rowe  wrote:

> +1 to ask Infra for an auto redirect for the links in all the existing
> JIRA comments.
>
>
+1 to that!

Please post the JIRA INFRA link here so we can follow.  Alex, if you're too
busy to get to it then I will.  Hopefully just a <=15min thing.

~ David
-- 
Lucene/Solr Search Committer (PMC), Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Reopened] (SOLR-5211) updating parent as childless makes old children orphans

2019-01-11 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley reopened SOLR-5211:


Reopening to ensure we _do something_.  If there's no further discussion soon 
then I'll update our documentation to point out that delete-by-id does not work 
for child documents.

> updating parent as childless makes old children orphans
> ---
>
> Key: SOLR-5211
> URL: https://issues.apache.org/jira/browse/SOLR-5211
> Project: Solr
>  Issue Type: Sub-task
>  Components: update
>Affects Versions: 4.5, 6.0
>Reporter: Mikhail Khludnev
>Assignee: David Smiley
>Priority: Blocker
> Fix For: 8.0
>
> Attachments: SOLR-5211.patch, SOLR-5211.patch, SOLR-5211.patch, 
> SOLR-5211.patch, SOLR-5211.patch
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> If I have a parent with children in the index, I can send an update omitting 
> the children. As a result, the old children become orphaned. 
> I suppose the separate _root_ field makes much trouble. I propose to extend the 
> notion of uniqueKey and let it span across blocks, which makes updates 
> unambiguous.  
> WDYT? Would you like to see a test that proves this issue?
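
For anyone following along, a hedged SolrJ sketch of the reported behaviour: a parent indexed with a child, then re-sent without it, which leaves the old child behind as an orphan. The collection URL and field names are illustrative only.

{code:java}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class OrphanedChildrenRepro {
  public static void main(String[] args) throws Exception {
    // Illustrative URL/collection; adjust to a real nested-documents collection.
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/nested").build()) {
      SolrInputDocument parent = new SolrInputDocument();
      parent.addField("id", "parent-1");
      SolrInputDocument child = new SolrInputDocument();
      child.addField("id", "child-1");
      parent.addChildDocument(child);
      client.add(parent);
      client.commit();

      // Re-send the same parent without its child block.
      SolrInputDocument childless = new SolrInputDocument();
      childless.addField("id", "parent-1");
      client.add(childless);
      client.commit();

      // id:child-1 still matches: the old child was not removed with its parent.
      long orphans = client.query(new SolrQuery("id:child-1")).getResults().getNumFound();
      System.out.println("orphaned children: " + orphans);
    }
  }
}
{code}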



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13051) Improve TimeRoutedAlias preemptive create to play nice with tests

2019-01-11 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740784#comment-16740784
 ] 

David Smiley commented on SOLR-13051:
-

Avoid the CHANGES.txt IMO.  I don't think there's a policy here.  I might be 
more inclined to if the person who substantively did the change were a 
contributor, so as to give a bit more recognition, but IMO it's better for 
CHANGES.txt to be a useful document, and an issue like this is noise that 
dilutes the usefulness.

BTW thanks for working on this and I like {{SolrCore.runAsync}}

> Improve TimeRoutedAlias preemptive create to play nice with tests
> -
>
> Key: SOLR-13051
> URL: https://issues.apache.org/jira/browse/SOLR-13051
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: 8.0, 7.7
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Minor
> Attachments: SOLR-13051.patch, SOLR-13051.patch
>
>
> SOLR-12801 added AwaitsFix to TimeRoutedAliasUpdateProcessorTest.  This 
> ticket will fix the test to not require a sleep statement and remove the 
> AwaitsFix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8635) Lazy loading Lucene FST offheap using mmap

2019-01-11 Thread Michael McCandless (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740764#comment-16740764
 ] 

Michael McCandless commented on LUCENE-8635:


Also, have you confirmed that all tests pass when you switch to off heap FST 
storage always?

> Lazy loading Lucene FST offheap using mmap
> --
>
> Key: LUCENE-8635
> URL: https://issues.apache.org/jira/browse/LUCENE-8635
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/FSTs
> Environment: I used below setup for es_rally tests:
> single node i3.xlarge running ES 6.5
> es_rally was running on another i3.xlarge instance
>Reporter: Ankit Jain
>Priority: Major
> Attachments: offheap.patch, rally_benchmark.xlsx
>
>
> Currently, FST loads all the terms into heap memory during index open. This 
> causes frequent JVM OOM issues if the term size gets big. A better way of 
> doing this would be to lazily load the FST using mmap. That ensures only the 
> required terms get loaded into memory.
>  
> Lucene can expose an API for providing a list of fields to load terms offheap. 
> I'm planning to take the following approach for this:
>  # Add a boolean property fstOffHeap in FieldInfo
>  # Pass list of offheap fields to lucene during index open (ALL can be 
> special keyword for loading ALL fields offheap)
>  # Initialize the fstOffHeap property during lucene index open
>  # FieldReader invokes default FST constructor or OffHeap constructor based 
> on fstOffHeap field
>  
> I created a patch (that loads all fields offheap), did some benchmarks using 
> es_rally and results look good.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8635) Lazy loading Lucene FST offheap using mmap

2019-01-11 Thread Michael McCandless (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740757#comment-16740757
 ] 

Michael McCandless commented on LUCENE-8635:


Wow, this is impressive!  Surprising how small the change was – basically 
opening up the FST BytesStore API a bit so that we could have an impl that 
wraps an {{IndexInput}} (reading backwards) instead of a {{byte[]}}.

Can you copy/paste the rally results out of Excel here?  I'm curious what 
search-time impact you're seeing.  If it's not too much of an impact maybe we 
should consider just moving FSTs off-heap in the default codec?  We've done 
similar things recently for Lucene ... e.g. moving norms off heap.

I'll run Lucene's wikipedia benchmarks to measure the impact from our standard 
benchmarks (the nightly Lucene benchmarks).
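
To illustrate the "reading backwards" part for anyone skimming: a minimal sketch (hypothetical class, not the actual FST BytesStore API) of a reader that serves bytes in reverse order from a random-access slice of an IndexInput, which is essentially what an off-heap, mmap-backed FST reader needs.

{code:java}
import java.io.IOException;
import org.apache.lucene.store.IndexInput;
import org.apache.lucene.store.RandomAccessInput;

// Hypothetical sketch; the real change adapts FST's own bytes reader instead.
final class ReverseIndexInputReaderSketch {
  private final RandomAccessInput in;
  private long pos;

  ReverseIndexInputReaderSketch(IndexInput input, long length) throws IOException {
    this.in = input.randomAccessSlice(0, length); // e.g. a slice of an MMapDirectory file
    this.pos = length - 1;                        // FST bytes are consumed back to front
  }

  byte readByte() throws IOException {
    return in.readByte(pos--);                    // step one byte towards the start
  }

  void setPosition(long position) {
    this.pos = position;                          // arc targets jump to arbitrary offsets
  }

  long getPosition() {
    return pos;
  }
}
{code}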

> Lazy loading Lucene FST offheap using mmap
> --
>
> Key: LUCENE-8635
> URL: https://issues.apache.org/jira/browse/LUCENE-8635
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/FSTs
> Environment: I used below setup for es_rally tests:
> single node i3.xlarge running ES 6.5
> es_rally was running on another i3.xlarge instance
>Reporter: Ankit Jain
>Priority: Major
> Attachments: offheap.patch, rally_benchmark.xlsx
>
>
> Currently, FST loads all the terms into heap memory during index open. This 
> causes frequent JVM OOM issues if the term size gets big. A better way of 
> doing this would be to lazily load the FST using mmap. That ensures only the 
> required terms get loaded into memory.
>  
> Lucene can expose an API for providing a list of fields to load terms offheap. 
> I'm planning to take the following approach for this:
>  # Add a boolean property fstOffHeap in FieldInfo
>  # Pass list of offheap fields to lucene during index open (ALL can be 
> special keyword for loading ALL fields offheap)
>  # Initialize the fstOffHeap property during lucene index open
>  # FieldReader invokes default FST constructor or OffHeap constructor based 
> on fstOffHeap field
>  
> I created a patch (that loads all fields offheap), did some benchmarks using 
> es_rally and results look good.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [NOTICE] Mandatory migration of git repositories to gitbox.apache.org

2019-01-11 Thread Steve Rowe
Looks like recent JIRA auto-posts include gitbox links, e.g. 
https://issues.apache.org/jira/browse/SOLR-13051?focusedCommentId=16740737=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16740737

+1 to ask Infra for an auto redirect for the links in all the existing JIRA 
comments.

--
Steve

> On Jan 11, 2019, at 2:32 PM, Alexandre Rafalovitch  wrote:
> 
> So, it seems that all automatic commit links in the JIRA issues are
> now going to 404
> 
> E.g. https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2aae3fb
> 
> Does it mean:
> 1) We just lost direct 'check the changes' capability
> 2) The script that posts these URLs needs to be updated as well?
> 
> Any chance we can ask for automatic redirect to new (?) location for
> this request pattern?
> 
> Regards,
>   Alex.
> 
> On Thu, 10 Jan 2019 at 18:38, Uwe Schindler  wrote:
>> 
>> I changed all jobs on Policeman Jenkins with the following script in the 
>> Admin interface (thanks to Stackoverflow):
>> 
>> 
>> 
>> import hudson.plugins.git.*
>> 
>> import jenkins.*
>> 
>> import jenkins.model.*
>> 
>> 
>> 
>> def modifyGitUrl(url) {
>> 
>>  if (url=='git://git.apache.org/lucene-solr.git') {
>> 
>> return 'https://gitbox.apache.org/repos/asf/lucene-solr.git';
>> 
>>  }
>> 
>>  return url;
>> 
>> }
>> 
>> 
>> 
>> Jenkins.instance.items.each {
>> 
>>  if (it.scm instanceof GitSCM) {
>> 
>>def oldScm = it.scm
>> 
>>def newUserRemoteConfigs = oldScm.userRemoteConfigs.collect {
>> 
>>  new UserRemoteConfig(modifyGitUrl(it.url), it.name, it.refspec, 
>> it.credentialsId)
>> 
>>}
>> 
>>def newScm = new GitSCM(newUserRemoteConfigs, oldScm.branches, 
>> oldScm.doGenerateSubmoduleConfigurations,
>> 
>>oldScm.submoduleCfg, oldScm.browser, 
>> oldScm.gitTool, oldScm.extensions)
>> 
>>it.scm = newScm
>> 
>>it.save()
>> 
>>  }
>> 
>> }
>> 
>> 
>> 
>> -
>> 
>> Uwe Schindler
>> 
>> Achterdiek 19, D-28357 Bremen
>> 
>> http://www.thetaphi.de
>> 
>> eMail: u...@thetaphi.de
>> 
>> 
>> 
>> From: Uwe Schindler 
>> Sent: Thursday, January 10, 2019 11:38 PM
>> To: dev@lucene.apache.org
>> Subject: Re: [NOTICE] Mandatory migration of git repositories to 
>> gitbox.apache.org
>> 
>> 
>> 
>> Ok, thanks Steve for figuring that out. I will fix Policeman.
>> 
>> Uwe
>> 
>> Am January 10, 2019 10:25:42 PM UTC schrieb Steve Rowe :
>> 
>> Thanks for bringing this up Cassandra.
>> 
>> I looked at all the ASF Jenkins jobs' configs yesterday, and they all point 
>> to the read-only mirror at git://git.apache.org/lucene-solr.git .  Yesterday 
>> I thought that this would continue to mirror the new gitbox repo, but I 
>> guess not?  http://git.apache.org no longer lists lucene-solr.git , but the 
>> mirror still appears to exist.
>> 
>> Infra's Daniel Gruno commented yesterday on 
>> https://issues.apache.org/jira/browse/INFRA-17593 : "git.a.o should never be 
>> used or trusted as canonical, please. the mirror is _not_ listed on 
>> git.apache.org and is not updated there. Use gitbox or github, either will 
>> suffice - gitbox is a bit beefier than git-wip was, so it'll play along."
>> 
>> On https://issues.apache.org/jira/browse/INFRA-17526 Daniel wrote that he 
>> removed Hadoop's hidden -- i.e. no longer advertized at 
>> http://git.apache.org/ -- read-only git.a.o mirror at the project's request. 
>>  Should we ask for the same thing?
>> 
>> I'll go fix all the Lucene/Solr jobs' configs on ASF and my Jenkins to point 
>> to the gitbox.a.o repo.
>> 
>> Steve
>> 
>> On Jan 10, 2019, at 4:28 PM, Cassandra Targett  wrote:
>> 
>> This was done yesterday, but it appears that our Jenkins jobs need to be 
>> updated? I looked at a couple of Ref Guide builds and doc changes I made 
>> yesterday and today aren’t showing up, but it’s hard for me to tell when 
>> looking at the other jobs for artifacts to know if that’s true for all the 
>> jobs or not.
>> 
>> Could someone who knows these jobs check?
>> 
>> I think we have some wiki docs that need to be updated with the new repo 
>> address. I’ll get to it eventually unless someone has time sooner.
>> 
>> Thanks,
>> Cassandra
>> On Jan 3, 2019, 10:23 AM -0600, David Smiley , 
>> wrote:
>> 
>> https://issues.apache.org/jira/browse/INFRA-17534
>> Good questions Erick; please post as a comment to the issue.
>> 
>> On Thu, Jan 3, 2019 at 11:21 AM Erick Erickson  
>> wrote:
>> +1 and thanks!
>> 
>> Any time works for me. I assume we'll get some idea of when it'll
>> happen, I'm also assuming that the git-wip-us.apache.org will just
>> completely stop working so there's no chance of pushing something to
>> the wrong place?
>> 
>> 
>> On Thu, Jan 3, 2019 at 8:15 AM David Smiley  wrote:
>> 
>> 
>> I agree with Uwe's sentiment.  Essentially anywhere in your git remote 
>> configuration that refers to git-wip-us.apache.org will need to change to 
>> gitbox.apache.org  open up .git/config to see what I mean.  At your 
>> prerogative, you may instead work with 

[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-9.0.4) - Build # 953 - Still Unstable!

2019-01-11 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/953/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest

Error Message:
ObjectTracker found 2 object(s) that were not released!!! [ZkStateReader, 
SolrZkClient] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.common.cloud.ZkStateReader  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.common.cloud.ZkStateReader.(ZkStateReader.java:328)  
at 
org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider.connect(ZkClientClusterStateProvider.java:160)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.connect(CloudSolrClient.java:399)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:827)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:950)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:997)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)  at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)  at 
org.apache.solr.client.solrj.impl.SolrClientCloudManager.request(SolrClientCloudManager.java:115)
  at 
org.apache.solr.cloud.autoscaling.SystemLogListener.onEvent(SystemLogListener.java:126)
  at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers$TriggerListeners.fireListeners(ScheduledTriggers.java:837)
  at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers.lambda$add$3(ScheduledTriggers.java:327)
  at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)
  at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
  at java.base/java.lang.Thread.run(Thread.java:844)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.common.cloud.SolrZkClient  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:203)  
at org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:126)  at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:116)  at 
org.apache.solr.common.cloud.ZkStateReader.(ZkStateReader.java:306)  at 
org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider.connect(ZkClientClusterStateProvider.java:160)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.connect(CloudSolrClient.java:399)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:827)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:950)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:997)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)  at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)  at 
org.apache.solr.client.solrj.impl.SolrClientCloudManager.request(SolrClientCloudManager.java:115)
  at 
org.apache.solr.cloud.autoscaling.SystemLogListener.onEvent(SystemLogListener.java:126)
  at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers$TriggerListeners.fireListeners(ScheduledTriggers.java:837)
  at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers.lambda$add$3(ScheduledTriggers.java:327)
  at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)
  at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
  at java.base/java.lang.Thread.run(Thread.java:844)   expected null, but 
was:(ZkStateReader.java:328)  
at 
org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider.connect(ZkClientClusterStateProvider.java:160)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.connect(CloudSolrClient.java:399)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:827)
  at 

[jira] [Commented] (SOLR-13051) Improve TimeRoutedAlias preemptive create to play nice with tests

2019-01-11 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740741#comment-16740741
 ] 

Gus Heck commented on SOLR-13051:
-

Slight mistake on first commit message (for master), the one for 8x is what I 
really meant. Not sure if I should put this on 7x as well? Also, a procedural 
question...  Should a unit test change like this get an entry in CHANGES.txt? 
My feeling is it's just noise for someone trying to figure out what changed 
between versions.

> Improve TimeRoutedAlias preemptive create to play nice with tests
> -
>
> Key: SOLR-13051
> URL: https://issues.apache.org/jira/browse/SOLR-13051
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: 8.0, 7.7
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Minor
> Attachments: SOLR-13051.patch, SOLR-13051.patch
>
>
> SOLR-12801 added AwaitsFix to TimeRoutedAliasUpdateProcessorTest.  This 
> ticket will fix the test to not require a sleep statement and remove the 
> AwaitsFix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13051) Improve TimeRoutedAlias preemptive create to play nice with tests

2019-01-11 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740737#comment-16740737
 ] 

ASF subversion and git services commented on SOLR-13051:


Commit 0f1da2bc14ef1bb79e21558247ef3c72e802924e in lucene-solr's branch 
refs/heads/branch_8x from Gus Heck
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=0f1da2b ]

SOLR-13051 improve TRA update processor test
  - remove need to Thread.sleep()
  - better async mechanism linked to SolrCore lifecycle
  - add some additional tests to be a bit more thorough


> Improve TimeRoutedAlias preemptive create to play nice with tests
> -
>
> Key: SOLR-13051
> URL: https://issues.apache.org/jira/browse/SOLR-13051
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: 8.0, 7.7
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Minor
> Attachments: SOLR-13051.patch, SOLR-13051.patch
>
>
> SOLR-12801 added AwaitsFix to TimeRoutedAliasUpdateProcessorTest.  This 
> ticket will fix the test to not require a sleep statement and remove the 
> AwaitsFix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Nick Knize to the PMC

2019-01-11 Thread Joel Bernstein
Welcome Nick!

Joel Bernstein
http://joelsolr.blogspot.com/


On Fri, Jan 11, 2019 at 2:24 PM Michael McCandless <
luc...@mikemccandless.com> wrote:

> Welcome Nick!!
>
> Mike
>
> On Wed, Jan 9, 2019 at 10:12 AM Adrien Grand  wrote:
>
>> I am pleased to announce that Nick Knize has accepted the PMC's
>> invitation to join.
>>
>> Welcome Nick!
>>
>> --
>> Adrien
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>> --
> Mike McCandless
>
> http://blog.mikemccandless.com
>


[jira] [Commented] (SOLR-13051) Improve TimeRoutedAlias preemptive create to play nice with tests

2019-01-11 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740724#comment-16740724
 ] 

ASF subversion and git services commented on SOLR-13051:


Commit dcc9ffe186eb1873fcebc56382e3be34245b0ecc in lucene-solr's branch 
refs/heads/master from Gus Heck
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=dcc9ffe ]

SOLR-13051 improve TRA update processor test
  - remove some timeouts
  - better async mechanism linked to SolrCore lifecycle
  - add some additional tests to be a bit more thorough


> Improve TimeRoutedAlias preemptive create to play nice with tests
> -
>
> Key: SOLR-13051
> URL: https://issues.apache.org/jira/browse/SOLR-13051
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: 8.0, 7.7
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Minor
> Attachments: SOLR-13051.patch, SOLR-13051.patch
>
>
> SOLR-12801 added AwaitsFix to TimeRoutedAliasUpdateProcessorTest.  This 
> ticket will fix the test to not require a sleep statement and remove the 
> AwaitsFix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-11393) Unable to index field names in JSON

2019-01-11 Thread Alexandre Rafalovitch (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-11393.


> Unable to index field names in JSON
> ---
>
> Key: SOLR-11393
> URL: https://issues.apache.org/jira/browse/SOLR-11393
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 6.6.1
>Reporter: Cheburakshu
>Priority: Major
>
> I am not able to index documents with below field names in JSON doc.
> config_os_version
> location_region
> custom_var_v2
> deleted
> I get the below error
> ERROR: [doc=29128e37-c6d9-4d2b-814e-1d42f84be9b5] Error adding field 
> 'location_region'='test' msg=For input string: "test"
> The input given in admin UI /update endpoint is 
> {"location_region":"test"}
> Same error was encountered for other field names as well. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-5283) Admin UI issues in IE7

2019-01-11 Thread Alexandre Rafalovitch (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-5283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-5283.
---

> Admin UI issues in IE7
> --
>
> Key: SOLR-5283
> URL: https://issues.apache.org/jira/browse/SOLR-5283
> Project: Solr
>  Issue Type: Bug
>  Components: Admin UI
>Affects Versions: 4.4
> Environment: IE Version 7.0.5730.11 64-bit edition.
>Reporter: Erik Hatcher
>Priority: Minor
>
> A customer of ours reported:
> {code}
> IE Version 7.0.5730.11 64-bit edition.
> Result:
> Left nav area displays;
> Main area: spinning loading icon displaying the word Loading ...
> Script errors on page:
> Line: 8
> Char: 3
> Error: 'CSSStyleDeclaration' is undefined
> Code: 0
> URL: http://:/solr/js/lib/d3.js
> Line: 17
> Char: 5
> Error: Unexpected call to method or property access.
> Code: 0
> URL : http://:/solr/js/require.js
> {code}
> I've tried replicating this in a Windows virtual machine, but only have IE10 
> and have not seen this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11795) Add Solr metrics exporter for Prometheus

2019-01-11 Thread Bill Vandenberk (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740716#comment-16740716
 ] 

Bill Vandenberk commented on SOLR-11795:


I'd love to start using this, but it seems to not support auth in zk!

> Add Solr metrics exporter for Prometheus
> 
>
> Key: SOLR-11795
> URL: https://issues.apache.org/jira/browse/SOLR-11795
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.2
>Reporter: Minoru Osuka
>Assignee: Koji Sekiguchi
>Priority: Minor
> Fix For: 7.3, 8.0
>
> Attachments: SOLR-11795-10.patch, SOLR-11795-11.patch, 
> SOLR-11795-2.patch, SOLR-11795-3.patch, SOLR-11795-4.patch, 
> SOLR-11795-5.patch, SOLR-11795-6.patch, SOLR-11795-7.patch, 
> SOLR-11795-8.patch, SOLR-11795-9.patch, SOLR-11795-dev-tools.patch, 
> SOLR-11795-ref-guide.patch, SOLR-11795.patch, solr-dashboard.png, 
> solr-exporter-diagram.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> I'd like to monitor Solr using Prometheus and Grafana.
> I've already created a Solr metrics exporter for Prometheus. I'd like to 
> contribute it to the contrib directory if you don't mind.
> !solr-exporter-diagram.png|thumbnail!
> !solr-dashboard.png|thumbnail!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-7858) Make Angular UI default

2019-01-11 Thread Alexandre Rafalovitch (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-7858.
---

> Make Angular UI default
> ---
>
> Key: SOLR-7858
> URL: https://issues.apache.org/jira/browse/SOLR-7858
> Project: Solr
>  Issue Type: Bug
>  Components: Admin UI
>Reporter: Upayavira
>Assignee: Upayavira
>Priority: Minor
> Fix For: 6.0
>
> Attachments: SOLR-7858-2.patch, SOLR-7858-3.patch, SOLR-7858-4.patch, 
> SOLR-7858-fix.patch, SOLR-7858.patch, new ui link.png, original UI link.png
>
>
> Angular UI is very close to feature complete. Once SOLR-7856 is dealt with, 
> it should function well in most cases. I propose that, as soon as 5.3 has 
> been released, we make the Angular UI default, ready for the 5.4 release. We 
> can then fix any more bugs as they are found, but more importantly start 
> working on the features that were the reason for doing this work in the first 
> place.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-13018) In solr-cloud mode, It throws an error when i create a collection with schema that has fieldType containing openNLP tokenizer and filters

2019-01-11 Thread Alexandre Rafalovitch (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-13018.


> In solr-cloud mode, It throws an error when i create a collection with schema 
> that has fieldType containing openNLP tokenizer and filters
> -
>
> Key: SOLR-13018
> URL: https://issues.apache.org/jira/browse/SOLR-13018
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI, SolrCloud
>Affects Versions: 7.3.1
>Reporter: Parmeshwor Thapa
>Priority: Major
>
> Here is schema for field:
> {code:java}
> <fieldType name="text_opennlp" class="solr.TextField" positionIncrementGap="100">
>   <analyzer>
>     <tokenizer class="solr.OpenNLPTokenizerFactory"
>                tokenizerModel="en-token.bin"
>                sentenceModel="en-sent.bin"/>
>     <filter class="solr.OpenNLPPOSFilterFactory" posTaggerModel="en-pos-maxent.bin"/>
>     <filter class="solr.OpenNLPLemmatizerFilterFactory" dictionary="en-lemmatizer.txt"/>
>     <!-- further filters in the original report were lost by the mail archiver -->
>   </analyzer>
> </fieldType>
> {code}
> I have a configset with all the files (en-token.bin, en-sent.bin, ...) in the 
> same directory. Using that configset I can successfully create a Solr core in 
> standalone mode.
> But with SolrCloud (two instances on separate servers orchestrated by 
> ZooKeeper) I have the same configset on both servers, and when I try to create 
> a collection it throws an error which doesn't make any sense to me.
> {code:java}
>  $ bin/solr create -p 8984 -c  xyz -n xyz_conf -d xyz_conf
> ... ERROR: Failed to create collection 'xyz' due to: 
> {example1.com:8984_solr=org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at https://example2.com:8984/solr: Error CREATEing SolrCore 
> 'xyz_shard1_replica_n1': Unable to create core [xyz_shard1_replica_n1] Caused 
> by: Can't find resource 'solrconfig.xml' in classpath or '/configs/xyz', 
> cwd=/opt/solr-7.3.1/server}
> {code}
>  
>   
> Note: uploading configset to zookeeper also fails with error
> {code:java}
> $ bin/solr create -c xyz  -n xyz_conf -d xyz_conf
> ...
> —
> ERROR: Error uploading file 
> /opt/solr/server/solr/configsets/xyz/conf/en-pos-maxent.bin to zookeeper path 
> /configs/xyz/en-pos-maxent.bin
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-12600) Parameters mapping for query parameters to JSON query

2019-01-11 Thread Alexandre Rafalovitch (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-12600.


> Parameters mapping for query parameters to JSON query
> -
>
> Key: SOLR-12600
> URL: https://issues.apache.org/jira/browse/SOLR-12600
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.4
>Reporter: Renuka Srishti
>Assignee: Alexandre Rafalovitch
>Priority: Minor
> Fix For: 7.6
>
>
> The parameters mapping mentioned here is not right.
> start and rows work for standard query parameters.
> offset and limit work for the JSON query.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-12956) Add @since javadoc tags to the Analyzer component classes

2019-01-11 Thread Alexandre Rafalovitch (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-12956.


> Add @since javadoc tags to the Analyzer component classes
> -
>
> Key: SOLR-12956
> URL: https://issues.apache.org/jira/browse/SOLR-12956
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.5
>Reporter: Alexandre Rafalovitch
>Assignee: Alexandre Rafalovitch
>Priority: Minor
> Fix For: 7.6
>
> Attachments: SOLR-12956.patch
>
>
> Continuing work started in SOLR-11490, add @since javadoc tags to all 
> Analyzer, Tokenizer, Char and Token filter classes that are used in the 
> fieldtype definitions.
> As per the previous guidance, earliest version tag applied will be 3.1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-2767) ClassCastException when using FieldAnalysisResponse and the analyzer list contains the CharMappingFilter (or any CharFilter)

2019-01-11 Thread Alexandre Rafalovitch (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-2767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch resolved SOLR-2767.
-
Resolution: Duplicate

Should be resolved by SOLR-2834. Please open a new case if the issue still 
persists, as the underlying code has changed a lot since.

> ClassCastException when using FieldAnalysisResponse and the analyzer list 
> contains the CharMappingFilter (or any CharFilter)
> 
>
> Key: SOLR-2767
> URL: https://issues.apache.org/jira/browse/SOLR-2767
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 3.3, 4.0-ALPHA
>Reporter: Spyros Kapnissis
>Priority: Major
> Attachments: SOLR-2767.patch
>
>
> I use the FieldAnalysisResponse class in order to gather some information 
> about the analysis process. However, I get a ClassCastException (cannot 
> convert String to NamedList) thrown at 
> AnalysisResponseBase.buildPhases method if I have included the 
> CharMappingFilter in my configuration.
> It seems that a CharFilter does not create a NamedList, but a String 
> entry in the analysis response.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8177) About AnalysisResponseBase => java.lang.String cannot be cast to java.util.List

2019-01-11 Thread Alexandre Rafalovitch (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-8177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch resolved SOLR-8177.
-
Resolution: Duplicate

Should be resolved by SOLR-2834. Please open a new case if the issue still 
persists, as the underlying code has changed a lot since.

> About AnalysisResponseBase =>  java.lang.String cannot be cast to 
> java.util.List
> 
>
> Key: SOLR-8177
> URL: https://issues.apache.org/jira/browse/SOLR-8177
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java
>Affects Versions: 5.1
> Environment: centos6.5
> jdk1.7
>Reporter: kim
>Priority: Major
>  Labels: easyfix
>   Original Estimate: 5h
>  Remaining Estimate: 5h
>
> In a FieldType, e.g. text, if I add a charFilter and then use 
> FieldAnalysisRequest, 
> building the analysis phases list will lead to a java.lang.String cannot be 
> cast to java.util.List error. I have seen the source code; it seems it does 
> not handle the case where a charFilter is present. 
> Anyone know why this is? Please reply!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [NOTICE] Mandatory migration of git repositories to gitbox.apache.org

2019-01-11 Thread Alexandre Rafalovitch
So, it seems that all automatic commit links in the JIRA issues are
now going to 404

E.g. https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2aae3fb

Does it mean:
1) We just lost direct 'check the changes' capability
2) The script that posts these URLs needs to be updated as well?

Any chance we can ask for automatic redirect to new (?) location for
this request pattern?

Regards,
   Alex.

On Thu, 10 Jan 2019 at 18:38, Uwe Schindler  wrote:
>
> I changed all jobs on Policeman Jenkins with the following script in the 
> Admin interface (thanks to Stackoverflow):
>
>
>
> import hudson.plugins.git.*
>
> import jenkins.*
>
> import jenkins.model.*
>
>
>
> def modifyGitUrl(url) {
>
>   if (url=='git://git.apache.org/lucene-solr.git') {
>
>  return 'https://gitbox.apache.org/repos/asf/lucene-solr.git';
>
>   }
>
>   return url;
>
> }
>
>
>
> Jenkins.instance.items.each {
>
>   if (it.scm instanceof GitSCM) {
>
> def oldScm = it.scm
>
> def newUserRemoteConfigs = oldScm.userRemoteConfigs.collect {
>
>   new UserRemoteConfig(modifyGitUrl(it.url), it.name, it.refspec, 
> it.credentialsId)
>
> }
>
> def newScm = new GitSCM(newUserRemoteConfigs, oldScm.branches, 
> oldScm.doGenerateSubmoduleConfigurations,
>
> oldScm.submoduleCfg, oldScm.browser, 
> oldScm.gitTool, oldScm.extensions)
>
> it.scm = newScm
>
> it.save()
>
>   }
>
> }
>
>
>
> -
>
> Uwe Schindler
>
> Achterdiek 19, D-28357 Bremen
>
> http://www.thetaphi.de
>
> eMail: u...@thetaphi.de
>
>
>
> From: Uwe Schindler 
> Sent: Thursday, January 10, 2019 11:38 PM
> To: dev@lucene.apache.org
> Subject: Re: [NOTICE] Mandatory migration of git repositories to 
> gitbox.apache.org
>
>
>
> Ok, thanks Steve for figuring that out. I will fix Policeman.
>
> Uwe
>
> Am January 10, 2019 10:25:42 PM UTC schrieb Steve Rowe :
>
> Thanks for bringing this up Cassandra.
>
> I looked at all the ASF Jenkins jobs' configs yesterday, and they all point 
> to the read-only mirror at git://git.apache.org/lucene-solr.git .  Yesterday 
> I thought that this would continue to mirror the new gitbox repo, but I guess 
> not?  http://git.apache.org no longer lists lucene-solr.git , but the mirror 
> still appears to exist.
>
> Infra's Daniel Gruno commented yesterday on 
> https://issues.apache.org/jira/browse/INFRA-17593 : "git.a.o should never be 
> used or trusted as canonical, please. the mirror is _not_ listed on 
> git.apache.org and is not updated there. Use gitbox or github, either will 
> suffice - gitbox is a bit beefier than git-wip was, so it'll play along."
>
> On https://issues.apache.org/jira/browse/INFRA-17526 Daniel wrote that he 
> removed Hadoop's hidden -- i.e. no longer advertized at 
> http://git.apache.org/ -- read-only git.a.o mirror at the project's request.  
> Should we ask for the same thing?
>
> I'll go fix all the Lucene/Solr jobs' configs on ASF and my Jenkins to point 
> to the gitbox.a.o repo.
>
> Steve
>
> On Jan 10, 2019, at 4:28 PM, Cassandra Targett  wrote:
>
> This was done yesterday, but it appears that our Jenkins jobs need to be 
> updated? I looked at a couple of Ref Guide builds, and doc changes I made 
> yesterday and today aren't showing up; but looking at the other jobs' 
> artifacts, it's hard for me to tell whether that's true for all the jobs or not.
>
> Could someone who knows these jobs check?
>
> I think we have some wiki docs that need to be updated with the new repo 
> address. I’ll get to it eventually unless someone has time sooner.
>
> Thanks,
> Cassandra
> On Jan 3, 2019, 10:23 AM -0600, David Smiley , 
> wrote:
>
> https://issues.apache.org/jira/browse/INFRA-17534
> Good questions Erick; please post as a comment to the issue.
>
> On Thu, Jan 3, 2019 at 11:21 AM Erick Erickson  
> wrote:
> +1 and thanks!
>
> Any time works for me. I assume we'll get some idea of when it'll
> happen. I'm also assuming that git-wip-us.apache.org will just
> completely stop working, so there's no chance of pushing something to
> the wrong place?
>
>
> On Thu, Jan 3, 2019 at 8:15 AM David Smiley  wrote:
>
>
>  I agree with Uwe's sentiment.  Essentially, anywhere your git remote 
> configuration refers to git-wip-us.apache.org will need to change to 
> gitbox.apache.org; open up .git/config to see what I mean.  At your 
> prerogative, you may instead work with GitHub's mirror exclusively -- a new 
> option.  If you want to do that, see https://gitbox.apache.org, which is 
> pretty helpful (do read it no matter what you do) and includes a link to the 
> "account linking page".  Personally, I intend to commit to gitbox, but I will 
> also link my accounts, as I suspect this will enable more direct use of the 
> GitHub website, like closing old pull requests (unconfirmed).
>
>  On Thu, Jan 3, 2019 at 10:57 AM Alan Woodward  wrote:
>
>
>  +1, thanks for volunteering David!
>
>
>  On 3 Jan 2019, at 15:41, Jan Høydahl  wrote:
>
>  +1
>
>  --
>  Jan Høydahl, 

Re: Welcome Nick Knize to the PMC

2019-01-11 Thread Michael McCandless
Welcome Nick!!

Mike

On Wed, Jan 9, 2019 at 10:12 AM Adrien Grand  wrote:

> I am pleased to announce that Nick Knize has accepted the PMC's
> invitation to join.
>
> Welcome Nick!
>
> --
> Adrien
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
> --
Mike McCandless

http://blog.mikemccandless.com


[jira] [Commented] (SOLR-2767) ClassCastException when using FieldAnalysisResponse and the analyzer list contains the CharMappingFilter (or any CharFilter)

2019-01-11 Thread Eckhard jost (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-2767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740669#comment-16740669
 ] 

Eckhard jost commented on SOLR-2767:


I think it is already solved there.

> ClassCastException when using FieldAnalysisResponse and the analyzer list 
> contains the CharMappingFilter (or any CharFilter)
> 
>
> Key: SOLR-2767
> URL: https://issues.apache.org/jira/browse/SOLR-2767
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 3.3, 4.0-ALPHA
>Reporter: Spyros Kapnissis
>Priority: Major
> Attachments: SOLR-2767.patch
>
>
> I use the FieldAnalysisResponse class in order to gather some information 
> about the analysis process. However, I get a ClassCastException (cannot 
> convert String to NamedList) thrown at 
> AnalysisResponseBase.buildPhases method if I have included the 
> CharMappingFilter in my configuration.
> It seems that a CharFilter does not create a NamedList, but a String 
> entry in the analysis response.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8635) Lazy loading Lucene FST offheap using mmap

2019-01-11 Thread Ankit Jain (JIRA)
Ankit Jain created LUCENE-8635:
--

 Summary: Lazy loading Lucene FST offheap using mmap
 Key: LUCENE-8635
 URL: https://issues.apache.org/jira/browse/LUCENE-8635
 Project: Lucene - Core
  Issue Type: New Feature
  Components: core/FSTs
 Environment: I used the following setup for the es_rally tests:

a single-node i3.xlarge running ES 6.5

es_rally was running on another i3.xlarge instance
Reporter: Ankit Jain
 Attachments: offheap.patch, rally_benchmark.xlsx

Currently, the FST loads all terms into heap memory during index open. This 
causes frequent JVM OOM issues if the terms dictionary gets big. A better 
approach would be to lazily load the FST using mmap, which ensures only the 
required terms get loaded into memory.
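
As a rough illustration of the trade-off (this is not Lucene's FST API, just the 
heap-copy vs. mmap contrast the proposal relies on, using only JDK classes):

    import java.io.IOException;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    // Eager loading: every byte of the terms index is copied onto the JVM heap
    // at open time, whether or not it is ever looked up.
    static byte[] loadOnHeap(Path fstFile) throws IOException {
      return Files.readAllBytes(fstFile);
    }

    // Lazy loading: the file is mmap'ed; the OS pages bytes in only when a term
    // lookup actually touches them, and the memory lives outside the JVM heap.
    static MappedByteBuffer loadOffHeap(Path fstFile) throws IOException {
      try (FileChannel channel = FileChannel.open(fstFile, StandardOpenOption.READ)) {
        return channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
      }
    }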

 
Lucene can expose an API for providing the list of fields whose terms should be 
loaded off-heap. I'm planning to take the following approach:
 # Add a boolean property fstOffHeap in FieldInfo
 # Pass the list of off-heap fields to Lucene during index open (ALL can be a 
special keyword for loading all fields off-heap)
 # Initialize the fstOffHeap property during Lucene index open
 # FieldReader invokes the default FST constructor or the off-heap constructor 
based on the fstOffHeap field

 
I created a patch (that loads all fields off-heap), ran some benchmarks using 
es_rally, and the results look good.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8177) About AnalysisResponseBase => java.lang.String cannot be cast to java.util.List

2019-01-11 Thread Eckhard jost (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-8177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740671#comment-16740671
 ] 

Eckhard jost commented on SOLR-8177:


I think it is already solved there.

> About AnalysisResponseBase =>  java.lang.String cannot be cast to 
> java.util.List
> 
>
> Key: SOLR-8177
> URL: https://issues.apache.org/jira/browse/SOLR-8177
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java
>Affects Versions: 5.1
> Environment: centos6.5
> jdk1.7
>Reporter: kim
>Priority: Major
>  Labels: easyfix
>   Original Estimate: 5h
>  Remaining Estimate: 5h
>
> In a FieldType (e.g. text), if I add a charFilter and then use 
> FieldAnalysisRequest, building the analysis phases list leads to a 
> java.lang.String cannot be cast to java.util.List error. I have looked at the 
> source code; it does not seem to handle the case where a charFilter is present.
> Does anyone know why this is? Please reply!
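
For reference, the exception above is thrown on the SolrJ side while the response 
is parsed. A minimal sketch of a reproduction, under assumed names (the Solr URL, 
the core "mycore", and the "text" field type with a charFilter are placeholders):

    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.request.FieldAnalysisRequest;
    import org.apache.solr.client.solrj.response.FieldAnalysisResponse;

    static void reproduce() throws Exception {
      // Assumes a core "mycore" whose "text" field type has a charFilter configured.
      try (HttpSolrClient client =
               new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build()) {
        FieldAnalysisRequest request = new FieldAnalysisRequest();
        request.addFieldType("text");
        request.setFieldValue("some text to analyze");
        // Before the fix, the ClassCastException surfaces here, while the
        // FieldAnalysisResponse is built from the charFilter entry of the response.
        FieldAnalysisResponse response = request.process(client);
      }
    }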



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2834) SolrJ Field and Document Analyzes Response classes cannot parse CharFilter information

2019-01-11 Thread Eckhard jost (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-2834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740667#comment-16740667
 ] 

Eckhard jost commented on SOLR-2834:


Doesn't this also fix the issues SOLR-2767 and SOLR-8177?

Maybe they can also be closed?

> SolrJ Field and Document Analyzes Response classes cannot parse CharFilter 
> information
> --
>
> Key: SOLR-2834
> URL: https://issues.apache.org/jira/browse/SOLR-2834
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java, Schema and Analysis
>Affects Versions: 3.4, 3.6, 4.2, 7.4
>Reporter: Shane
>Assignee: Alexandre Rafalovitch
>Priority: Major
>  Labels: patch
> Fix For: 7.5
>
> Attachments: AnalysisResponseBase.patch, SOLR-2834.patch, 
> SOLR-2834.patch
>
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> When using FieldAnalysisRequest.java to analyze a field, a 
> ClassCastException is thrown if the schema defines the filter 
> org.apache.solr.analysis.HTMLStripCharFilter.  The exception is:
> java.lang.ClassCastException: java.lang.String cannot be cast to 
> java.util.List
>at 
> org.apache.solr.client.solrj.response.AnalysisResponseBase.buildPhases(AnalysisResponseBase.java:69)
>at 
> org.apache.solr.client.solrj.response.FieldAnalysisResponse.setResponse(FieldAnalysisResponse.java:66)
>at 
> org.apache.solr.client.solrj.request.FieldAnalysisRequest.process(FieldAnalysisRequest.java:107)
> My schema definition is:
> [fieldType/analyzer definition including the HTMLStripCharFilter; the XML 
> markup was stripped by the mail archive]
> The response part is:
> [analysis response XML, also stripped by the archive; its text content was 
> "testing analysis"]
> ...
> A simplistic fix would be to test if the Entry value is an instance of List.
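
A minimal sketch of that instanceof guard, assuming org.apache.solr.common.util.NamedList 
and java.util.Map.Entry are imported (this is not the committed SOLR-2834 patch, 
just the shape of the check):

    // Walk one analysis phase and only parse entries whose value is the NamedList
    // produced by a tokenizer or token filter; a CharFilter reports the rewritten
    // text as a plain String, so there is no token list to cast.
    static void walkPhase(NamedList<Object> phase) {
      for (Map.Entry<String, Object> component : phase) {
        Object value = component.getValue();
        if (!(value instanceof NamedList)) {
          continue;  // CharFilter output: skip instead of casting
        }
        NamedList<?> tokens = (NamedList<?>) value;
        // ... build the token entries from 'tokens' as buildPhases already does ...
      }
    }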



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-8.x-Linux (64bit/jdk-12-ea+23) - Build # 16 - Unstable!

2019-01-11 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/16/
Java: 64bit/jdk-12-ea+23 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPISolrJTest.testCreateCollWithDefaultClusterPropertiesOldFormat

Error Message:
expected:<[2]> but was:<[null]>

Stack Trace:
org.junit.ComparisonFailure: expected:<[2]> but was:<[null]>
at 
__randomizedtesting.SeedInfo.seed([94C9B8CA63A77EB8:FE1A972F17CC5207]:0)
at org.junit.Assert.assertEquals(Assert.java:115)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.solr.cloud.CollectionsAPISolrJTest.testCreateCollWithDefaultClusterPropertiesOldFormat(CollectionsAPISolrJTest.java:138)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:835)




Build Log:
[...truncated 2079 lines...]
   [junit4] JVM J2: stderr was not empty, see: 

[jira] [Commented] (SOLR-13051) Improve TimeRoutedAlias preemptive create to play nice with tests

2019-01-11 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740578#comment-16740578
 ] 

Gus Heck commented on SOLR-13051:
-

I think this has been hanging out long enough for comment, will commit after 
lunch if no objections. It needs to get in before I get going on SOLR-13131.

> Improve TimeRoutedAlias preemptive create to play nice with tests
> -
>
> Key: SOLR-13051
> URL: https://issues.apache.org/jira/browse/SOLR-13051
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: 8.0, 7.7
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Minor
> Attachments: SOLR-13051.patch, SOLR-13051.patch
>
>
> SOLR-12801 added AwaitsFix to TimeRoutedAliasUpdateProcessorTest.  This 
> ticket will fix the test to not require a sleep statement and remove the 
> AwaitsFix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-12-ea+23) - Build # 23507 - Unstable!

2019-01-11 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23507/
Java: 64bit/jdk-12-ea+23 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
expected:<154> but was:<152>

Stack Trace:
java.lang.AssertionError: expected:<154> but was:<152>
at 
__randomizedtesting.SeedInfo.seed([924CAB582A056BF9:1A18948284F90601]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:154)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1070)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1042)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-Tests-7.x - Build # 1206 - Unstable

2019-01-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/1206/

1 tests failed.
FAILED:  org.apache.solr.client.solrj.TestLBHttpSolrClient.testReliability

Error Message:
No live SolrServers available to handle this request

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request
at 
__randomizedtesting.SeedInfo.seed([7D8F311E5634888C:BC47EC58F7525925]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:731)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:983)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:998)
at 
org.apache.solr.client.solrj.TestLBHttpSolrClient.testReliability(TestLBHttpSolrClient.java:221)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)

[jira] [Commented] (SOLR-13116) Add Admin UI login support for Kerberos

2019-01-11 Thread Jason Gerlowski (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740479#comment-16740479
 ] 

Jason Gerlowski commented on SOLR-13116:


Just got a chance to test your patch.  Things look better (for Kerberos at 
least).  I've attached a screenshot showing the result:

 !improved_login_page.png! 

> Add Admin UI login support for Kerberos
> ---
>
> Key: SOLR-13116
> URL: https://issues.apache.org/jira/browse/SOLR-13116
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 8.0, 7.7
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.0, 7.7
>
> Attachments: SOLR-13116.patch, eventual_auth.png, 
> improved_login_page.png
>
>
> Spinoff from SOLR-7896. Kerberos auth plugin should get Admin UI Login 
> support.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13116) Add Admin UI login support for Kerberos

2019-01-11 Thread Jason Gerlowski (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski updated SOLR-13116:
---
Attachment: improved_login_page.png

> Add Admin UI login support for Kerberos
> ---
>
> Key: SOLR-13116
> URL: https://issues.apache.org/jira/browse/SOLR-13116
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 8.0, 7.7
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.0, 7.7
>
> Attachments: SOLR-13116.patch, eventual_auth.png, 
> improved_login_page.png
>
>
> Spinoff from SOLR-7896. Kerberos auth plugin should get Admin UI Login 
> support.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8633) Remove term weighting from interval scoring

2019-01-11 Thread Jim Ferenczi (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740469#comment-16740469
 ] 

Jim Ferenczi commented on LUCENE-8633:
--

+1 to removing the term statistics and relying solely on the number and extent 
of the intervals. Choosing the pivot is really difficult, though, and it cannot 
be computed from statistics the way the feature query does. Maybe we should have 
a default pivot of 1 and make it configurable in the constructor? We could also 
make all the feature functions available?

> Remove term weighting from interval scoring
> ---
>
> Key: LUCENE-8633
> URL: https://issues.apache.org/jira/browse/LUCENE-8633
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8633.patch
>
>
> IntervalScorer currently uses the same scoring mechanism as SpanScorer, 
> summing the IDF of all possibly matching terms from its parent 
> IntervalsSource and using that in conjunction with a sloppy frequency to 
> produce a similarity-based score.  This doesn't really make sense, however, 
> as it means that terms that don't appear in a document can still contribute 
> to the score, and appears to make scores from interval queries comparable 
> with scores from term or phrase queries when they really aren't.
> I'd like to explore a different scoring mechanism for intervals, based purely 
> on sloppy frequency and ignoring term weighting.  This should make the scores 
> easier to reason about, as well as making them useful for things like 
> proximity boosting on boolean queries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8634) LatLonShape: Query with the same polygon that is indexed might not match

2019-01-11 Thread Ignacio Vera (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Vera updated LUCENE-8634:
-
Attachment: (was: LUCENE-8634.patch)

> LatLonShape: Query with the same polygon that is indexed might not match
> 
>
> Key: LUCENE-8634
> URL: https://issues.apache.org/jira/browse/LUCENE-8634
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/sandbox
>Affects Versions: 8.0, 7.7, master (9.0)
>Reporter: Ignacio Vera
>Priority: Major
> Attachments: LUCENE-8634.patch
>
>
> If a polygon with a degenerated dimension is indexed and then an intersect 
> query is performed with the same polygon, it might result in an empty result. 
> For example this polygon with degenerated longitude:
> POLYGON((1.401298464324817E-45 22.0, 1.401298464324817E-45 69.0, 
> 4.8202184588118395E-40 69.0, 4.8202184588118395E-40 22.0, 
> 1.401298464324817E-45 22.0))
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8634) LatLonShape: Query with the same polygon that is indexed might not match

2019-01-11 Thread Ignacio Vera (JIRA)
Ignacio Vera created LUCENE-8634:


 Summary: LatLonShape: Query with the same polygon that is indexed 
might not match
 Key: LUCENE-8634
 URL: https://issues.apache.org/jira/browse/LUCENE-8634
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/sandbox
Affects Versions: 8.0, 7.7, master (9.0)
Reporter: Ignacio Vera


If a polygon with a degenerated dimension is indexed and then an intersect 
query is performed with the same polygon, it might result in an empty result. 
For example this polygon with degenerated longitude:

POLYGON((1.401298464324817E-45 22.0, 1.401298464324817E-45 69.0, 
4.8202184588118395E-40 69.0, 4.8202184588118395E-40 22.0, 1.401298464324817E-45 
22.0))

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 429 - Still unstable

2019-01-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/429/

4 tests failed.
FAILED:  org.apache.lucene.search.TestPointQueries.testRandomBinaryBig

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([81DB11C283A04F59]:0)


FAILED:  junit.framework.TestSuite.org.apache.lucene.search.TestPointQueries

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([81DB11C283A04F59]:0)


FAILED:  org.apache.solr.cloud.LIROnShardRestartTest.testAllReplicasInLIR

Error Message:
Path must not end with / character

Stack Trace:
java.lang.IllegalArgumentException: Path must not end with / character
at 
__randomizedtesting.SeedInfo.seed([F872EFFD0FFD80BC:A2EAD53B717DE75B]:0)
at org.apache.zookeeper.common.PathUtils.validatePath(PathUtils.java:58)
at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1523)
at 
org.apache.solr.common.cloud.SolrZkClient.lambda$getChildren$4(SolrZkClient.java:346)
at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:71)
at 
org.apache.solr.common.cloud.SolrZkClient.getChildren(SolrZkClient.java:346)
at 
org.apache.solr.cloud.LIROnShardRestartTest.testAllReplicasInLIR(LIROnShardRestartTest.java:168)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)

[jira] [Commented] (LUCENE-8633) Remove term weighting from interval scoring

2019-01-11 Thread Alan Woodward (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740265#comment-16740265
 ] 

Alan Woodward commented on LUCENE-8633:
---

Attached is a patch with an alternative scoring system:
* Sloppy frequency is calculated as the sum of individual interval scores.  
Each interval is scored as 1/(length - minExtent + 1), where minExtent() is a 
new method on IntervalsSource that exposes the minimum possible length of an 
interval produced by that source.  This is based on the scoring mechanism 
described in Vigna's paper on minimal interval semantics [1].
* In order to keep the score bounded so that it can be used as a proximity 
boost without wrecking max-score optimizations, the sloppy frequency is 
converted to a score using a saturation function.  I've chosen 5 as a pivot 
here more or less at random (meaning that documents containing 5 intervals of 
minimum possible length will get a score of boost * 0.5); better ways of 
choosing a pivot are welcome.

[1] 
http://vigna.di.unimi.it/ftp/papers/EfficientAlgorithmsMinimalIntervalSemantics.pdf
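
Spelled out as plain arithmetic (a sketch of the two pieces described above, not 
the patch itself; the method names are made up for the example):

    // Each matched interval contributes 1/(length - minExtent + 1), so a
    // minimal-length interval contributes exactly 1.0.
    static float intervalContribution(int length, int minExtent) {
      return 1f / (length - minExtent + 1);
    }

    // The summed contributions ("sloppy frequency") are squashed with a
    // saturation function so the result stays below the boost; with pivot = 5,
    // a sloppy frequency of 5 yields boost * 0.5, as described above.
    static float proximityScore(float sloppyFreq, float pivot, float boost) {
      return boost * sloppyFreq / (sloppyFreq + pivot);
    }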

> Remove term weighting from interval scoring
> ---
>
> Key: LUCENE-8633
> URL: https://issues.apache.org/jira/browse/LUCENE-8633
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8633.patch
>
>
> IntervalScorer currently uses the same scoring mechanism as SpanScorer, 
> summing the IDF of all possibly matching terms from its parent 
> IntervalsSource and using that in conjunction with a sloppy frequency to 
> produce a similarity-based score.  This doesn't really make sense, however, 
> as it means that terms that don't appear in a document can still contribute 
> to the score, and appears to make scores from interval queries comparable 
> with scores from term or phrase queries when they really aren't.
> I'd like to explore a different scoring mechanism for intervals, based purely 
> on sloppy frequency and ignoring term weighting.  This should make the scores 
> easier to reason about, as well as making them useful for things like 
> proximity boosting on boolean queries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8633) Remove term weighting from interval scoring

2019-01-11 Thread Alan Woodward (JIRA)
Alan Woodward created LUCENE-8633:
-

 Summary: Remove term weighting from interval scoring
 Key: LUCENE-8633
 URL: https://issues.apache.org/jira/browse/LUCENE-8633
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward
 Attachments: LUCENE-8633.patch

IntervalScorer currently uses the same scoring mechanism as SpanScorer, summing 
the IDF of all possibly matching terms from its parent IntervalsSource and 
using that in conjunction with a sloppy frequency to produce a similarity-based 
score.  This doesn't really make sense, however, as it means that terms that 
don't appear in a document can still contribute to the score, and appears to 
make scores from interval queries comparable with scores from term or phrase 
queries when they really aren't.

I'd like to explore a different scoring mechanism for intervals, based purely 
on sloppy frequency and ignoring term weighting.  This should make the scores 
easier to reason about, as well as making them useful for things like proximity 
boosting on boolean queries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8633) Remove term weighting from interval scoring

2019-01-11 Thread Alan Woodward (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-8633:
--
Attachment: LUCENE-8633.patch

> Remove term weighting from interval scoring
> ---
>
> Key: LUCENE-8633
> URL: https://issues.apache.org/jira/browse/LUCENE-8633
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8633.patch
>
>
> IntervalScorer currently uses the same scoring mechanism as SpanScorer, 
> summing the IDF of all possibly matching terms from its parent 
> IntervalsSource and using that in conjunction with a sloppy frequency to 
> produce a similarity-based score.  This doesn't really make sense, however, 
> as it means that terms that don't appear in a document can still contribute 
> to the score, and appears to make scores from interval queries comparable 
> with scores from term or phrase queries when they really aren't.
> I'd like to explore a different scoring mechanism for intervals, based purely 
> on sloppy frequency and ignoring term weighting.  This should make the scores 
> easier to reason about, as well as making them useful for things like 
> proximity boosting on boolean queries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8525) throw more specific exception on data corruption

2019-01-11 Thread Simon Willnauer (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740186#comment-16740186
 ] 

Simon Willnauer commented on LUCENE-8525:
-

I do agree with [~rcmuir] here. There is not much to do in terms of detecting 
this particular problem on DataInput and friends. One way to improve this would 
certainly be the wording of the javadoc: we can clarify that detecting 
_CorruptIndexException_ is best effort.
Another idea is to checksum the entire file before we read the commit; we can 
either do this on the Elasticsearch end or improve _SegmentInfos#readCommit_. 
Reading the file twice isn't a big deal, I guess.
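
As a sketch of the second idea (an assumption about how the caller side could 
look, not a proposed patch; CodecUtil.checksumEntireFile already throws 
CorruptIndexException on a mismatch):

    import org.apache.lucene.codecs.CodecUtil;
    import org.apache.lucene.index.SegmentInfos;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.IOContext;
    import org.apache.lucene.store.IndexInput;

    static SegmentInfos readCommitVerified(Directory dir, String segmentsFileName)
        throws java.io.IOException {
      // First pass: verify the whole segments_N file against its footer checksum,
      // so truncation or corruption surfaces as CorruptIndexException rather than
      // a generic IOException from somewhere inside the parser.
      try (IndexInput in = dir.openInput(segmentsFileName, IOContext.READONCE)) {
        CodecUtil.checksumEntireFile(in);
      }
      // Second pass: parse the commit as before.
      return SegmentInfos.readCommit(dir, segmentsFileName);
    }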

> throw more specific exception on data corruption
> 
>
> Key: LUCENE-8525
> URL: https://issues.apache.org/jira/browse/LUCENE-8525
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Vladimir Dolzhenko
>Priority: Major
>
> DataInput throws generic IOException if data looks odd
> [DataInput:141|https://github.com/apache/lucene-solr/blob/1d85cd783863f75cea133fb9c452302214165a4d/lucene/core/src/java/org/apache/lucene/store/DataInput.java#L141]
> there are other examples like 
> [BufferedIndexInput:219|https://github.com/apache/lucene-solr/blob/1d85cd783863f75cea133fb9c452302214165a4d/lucene/core/src/java/org/apache/lucene/store/BufferedIndexInput.java#L219],
>  
> [CompressionMode:226|https://github.com/apache/lucene-solr/blob/1d85cd783863f75cea133fb9c452302214165a4d/lucene/core/src/java/org/apache/lucene/codecs/compressing/CompressionMode.java#L226]
>  and maybe 
> [DocIdsWriter:81|https://github.com/apache/lucene-solr/blob/1d85cd783863f75cea133fb9c452302214165a4d/lucene/core/src/java/org/apache/lucene/util/bkd/DocIdsWriter.java#L81]
> That leads to some difficulties - see [elasticsearch 
> #34322|https://github.com/elastic/elasticsearch/issues/34322]
> It would be better if it throws more specific exception.
> As a consequence 
> [SegmentInfos.readCommit|https://github.com/apache/lucene-solr/blob/1d85cd783863f75cea133fb9c452302214165a4d/lucene/core/src/java/org/apache/lucene/index/SegmentInfos.java#L281]
>  violates its own contract
> {code:java}
> /**
>* @throws CorruptIndexException if the index is corrupt
>* @throws IOException if there is a low-level IO error
>*/
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org