[GitHub] lucene-solr pull request #487: LUCENE-8557: LeafReader.getFieldInfos should ...

2018-11-02 Thread tpunder
GitHub user tpunder opened a pull request:

https://github.com/apache/lucene-solr/pull/487

LUCENE-8557: LeafReader.getFieldInfos should always return the same instance



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tpunder/lucene-solr LUCENE-8557

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/487.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #487


commit 2bc1a3ee05718ea38cf2fc907205426d71ee5858
Author: Tim Underwood 
Date:   2018-10-17T15:33:16Z

LUCENE-8557: LeafReader.getFieldInfos should always return the same instance




---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8557) LeafReader.getFieldInfos should always return the same instance

2018-11-02 Thread Tim Underwood (JIRA)
Tim Underwood created LUCENE-8557:
-

 Summary: LeafReader.getFieldInfos should always return the same 
instance
 Key: LUCENE-8557
 URL: https://issues.apache.org/jira/browse/LUCENE-8557
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 7.5
Reporter: Tim Underwood


Most implementations of LeafReader cache an instance of FieldInfos, which is 
returned by the LeafReader.getFieldInfos() method.  A few implementations 
currently do not, and this can cause performance problems.

The most notable example is the lack of caching in Solr's 
SlowCompositeReaderWrapper, which caused unexpected performance slowdowns when 
using Solr's JSON Facets compared to the legacy facets.

This proposed change is mostly relevant to Solr but touches a few Lucene 
classes.  Specifically:

*1.* Adds a check to TestUtil.checkReader to verify that 
LeafReader.getFieldInfos() returns the same instance:

 
{code:java}
// FieldInfos should be cached at the reader and always return the same instance
if (reader.getFieldInfos() != reader.getFieldInfos()) {
  throw new RuntimeException("getFieldInfos() returned different instances for class: " + reader.getClass());
}
{code}
I'm not entirely sure this check is wanted or needed, but adding it uncovered 
most of the other LeafReader implementations that were not caching FieldInfos. 
I'm happy to remove this part of the patch, though.
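For illustration, here is a standalone sketch of what an identity check like the one in #1 catches. The classes are hypothetical stand-ins, not Lucene's real LeafReader/FieldInfos: a getter that rebuilds its result on every call is equal by value but fails a `==` comparison.

```java
import java.util.Collections;
import java.util.List;

// Hypothetical stand-ins used only to illustrate the identity (==) check
// proposed for TestUtil.checkReader; not Lucene's actual API.
interface InfoSource {
  List<String> getFieldInfos();
}

// Rebuilds the result on every call: equal by value, but a new object
// each time, so an identity check fails.
class FreshSource implements InfoSource {
  public List<String> getFieldInfos() {
    return Collections.singletonList("field1");
  }
}

// Builds once and always hands back the same cached object.
class CachingSource implements InfoSource {
  private final List<String> cached = Collections.singletonList("field1");
  public List<String> getFieldInfos() {
    return cached;
  }
}

public class IdentityCheckDemo {
  // Mirrors the proposed check: reference equality, not equals().
  static boolean returnsSameInstance(InfoSource r) {
    return r.getFieldInfos() == r.getFieldInfos();
  }
}
```

The point of using `!=` rather than `!equals(...)` is exactly this distinction: the check is about avoiding repeated construction, not about value equality.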

 

*2.* Adds a FieldInfos.EMPTY that can be used in a handful of places

 
{code:java}
public final static FieldInfos EMPTY = new FieldInfos(new FieldInfo[0]);
{code}
There are several places in the Lucene/Solr tests that were creating empty 
instances of FieldInfos, which caused the check in #1 to fail.  This fixes 
those failures and cleans up the code a bit.

*3.* Fixes a few LeafReader implementations that were not caching FieldInfos

Specifically:
 * *MemoryIndex.MemoryIndexReader* - The constructor was already looping over 
the fields, so it seemed natural to create the FieldInfos at that time.
 * *SlowCompositeReaderWrapper* - This was the one causing me trouble.  I've 
moved the caching of FieldInfos from SolrIndexSearcher to 
SlowCompositeReaderWrapper.
 * *CollapsingQParserPlugin.ReaderWrapper* - getFieldInfos() is called twice 
immediately after this is constructed.
 * *ExpandComponent.ReaderWrapper* - getFieldInfos() is called twice 
immediately after this is constructed.
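The pattern these fixes apply can be sketched as compute-once lazy caching. This is a hedged sketch with illustrative names, not the actual Lucene/Solr signatures:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of the compute-once caching pattern applied to the
// reader wrappers above; names are hypothetical, not Lucene's real API.
class ReaderWrapperSketch {
  // Counts how many times the expensive build actually runs (demo only).
  static final AtomicInteger buildCount = new AtomicInteger();

  private volatile Object fieldInfos; // stands in for FieldInfos

  private static Object buildFieldInfos() {
    buildCount.incrementAndGet();
    return new Object(); // pretend this merges per-field metadata
  }

  // Double-checked lazy initialization: the expensive build runs at most
  // once per wrapper, and every caller sees the same instance.
  Object getFieldInfos() {
    Object local = fieldInfos;
    if (local == null) {
      synchronized (this) {
        local = fieldInfos;
        if (local == null) {
          local = buildFieldInfos();
          fieldInfos = local;
        }
      }
    }
    return local;
  }
}
```

For readers whose fields are already known at construction time (as in MemoryIndexReader), building eagerly in the constructor is even simpler and avoids the synchronization entirely.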

 

*4.* Minor Solr tweak to avoid calling SolrIndexSearcher.getSlowAtomicReader in 
FacetFieldProcessorByHashDV.  This change is now optional since 
SlowCompositeReaderWrapper caches FieldInfos.

 

As suggested by [~dsmiley] this takes the place of SOLR-12878 since it touches 
some Lucene code.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10.0.1) - Build # 23142 - Unstable!

2018-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23142/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseSerialGC

4 tests failed.
FAILED:  org.apache.solr.cloud.TestTlogReplica.testRecovery

Error Message:
Can not find doc 7 in https://127.0.0.1:46813/solr

Stack Trace:
java.lang.AssertionError: Can not find doc 7 in https://127.0.0.1:46813/solr
at 
__randomizedtesting.SeedInfo.seed([DFB3AE3EA9DD7DC1:1E43D792848DB766]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.TestTlogReplica.checkRTG(TestTlogReplica.java:902)
at 
org.apache.solr.cloud.TestTlogReplica.testRecovery(TestTlogReplica.java:567)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.cloud.TestTlogReplica.testRecovery

Error Message:
Can not find doc 7 in https://127.0.0.1:45147/solr

Stack Trace:

[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-11) - Build # 868 - Still Unstable!

2018-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/868/
Java: 64bit/jdk-11 -XX:-UseCompressedOops -XX:+UseSerialGC

14 tests failed.
FAILED:  
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslAndNoClientAuth

Error Message:
Could not find collection:second_collection

Stack Trace:
java.lang.AssertionError: Could not find collection:second_collection
at 
__randomizedtesting.SeedInfo.seed([EEE6DF4DD418A594:125C0B792C38145E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkCreateCollection(TestMiniSolrCloudClusterSSL.java:263)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithCollectionCreations(TestMiniSolrCloudClusterSSL.java:249)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithNodeReplacement(TestMiniSolrCloudClusterSSL.java:157)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslAndNoClientAuth(TestMiniSolrCloudClusterSSL.java:121)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Updated] (SOLR-12954) facet.pivot refinement bug when facet.sort=index and mincount > 2*numShards

2018-11-02 Thread Hoss Man (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-12954:

Description: 
While testing out SOLR-7804 i discovered a failure in TestCloudPivotFacet that 
indicates a problem with the refinement of (nested?) pivot facets when 
{{facet.sort=index}} and {{facet.pivot.mincount > 1}}


  was:
While testing out SOLR-7804 i discovered a failure in TestCloudPivotFacet that 
indicates a problem with the refinement of (nested?) pivot facets when 
{{facet.sort=index}} and {{facet.pivot.mincount > 2*numShards}}



> facet.pivot refinement bug when facet.sort=index and mincount > 2*numShards
> ---
>
> Key: SOLR-12954
> URL: https://issues.apache.org/jira/browse/SOLR-12954
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Fix For: 7.6, master (8.0)
>
>
> While testing out SOLR-7804 i discovered a failure in TestCloudPivotFacet 
> that indicates a problem with the refinement of (nested?) pivot facets when 
> {{facet.sort=index}} and {{facet.pivot.mincount > 1}}






[jira] [Assigned] (SOLR-12954) facet.pivot refinement bug when facet.sort=index and mincount > 2*numShards

2018-11-02 Thread Hoss Man (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reassigned SOLR-12954:
---

Assignee: Hoss Man

> facet.pivot refinement bug when facet.sort=index and mincount > 2*numShards
> ---
>
> Key: SOLR-12954
> URL: https://issues.apache.org/jira/browse/SOLR-12954
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Fix For: 7.6, master (8.0)
>
>
> While testing out SOLR-7804 i discovered a failure in TestCloudPivotFacet 
> that indicates a problem with the refinement of (nested?) pivot facets when 
> {{facet.sort=index}} and {{facet.pivot.mincount > 2*numShards}}






[jira] [Commented] (SOLR-12954) facet.pivot refinement bug when facet.sort=index and mincount > 2*numShards

2018-11-02 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673919#comment-16673919
 ] 

Hoss Man commented on SOLR-12954:
-

There were actually 2 different bugs relating to {{facet.sort=index}} and 
{{facet.mincount > 1}}, depending on whether {{facet.limit=-1}} or not.  Both 
are now fixed.

> facet.pivot refinement bug when facet.sort=index and mincount > 2*numShards
> ---
>
> Key: SOLR-12954
> URL: https://issues.apache.org/jira/browse/SOLR-12954
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> While testing out SOLR-7804 i discovered a failure in TestCloudPivotFacet 
> that indicates a problem with the refinement of (nested?) pivot facets when 
> {{facet.sort=index}} and {{facet.pivot.mincount > 2*numShards}}






[jira] [Resolved] (SOLR-12954) facet.pivot refinement bug when facet.sort=index and mincount > 2*numShards

2018-11-02 Thread Hoss Man (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-12954.
-
    Resolution: Fixed
Fix Version/s: master (8.0), 7.6

> facet.pivot refinement bug when facet.sort=index and mincount > 2*numShards
> ---
>
> Key: SOLR-12954
> URL: https://issues.apache.org/jira/browse/SOLR-12954
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Fix For: 7.6, master (8.0)
>
>
> While testing out SOLR-7804 i discovered a failure in TestCloudPivotFacet 
> that indicates a problem with the refinement of (nested?) pivot facets when 
> {{facet.sort=index}} and {{facet.pivot.mincount > 2*numShards}}






[jira] [Commented] (SOLR-12801) Fix the tests, remove BadApples and AwaitsFix annotations, improve env for test development.

2018-11-02 Thread Anshum Gupta (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673744#comment-16673744
 ] 

Anshum Gupta commented on SOLR-12801:
-

{quote}
  [junit4] Tests with failures [seed: 9F83A474D27F5826]:
  [junit4]   - org.apache.solr.cloud.ZkSolrClientTest.testMakeRootNode
  [junit4]   - org.apache.solr.cloud.ZkSolrClientTest (suite)
{quote}

> Fix the tests, remove BadApples and AwaitsFix annotations, improve env for 
> test development.
> 
>
> Key: SOLR-12801
> URL: https://issues.apache.org/jira/browse/SOLR-12801
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Critical
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> A single issue to counteract the single issue adding tons of annotations, the 
> continued addition of new flaky tests, and the continued addition of 
> flakiness to existing tests.
> Lots more to come.






[jira] [Commented] (LUCENE-8558) Adding NumericDocValuesFields is slowing down the indexing process significantly

2018-11-02 Thread Kranthi (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673879#comment-16673879
 ] 

Kranthi commented on LUCENE-8558:
-

[~dnhatn] I'm preparing the patch and will submit it soon.

> Adding NumericDocValuesFields is slowing down the indexing process 
> significantly
> 
>
> Key: LUCENE-8558
> URL: https://issues.apache.org/jira/browse/LUCENE-8558
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Affects Versions: 7.4, 7.5
>Reporter: Kranthi
>Priority: Major
>  Labels: patch, performance
> Fix For: 7.4, 7.5
>
>
> The indexing time for my ~2M documents has gone up significantly when I 
> started adding fields of type NumericDocValuesField.
>  
> Upon debugging found the bottleneck to be in the 
> PerFieldMergeState#FilterFieldInfos constructor. The contains check in the 
> below code snippet was the culprit. 
> {code:java}
> this.filteredNames = new HashSet<>(filterFields);
> this.filtered = new ArrayList<>(filterFields.size());
> for (FieldInfo fi : src) {
>   if (filterFields.contains(fi.name)) {
> {code}
> A simple change as below seems to have fixed my issue
> {code:java}
> this.filteredNames = new HashSet<>(filterFields);
> this.filtered = new ArrayList<>(filterFields.size());
> for (FieldInfo fi : src) {
>   if (this.filteredNames.contains(fi.name)) {
> {code}
>  






[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 200 - Still Unstable

2018-11-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/200/

3 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest.testInactiveShardCleanup

Error Message:
missing cleanup event: [CapturedEvent{timestamp=23547409088574798, 
stage=STARTED, actionName='null', event={   
"id":"53a83e69f960f4T4o7iqympx5gf90zda8sw2dhhr",   
"source":".scheduled_maintenance",   "eventTime":23547409086374132,   
"eventType":"SCHEDULED",   "properties":{ "actualEventTime":1541210008148,  
   "_enqueue_time_":23547409087417446}}, context={}, config={   
"trigger":".scheduled_maintenance",   "stage":[ "STARTED", "ABORTED",   
  "SUCCEEDED", "FAILED"],   "beforeAction":"inactive_shard_plan",   
"afterAction":"inactive_shard_plan",   
"class":"org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest$CapturingTriggerListener"},
 message='null'}, CapturedEvent{timestamp=23547409122946443, 
stage=BEFORE_ACTION, actionName='inactive_shard_plan', event={   
"id":"53a83e69f960f4T4o7iqympx5gf90zda8sw2dhhr",   
"source":".scheduled_maintenance",   "eventTime":23547409086374132,   
"eventType":"SCHEDULED",   "properties":{ "actualEventTime":1541210008148,  
   "_enqueue_time_":23547409087417446}}, 
context={properties.BEFORE_ACTION=[inactive_shard_plan, execute_plan, test], 
source=.scheduled_maintenance}, config={   "trigger":".scheduled_maintenance",  
 "stage":[ "STARTED", "ABORTED", "SUCCEEDED", "FAILED"],   
"beforeAction":"inactive_shard_plan",   "afterAction":"inactive_shard_plan",   
"class":"org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest$CapturingTriggerListener"},
 message='null'}, CapturedEvent{timestamp=23547409140538498, 
stage=AFTER_ACTION, actionName='inactive_shard_plan', event={   
"id":"53a83e69f960f4T4o7iqympx5gf90zda8sw2dhhr",   
"source":".scheduled_maintenance",   "eventTime":23547409086374132,   
"eventType":"SCHEDULED",   "properties":{ "actualEventTime":1541210008148,  
   "_enqueue_time_":23547409087417446}}, 
context={properties.BEFORE_ACTION=[inactive_shard_plan, execute_plan, test], 
source=.scheduled_maintenance, 
properties.inactive_shard_plan={staleLocks={ScheduledMaintenanceTriggerTest_collection1/staleShard-splitting={stateTimestamp=1541037208088794419,
 currentTimeNs=1541210008202395533, deltaSec=172800, ttlSec=20}}}, 
properties.AFTER_ACTION=[inactive_shard_plan, execute_plan, test]}, config={   
"trigger":".scheduled_maintenance",   "stage":[ "STARTED", "ABORTED",   
  "SUCCEEDED", "FAILED"],   "beforeAction":"inactive_shard_plan",   
"afterAction":"inactive_shard_plan",   
"class":"org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest$CapturingTriggerListener"},
 message='null'}, CapturedEvent{timestamp=23547409141817812, stage=SUCCEEDED, 
actionName='null', event={   "id":"53a83e69f960f4T4o7iqympx5gf90zda8sw2dhhr",   
"source":".scheduled_maintenance",   "eventTime":23547409086374132,   
"eventType":"SCHEDULED",   "properties":{ "actualEventTime":1541210008148,  
   "_enqueue_time_":23547409087417446}}, context={}, config={   
"trigger":".scheduled_maintenance",   "stage":[ "STARTED", "ABORTED",   
  "SUCCEEDED", "FAILED"],   "beforeAction":"inactive_shard_plan",   
"afterAction":"inactive_shard_plan",   
"class":"org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest$CapturingTriggerListener"},
 message='null'}, CapturedEvent{timestamp=23547414143038103, stage=STARTED, 
actionName='null', event={   "id":"53a83f9754d4b2T4o7iqympx5gf90zda8sw2dhhx",   
"source":".scheduled_maintenance",   "eventTime":23547414142309554,   
"eventType":"SCHEDULED",   "properties":{ "actualEventTime":1541210013205,  
   "_enqueue_time_":23547414142507483}}, context={}, config={   
"trigger":".scheduled_maintenance",   "stage":[ "STARTED", "ABORTED",   
  "SUCCEEDED", "FAILED"],   "beforeAction":"inactive_shard_plan",   
"afterAction":"inactive_shard_plan",   
"class":"org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest$CapturingTriggerListener"},
 message='null'}, CapturedEvent{timestamp=23547414143594742, 
stage=BEFORE_ACTION, actionName='inactive_shard_plan', event={   
"id":"53a83f9754d4b2T4o7iqympx5gf90zda8sw2dhhx",   
"source":".scheduled_maintenance",   "eventTime":23547414142309554,   
"eventType":"SCHEDULED",   "properties":{ "actualEventTime":1541210013205,  
   "_enqueue_time_":23547414142507483}}, 
context={properties.BEFORE_ACTION=[inactive_shard_plan, execute_plan, test], 
source=.scheduled_maintenance}, config={   "trigger":".scheduled_maintenance",  
 "stage":[ "STARTED", "ABORTED", "SUCCEEDED", "FAILED"],   
"beforeAction":"inactive_shard_plan",   "afterAction":"inactive_shard_plan",   
"class":"org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest$CapturingTriggerListener"},
 message='null'}, CapturedEvent{timestamp=23547414146346730, 
stage=AFTER_ACTION, 

[JENKINS] Lucene-Solr-repro - Build # 1842 - Unstable

2018-11-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1842/

[...truncated 40 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-master/2923/consoleText

[repro] Revision: 0cbefe8b25044a0f565c8491bda86626f2eddf5e

[repro] Repro line:  ant test  -Dtestcase=CloudSolrClientTest 
-Dtests.method=testRouting -Dtests.seed=B9AB022A421589A9 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=hr -Dtests.timezone=Asia/Seoul 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=CloudSolrClientTest 
-Dtests.method=testVersionsAreReturned -Dtests.seed=B9AB022A421589A9 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=hr 
-Dtests.timezone=Asia/Seoul -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
31d7dfe6b1b283e4678d1abd82af9eac680afe45
[repro] git fetch
[repro] git checkout 0cbefe8b25044a0f565c8491bda86626f2eddf5e

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]    solr/solrj
[repro]   CloudSolrClientTest
[repro] ant compile-test

[...truncated 2703 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.CloudSolrClientTest" -Dtests.showOutput=onerror  
-Dtests.seed=B9AB022A421589A9 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=hr -Dtests.timezone=Asia/Seoul -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 2915 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   3/5 failed: org.apache.solr.client.solrj.impl.CloudSolrClientTest
[repro] git checkout 31d7dfe6b1b283e4678d1abd82af9eac680afe45

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]


[JENKINS] Lucene-Solr-repro - Build # 1843 - Still Unstable

2018-11-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1843/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/205/consoleText

[repro] Revision: f9598f335b751d095a3728ba55f50b6753456040

[repro] Repro line:  ant test  -Dtestcase=CloudSolrClientTest 
-Dtests.method=testRouting -Dtests.seed=4514570E16EECC5E -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=sr-Latn-RS 
-Dtests.timezone=America/Santa_Isabel -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
31d7dfe6b1b283e4678d1abd82af9eac680afe45
[repro] git fetch
[repro] git checkout f9598f335b751d095a3728ba55f50b6753456040

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]    solr/solrj
[repro]   CloudSolrClientTest
[repro] ant compile-test

[...truncated 2716 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.CloudSolrClientTest" -Dtests.showOutput=onerror  
-Dtests.seed=4514570E16EECC5E -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=sr-Latn-RS 
-Dtests.timezone=America/Santa_Isabel -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 1114 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   1/5 failed: org.apache.solr.client.solrj.impl.CloudSolrClientTest
[repro] git checkout 31d7dfe6b1b283e4678d1abd82af9eac680afe45

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]


Re: Welcome Tim Allison as a Lucene/Solr committer

2018-11-02 Thread Tommaso Teofili
Welcome Tim!!!

Tommaso
On Fri, Nov 2, 2018 at 10:30 PM Steve Rowe 
wrote:
>
> Welcome Tim!
>
> Steve
>
> On Fri, Nov 2, 2018 at 12:20 PM Erick Erickson  
> wrote:
>>
>> Hi all,
>>
>> Please join me in welcoming Tim Allison as the latest Lucene/Solr committer!
>>
>> Congratulations and Welcome, Tim!
>>
>> It's traditional for you to introduce yourself with a brief bio.
>>
>> Erick
>>
>>




[jira] [Updated] (LUCENE-8558) Adding NumericDocValuesFields is slowing down the indexing process significantly

2018-11-02 Thread Kranthi (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kranthi updated LUCENE-8558:

Description: 
The indexing time for my ~2M documents has gone up significantly when I started 
adding fields of type NumericDocValuesField.

 

Upon debugging, I found the bottleneck to be in the 
PerFieldMergeState#FilterFieldInfos constructor.  The contains check in the 
code snippet below was the culprit.
{code:java}
this.filteredNames = new HashSet<>(filterFields);
this.filtered = new ArrayList<>(filterFields.size());
for (FieldInfo fi : src) {
  if (filterFields.contains(fi.name)) {
{code}
A simple change, shown below, seems to have fixed my issue:
{code:java}
this.filteredNames = new HashSet<>(filterFields);
this.filtered = new ArrayList<>(filterFields.size());
for (FieldInfo fi : src) {
  if (this.filteredNames.contains(fi.name)) {
{code}
 

  was:
The indexing time for my ~2M documents has gone up significantly when I started 
adding fields of type NumericDocValuesField.

 

Upon debugging found the bottleneck to be in the 
PerFieldMergeState#FilterFieldInfos constructor. The contains check in the 
below code snippet was the culprit. 
{code:java}
this.filteredNames = new HashSet<>(filterFields);
this.filtered = new ArrayList<>(filterFields.size());
for (FieldInfo fi : src) {
  if (filterFields.contains(fi.name)) {
{code}
A simple change to the following seems to have fixed my issue
{code:java}
this.filteredNames = new HashSet<>(filterFields);
this.filtered = new ArrayList<>(filterFields.size());
for (FieldInfo fi : src) {
  if (this.filteredNames.contains(fi.name)) {
{code}
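To make the one-line fix above concrete, here is a minimal, self-contained sketch of the same pattern (the class and method names below are hypothetical stand-ins, not the actual Lucene code): the constructor already builds a HashSet copy of filterFields, so the membership test should query that set (constant-time lookup) rather than the original Collection, which may be a List whose contains() scans linearly for every field.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class FilterSketch {
    // Mirrors the shape of the FilterFieldInfos loop: keep only the names
    // from src that appear in filterFields.
    static List<String> filter(Collection<String> filterFields, List<String> src) {
        Set<String> filteredNames = new HashSet<>(filterFields);
        List<String> filtered = new ArrayList<>(filterFields.size());
        for (String name : src) {
            // The fix: query the HashSet (O(1) per lookup) instead of the
            // original Collection, which may be a List (O(n) per lookup).
            if (filteredNames.contains(name)) {
                filtered.add(name);
            }
        }
        return filtered;
    }

    public static void main(String[] args) {
        List<String> src = Arrays.asList("id", "title", "price", "rating");
        // Behavior is identical either way; only the lookup cost changes.
        System.out.println(filter(Arrays.asList("id", "price"), src)); // prints [id, price]
    }
}
```

With one lookup per source field, the original form is O(fields × filterFields) overall, while the HashSet form is O(fields), which matches the reported slowdown growing with the number of DocValues fields.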
 


> Adding NumericDocValuesFields is slowing down the indexing process 
> significantly
> 
>
> Key: LUCENE-8558
> URL: https://issues.apache.org/jira/browse/LUCENE-8558
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Affects Versions: 7.4, 7.5
>Reporter: Kranthi
>Priority: Major
>  Labels: patch, performance
> Fix For: 7.4, 7.5
>
>
> The indexing time for my ~2M documents has gone up significantly when I 
> started adding fields of type NumericDocValuesField.
>  
> Upon debugging found the bottleneck to be in the 
> PerFieldMergeState#FilterFieldInfos constructor. The contains check in the 
> below code snippet was the culprit. 
> {code:java}
> this.filteredNames = new HashSet<>(filterFields);
> this.filtered = new ArrayList<>(filterFields.size());
> for (FieldInfo fi : src) {
>   if (filterFields.contains(fi.name)) {
> {code}
> A simple change as below seems to have fixed my issue
> {code:java}
> this.filteredNames = new HashSet<>(filterFields);
> this.filtered = new ArrayList<>(filterFields.size());
> for (FieldInfo fi : src) {
>   if (this.filteredNames.contains(fi.name)) {
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8558) Adding NumericDocValuesFields is slowing down the indexing process significantly

2018-11-02 Thread Nhat Nguyen (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673866#comment-16673866
 ] 

Nhat Nguyen commented on LUCENE-8558:
-

[~Chalasani] Good find. Would you like to submit a patch for this?

> Adding NumericDocValuesFields is slowing down the indexing process 
> significantly
> 
>
> Key: LUCENE-8558
> URL: https://issues.apache.org/jira/browse/LUCENE-8558
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Affects Versions: 7.4, 7.5
>Reporter: Kranthi
>Priority: Major
>  Labels: patch, performance
> Fix For: 7.4, 7.5
>
>
> The indexing time for my ~2M documents has gone up significantly when I 
> started adding fields of type NumericDocValuesField.
>  
> Upon debugging found the bottleneck to be in the 
> PerFieldMergeState#FilterFieldInfos constructor. The contains check in the 
> below code snippet was the culprit. 
> {code:java}
> this.filteredNames = new HashSet<>(filterFields);
> this.filtered = new ArrayList<>(filterFields.size());
> for (FieldInfo fi : src) {
>   if (filterFields.contains(fi.name)) {
> {code}
> A simple change as below seems to have fixed my issue
> {code:java}
> this.filteredNames = new HashSet<>(filterFields);
> this.filtered = new ArrayList<>(filterFields.size());
> for (FieldInfo fi : src) {
>   if (this.filteredNames.contains(fi.name)) {
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-11) - Build # 7602 - Still Unstable!

2018-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7602/
Java: 64bit/jdk-11 -XX:-UseCompressedOops -XX:+UseParallelGC

14 tests failed.
FAILED:  
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslWithInvalidPeerName

Error Message:
No live SolrServers available to handle this 
request:[https://127.0.0.1:55742/solr/second_collection, 
https://127.0.0.1:55591/solr/second_collection]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[https://127.0.0.1:55742/solr/second_collection, 
https://127.0.0.1:55591/solr/second_collection]
at 
__randomizedtesting.SeedInfo.seed([DCDF875759149ABB:8B6EC2EC99E865AA]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:983)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkCreateCollection(TestMiniSolrCloudClusterSSL.java:264)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithCollectionCreations(TestMiniSolrCloudClusterSSL.java:249)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslWithInvalidPeerName(TestMiniSolrCloudClusterSSL.java:185)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

Re: lucene-solr:jira/gradle: Parallel running tests

2018-11-02 Thread Gus Heck
I generally like git, but it's not clear to me exactly how GitHub is
integrated/mirrored with the Apache git infrastructure, whether the flow is
bidirectional, or whether listing my account in my Apache profile is
sufficient to grant me additional privileges there. (Perhaps the answer is here:
https://www.apache.org/dev/new-committers-guide.html#checkout-the-committers-only-subversion-module
, but that bit of karma seems not to have reached me yet.) So for the
time being I'll be using good old patches. The hint about solr.* prefixes
is good to know.

-Gus

On Fri, Nov 2, 2018 at 12:28 PM Erick Erickson 
wrote:

> It's not necessary to make a pull request, especially if the change doesn't
> need much collaboration. It's perfectly acceptable to make a patch and
> attach it to the JIRA like the old days. Whichever you're most
> comfortable with.
>
> You've probably inferred that I'm one of the folks that had to be
> dragged kicking and screaming into the modern Git days. ;)
>
> Erick
> On Fri, Nov 2, 2018 at 8:43 AM David Smiley 
> wrote:
> >
> > There's no real standard; just people doing what they like and observing
> what others do.
> >
> > Note that commits to branches following the pattern (lucene|solr).*
> (i.e. those that start with "lucene" or "solr") will *not* get an automated
> comment on corresponding JIRA issues.  All others continue to.  ASF infra
> got this done for us: https://issues.apache.org/jira/browse/INFRA-11198
> >
> > I recommend you start a branch with "solr" or "SOLR" if you are going to
> work on a Solr issue.  This way if you merge in changes from master, you
> won't spam the related issues with comments.
> >
> > ~ David
> >
> >
> > On Fri, Nov 2, 2018 at 7:46 AM Gus Heck  wrote:
> >>
> >> I'm curious about the branch naming here. I notice this is jira/ and
> there are several other such heads in the repository. What's the convention
> or significance here for this jira/ prefix?
> >>
> >> On Fri, Nov 2, 2018 at 6:12 AM  wrote:
> >>>
> >>> Repository: lucene-solr
> >>> Updated Branches:
> >>>   refs/heads/jira/gradle c9cb4fe96 -> 4a12fffb7
> >>>
> >>>
> >>> Parallel running tests
> >>>
> >>>
> >>> Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
> >>> Commit:
> http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/4a12fffb
> >>> Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/4a12fffb
> >>> Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/4a12fffb
> >>>
> >>> Branch: refs/heads/jira/gradle
> >>> Commit: 4a12fffb751078c2dfdf427617dd5ed9c52c7378
> >>> Parents: c9cb4fe
> >>> Author: Cao Manh Dat 
> >>> Authored: Fri Nov 2 10:11:47 2018 +
> >>> Committer: Cao Manh Dat 
> >>> Committed: Fri Nov 2 10:11:47 2018 +
> >>>
> >>> --
> >>>  build.gradle | 6 +-
> >>>  1 file changed, 5 insertions(+), 1 deletion(-)
> >>> --
> >>>
> >>>
> >>>
> http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/4a12fffb/build.gradle
> >>> --
> >>> diff --git a/build.gradle b/build.gradle
> >>> index df21ce8..27a351d 100644
> >>> --- a/build.gradle
> >>> +++ b/build.gradle
> >>> @@ -30,6 +30,10 @@ subprojects {
> >>> systemProperty 'java.security.egd',
> 'file:/dev/./urandom'
> >>> }
> >>> }
> >>> +   tasks.withType(Test) {
> >>> +   maxParallelForks =
> Runtime.runtime.availableProcessors() / 2
> >>> +   }
> >>> +
> >>>  }
> >>>
> >>>  // These versions are defined here because they represent
> >>> @@ -308,4 +312,4 @@ ext.library = [
> >>> xz: "org.tukaani:xz:1.8",
> >>> morfologik_ukrainian_search:
> "ua.net.nlp:morfologik-ukrainian-search:3.9.0",
> >>> xercesImpl: "xerces:xercesImpl:2.9.1"
> >>> -]
> >>> \ No newline at end of file
> >>> +]
> >>>
> >>
> >>
> >> --
> >> http://www.the111shift.com
> >
> > --
> > Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> > LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>

-- 
http://www.the111shift.com
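An aside on the gradle change in the commit quoted above, which sets maxParallelForks to half the available processor count: integer division can yield zero on a single-core machine, and a fork count must be at least 1, so a floor is a common safeguard. The guard below is an assumption for illustration, not part of the committed patch; it just shows the same computation in plain Java.

```java
public class ForkCount {
    public static void main(String[] args) {
        int procs = Runtime.getRuntime().availableProcessors();
        // procs / 2 is 0 when procs == 1; clamp to at least one fork.
        int maxParallelForks = Math.max(1, procs / 2);
        System.out.println(maxParallelForks >= 1); // prints true
    }
}
```

The equivalent guard in the build script would clamp the value the same way before assigning it to maxParallelForks.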


[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10.0.1) - Build # 23143 - Still Unstable!

2018-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23143/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimPolicyCloud.testCreateCollectionAddReplica

Error Message:
Timeout waiting for collection to become active Live Nodes: 
[127.0.0.1:10001_solr, 127.0.0.1:10004_solr, 127.0.0.1:1_solr, 
127.0.0.1:10002_solr, 127.0.0.1:10003_solr] Last available state: 
DocCollection(testCreateCollectionAddReplica//clusterstate.json/39)={   
"replicationFactor":"1",   "pullReplicas":"0",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"1",   "tlogReplicas":"0",   
"autoCreated":"true",   "policy":"c1",   "shards":{"shard1":{   
"replicas":{"core_node1":{   
"core":"testCreateCollectionAddReplica_shard1_replica_n1",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":10240,   "node_name":"127.0.0.1:10001_solr",   
"state":"active",   "type":"NRT",   
"INDEX.sizeInGB":9.5367431640625E-6,   "SEARCHER.searcher.numDocs":0}}, 
  "range":"8000-7fff",   "state":"active"}}}

Stack Trace:
java.lang.AssertionError: Timeout waiting for collection to become active
Live Nodes: [127.0.0.1:10001_solr, 127.0.0.1:10004_solr, 127.0.0.1:1_solr, 
127.0.0.1:10002_solr, 127.0.0.1:10003_solr]
Last available state: 
DocCollection(testCreateCollectionAddReplica//clusterstate.json/39)={
  "replicationFactor":"1",
  "pullReplicas":"0",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"1",
  "tlogReplicas":"0",
  "autoCreated":"true",
  "policy":"c1",
  "shards":{"shard1":{
  "replicas":{"core_node1":{
  "core":"testCreateCollectionAddReplica_shard1_replica_n1",
  "SEARCHER.searcher.maxDoc":0,
  "SEARCHER.searcher.deletedDocs":0,
  "INDEX.sizeInBytes":10240,
  "node_name":"127.0.0.1:10001_solr",
  "state":"active",
  "type":"NRT",
  "INDEX.sizeInGB":9.5367431640625E-6,
  "SEARCHER.searcher.numDocs":0}},
  "range":"8000-7fff",
  "state":"active"}}}
at 
__randomizedtesting.SeedInfo.seed([B61752C31E21C440:363737ED0F622CE6]:0)
at 
org.apache.solr.cloud.CloudTestUtils.waitForState(CloudTestUtils.java:70)
at 
org.apache.solr.cloud.autoscaling.sim.TestSimPolicyCloud.testCreateCollectionAddReplica(TestSimPolicyCloud.java:123)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
  

[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-12-ea+12) - Build # 3024 - Still Unstable!

2018-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3024/
Java: 64bit/jdk-12-ea+12 -XX:+UseCompressedOops -XX:+UseSerialGC

44 tests failed.
FAILED:  
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslWithInvalidPeerName

Error Message:
Could not find collection:second_collection

Stack Trace:
java.lang.AssertionError: Could not find collection:second_collection
at 
__randomizedtesting.SeedInfo.seed([4B65EC9308969544:1CD4A928C86A6A55]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkCreateCollection(TestMiniSolrCloudClusterSSL.java:263)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithCollectionCreations(TestMiniSolrCloudClusterSSL.java:249)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslWithInvalidPeerName(TestMiniSolrCloudClusterSSL.java:185)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Created] (LUCENE-8558) Adding NumericDocValuesFields is slowing down the indexing process significantly

2018-11-02 Thread Kranthi (JIRA)
Kranthi created LUCENE-8558:
---

 Summary: Adding NumericDocValuesFields is slowing down the 
indexing process significantly
 Key: LUCENE-8558
 URL: https://issues.apache.org/jira/browse/LUCENE-8558
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Affects Versions: 7.5, 7.4
Reporter: Kranthi
 Fix For: 7.5, 7.4


The indexing time for my ~2M documents has gone up significantly when I started 
adding fields of type NumericDocValuesField.

 

Upon debugging, I found the bottleneck to be in the 
PerFieldMergeState#FilterFieldInfos constructor; the contains check in the 
code snippet below was the culprit. 
{code:java}
this.filteredNames = new HashSet<>(filterFields);
this.filtered = new ArrayList<>(filterFields.size());
for (FieldInfo fi : src) {
  if (filterFields.contains(fi.name)) {
{code}
A simple change to the following seems to have fixed my issue
{code:java}
this.filteredNames = new HashSet<>(filterFields);
this.filtered = new ArrayList<>(filterFields.size());
for (FieldInfo fi : src) {
  if (this.filteredNames.contains(fi.name)) {
{code}
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-11) - Build # 3025 - Still Unstable!

2018-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3025/
Java: 64bit/jdk-11 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

32 tests failed.
FAILED:  org.apache.solr.cloud.DocValuesNotIndexedTest.testGroupingSorting

Error Message:
Documents in wrong order for field: intGSF expected:<[4]> but was:<[2]>

Stack Trace:
org.junit.ComparisonFailure: Documents in wrong order for field: intGSF 
expected:<[4]> but was:<[2]>
at 
__randomizedtesting.SeedInfo.seed([DD51025C22285A15:C3690A545E83E095]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at 
org.apache.solr.cloud.DocValuesNotIndexedTest.checkSortOrder(DocValuesNotIndexedTest.java:266)
at 
org.apache.solr.cloud.DocValuesNotIndexedTest.testGroupingSorting(DocValuesNotIndexedTest.java:246)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)


FAILED:  org.apache.solr.cloud.DocValuesNotIndexedTest.testGroupingSorting

Error Message:
Documents in wrong order for field: intGSF expected:<[4]> but 

[jira] [Commented] (SOLR-12954) facet.pivot refinement bug when facet.sort=index and mincount > 2*numShards

2018-11-02 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673916#comment-16673916
 ] 

ASF subversion and git services commented on SOLR-12954:


Commit be8f611db1cbaf51622d8af5cd6efced4e338968 in lucene-solr's branch 
refs/heads/branch_7x from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=be8f611 ]

SOLR-12954: fix facet.pivot refinement bugs when using facet.sort=index and 
facet.mincount>1

(cherry picked from commit c5ff4afe95001bc1baf29f64fb2406fd2ca3)

Conflicts:
solr/CHANGES.txt


> facet.pivot refinement bug when facet.sort=index and mincount > 2*numShards
> ---
>
> Key: SOLR-12954
> URL: https://issues.apache.org/jira/browse/SOLR-12954
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> While testing out SOLR-7804 i discovered a failure in TestCloudPivotFacet 
> that indicates a problem with the refinement of (nested?) pivot facets when 
> {{facet.sort=index}} and {{facet.pivot.mincount > 2*numShards}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12954) facet.pivot refinement bug when facet.sort=index and mincount > 2*numShards

2018-11-02 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673917#comment-16673917
 ] 

ASF subversion and git services commented on SOLR-12954:


Commit c5ff4afe95001bc1baf29f64fb2406fd2ca3 in lucene-solr's branch 
refs/heads/master from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c5ff4a4 ]

SOLR-12954: fix facet.pivot refinement bugs when using facet.sort=index and 
facet.mincount>1


> facet.pivot refinement bug when facet.sort=index and mincount > 2*numShards
> ---
>
> Key: SOLR-12954
> URL: https://issues.apache.org/jira/browse/SOLR-12954
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> While testing out SOLR-7804 i discovered a failure in TestCloudPivotFacet 
> that indicates a problem with the refinement of (nested?) pivot facets when 
> {{facet.sort=index}} and {{facet.pivot.mincount > 2*numShards}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12878) FacetFieldProcessorByHashDV is reconstructing FieldInfos on every instantiation

2018-11-02 Thread Tim Underwood (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673754#comment-16673754
 ] 

Tim Underwood commented on SOLR-12878:
--

Done: LUCENE-8557

> FacetFieldProcessorByHashDV is reconstructing FieldInfos on every 
> instantiation
> ---
>
> Key: SOLR-12878
> URL: https://issues.apache.org/jira/browse/SOLR-12878
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 7.5
>Reporter: Tim Underwood
>Assignee: David Smiley
>Priority: Major
>  Labels: performance
> Fix For: 7.6, master (8.0)
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The FacetFieldProcessorByHashDV constructor is currently calling:
> {noformat}
> FieldInfo fieldInfo = 
> fcontext.searcher.getSlowAtomicReader().getFieldInfos().fieldInfo(sf.getName());
> {noformat}
> This reconstructs FieldInfos each time.  Simply switching it to:
> {noformat}
> FieldInfo fieldInfo = 
> fcontext.searcher.getFieldInfos().fieldInfo(sf.getName());
> {noformat}
>  
> causes it to use the cached version of FieldInfos in the SolrIndexSearcher.
> On my index the FacetFieldProcessorByHashDV is 2-3 times slower than the 
> legacy facets without this fix.
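
A minimal sketch of the caching pattern this fix relies on (the class and method names below, such as CachedFieldInfosReader, are illustrative assumptions, not Solr's actual API): build FieldInfos once, then hand back the same instance on every getFieldInfos() call instead of reconstructing it per invocation.

```java
import java.util.function.Supplier;

/**
 * Minimal sketch (not Solr's actual code) of the caching pattern this fix
 * relies on: build FieldInfos once, then hand back the same instance on
 * every getFieldInfos() call instead of reconstructing it.
 */
class CachedFieldInfosReader {
    // Stand-in for Lucene's FieldInfos; any immutable object works for the demo.
    static final class FieldInfos {}

    private final Supplier<FieldInfos> builder;
    private volatile FieldInfos cached; // computed lazily, then reused

    CachedFieldInfosReader(Supplier<FieldInfos> builder) {
        this.builder = builder;
    }

    FieldInfos getFieldInfos() {
        FieldInfos fi = cached;
        if (fi == null) {
            synchronized (this) {
                if (cached == null) {
                    cached = builder.get(); // the expensive merge happens once
                }
                fi = cached;
            }
        }
        return fi;
    }

    public static void main(String[] args) {
        CachedFieldInfosReader r = new CachedFieldInfosReader(FieldInfos::new);
        // Repeated calls return the identical instance, as the issue requires.
        System.out.println(r.getFieldInfos() == r.getFieldInfos()); // true
    }
}
```

The double-checked locking on a volatile field keeps the fast path lock-free while guaranteeing the builder runs at most once.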






Re: Welcome Tim Allison as a Lucene/Solr committer

2018-11-02 Thread Nhat Nguyen
Welcome Tim!

On Fri, Nov 2, 2018 at 6:33 PM Tommaso Teofili 
wrote:

> Welcome Tim!!!
>
> Tommaso
> Il giorno ven 2 nov 2018 alle ore 22:30 Steve Rowe 
> ha scritto:
> >
> > Welcome Tim!
> >
> > Steve
> >
> > On Fri, Nov 2, 2018 at 12:20 PM Erick Erickson 
> wrote:
> >>
> >> Hi all,
> >>
> >> Please join me in welcoming Tim Allison as the latest Lucene/Solr
> committer!
> >>
> >> Congratulations and Welcome, Tim!
> >>
> >> It's traditional for you to introduce yourself with a brief bio.
> >>
> >> Erick
> >>
> >>
>
>
>


[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 891 - Unstable!

2018-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/891/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRouting

Error Message:
Error from server at http://127.0.0.1:58810/solr/collection1_shard2_replica_n3: 
Expected mime type application/octet-stream but got text/html.

Error 404 Can not find: /solr/collection1_shard2_replica_n3/update
HTTP ERROR 404
Problem accessing /solr/collection1_shard2_replica_n3/update.
Reason: Can not find: /solr/collection1_shard2_replica_n3/update
Powered by Jetty:// 9.4.11.v20180605

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:58810/solr/collection1_shard2_replica_n3: Expected 
mime type application/octet-stream but got text/html. 


Error 404 Can not find: /solr/collection1_shard2_replica_n3/update

HTTP ERROR 404
Problem accessing /solr/collection1_shard2_replica_n3/update. Reason:
Can not find: /solr/collection1_shard2_replica_n3/update
Powered by Jetty:// 9.4.11.v20180605




at 
__randomizedtesting.SeedInfo.seed([7CD241A51860C666:BE657DCD1B20361E]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRouting(CloudSolrClientTest.java:238)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 

[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1171 - Failure

2018-11-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1171/

No tests ran.

Build Log:
[...truncated 23410 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2436 links (1988 relative) to 3199 anchors in 248 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-8.0.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml



[jira] [Commented] (LUCENE-8550) Tessellator fails when filtering coplanar points when creating linked list

2018-11-02 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16672668#comment-16672668
 ] 

ASF subversion and git services commented on LUCENE-8550:
-

Commit bbb9f726e0985b725429a7e05eb480dd98df02db in lucene-solr's branch 
refs/heads/master from iverase
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bbb9f72 ]

LUCENE-8550: remove double entry for bug fixes in CHANGES.txt


> Tessellator fails when filtering coplanar points when creating linked list 
> ---
>
> Key: LUCENE-8550
> URL: https://issues.apache.org/jira/browse/LUCENE-8550
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/sandbox
>Affects Versions: 7.6, master (8.0)
>Reporter: Ignacio Vera
>Priority: Blocker
> Attachments: LUCENE-8550.patch
>
>
> Currently, coplanar points are filtered when creating the linked list on the 
> tessellator. The problem is the following: 
> if we have three coplanar points, the code is actually removing the last 
> point, when instead it should remove the middle one.
>  
>  
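
A hedged, self-contained sketch of the fix described above (not the Tessellator's actual code; in this 2D toy the issue's "coplanar" points are modeled as collinear vertices, and all names are illustrative): when three consecutive points lie on one line, the redundant vertex is the middle one, so that is the point to drop. Dropping the last point instead would change the shape of the polygon.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative sketch (not the Tessellator's actual code) of filtering
 * redundant vertices: for three consecutive collinear points, remove the
 * MIDDLE one and keep the endpoints of the run.
 */
class CollinearFilter {
    static boolean collinear(double[] a, double[] b, double[] c) {
        // zero cross product => the three points lie on one line
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]) == 0;
    }

    static List<double[]> filter(List<double[]> pts) {
        List<double[]> out = new ArrayList<>(pts);
        for (int i = 0; i + 2 < out.size(); ) {
            if (collinear(out.get(i), out.get(i + 1), out.get(i + 2))) {
                out.remove(i + 1); // drop the middle point, keep the endpoints
            } else {
                i++;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<double[]> pts = new ArrayList<>();
        pts.add(new double[] {0, 0});
        pts.add(new double[] {1, 1}); // middle of a collinear run: removed
        pts.add(new double[] {2, 2});
        pts.add(new double[] {2, 0});
        System.out.println(filter(pts).size()); // 3: (0,0), (2,2), (2,0)
    }
}
```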






[jira] [Commented] (SOLR-9952) S3BackupRepository

2018-11-02 Thread Mikhail Khludnev (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16672672#comment-16672672
 ] 

Mikhail Khludnev commented on SOLR-9952:


[~michael-newsrx], a storage gateway imposes the cost of running EC2. Glacier is 
rather atypical for this use (though it might work); quite often it's used for 
data transfer, but not literally as a backup.

> S3BackupRepository
> --
>
> Key: SOLR-9952
> URL: https://issues.apache.org/jira/browse/SOLR-9952
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Reporter: Mikhail Khludnev
>Priority: Major
> Attachments: 
> 0001-SOLR-9952-Added-dependencies-for-hadoop-amazon-integ.patch, 
> 0002-SOLR-9952-Added-integration-test-for-checking-backup.patch, Running Solr 
> on S3.pdf, core-site.xml.template
>
>
> I'd like to have a backup repository implementation allows to snapshot to AWS 
> S3






[jira] [Assigned] (LUCENE-8549) Tessellator should throw an error if all points were not processed

2018-11-02 Thread Ignacio Vera (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Vera reassigned LUCENE-8549:


Assignee: Ignacio Vera

> Tessellator should throw an error if all points were not processed
> --
>
> Key: LUCENE-8549
> URL: https://issues.apache.org/jira/browse/LUCENE-8549
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/sandbox
>Affects Versions: 7.6, master (8.0)
>Reporter: Ignacio Vera
>Assignee: Ignacio Vera
>Priority: Blocker
> Fix For: 7.6, master (8.0)
>
> Attachments: LUCENE-8549.patch
>
>
> Currently, in some situations when the tessellator has not successfully 
> processed all points in the polygon, it will still return an incomplete/wrong 
> tessellation. 
> For example the following code:
> {code:java}
> public void testInvalidPolygon()  throws Exception {
>   String wkt = "POLYGON((0 0, 1 1, 0 1, 1 0, 0 0))";
>   Polygon polygon = (Polygon)SimpleWKTShapeParser.parse(wkt);
>   expectThrows( IllegalArgumentException.class, () -> 
> {Tessellator.tessellate(polygon); });
> }{code}
> will fail, as the tessellator returns a wrong tessellation containing one 
> triangle.






[jira] [Assigned] (LUCENE-8534) Another case of Polygon tessellator going into an infinite loop

2018-11-02 Thread Ignacio Vera (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Vera reassigned LUCENE-8534:


Assignee: Ignacio Vera

> Another case of Polygon tessellator going into an infinite loop
> ---
>
> Key: LUCENE-8534
> URL: https://issues.apache.org/jira/browse/LUCENE-8534
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/sandbox
>Affects Versions: 7.6, master (8.0)
>Reporter: Ignacio Vera
>Assignee: Ignacio Vera
>Priority: Blocker
> Fix For: 7.6, master (8.0)
>
> Attachments: LUCENE-8534.patch, LUCENE-8534.patch, LUCENE-8534.patch, 
> bigPolygon.wkt, image-2018-10-19-12-25-07-849.png
>
>
> Related to LUCENE-8454, another case where the tessellator never returns when 
> processing a polygon.
>  
>  






[jira] [Commented] (LUCENE-8550) Tessellator fails when filtering coplanar points when creating linked list

2018-11-02 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16672669#comment-16672669
 ] 

ASF subversion and git services commented on LUCENE-8550:
-

Commit 1d447b75c4e22694fac34b104706533bf56c6689 in lucene-solr's branch 
refs/heads/branch_7x from iverase
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1d447b7 ]

LUCENE-8550: remove double entry for bug fixes in CHANGES.txt


> Tessellator fails when filtering coplanar points when creating linked list 
> ---
>
> Key: LUCENE-8550
> URL: https://issues.apache.org/jira/browse/LUCENE-8550
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/sandbox
>Affects Versions: 7.6, master (8.0)
>Reporter: Ignacio Vera
>Priority: Blocker
> Attachments: LUCENE-8550.patch
>
>
> Currently, coplanar points are filtered when creating the linked list on the 
> tessellator. The problem is the following: 
> if we have three coplanar points, the code is actually removing the last 
> point, when instead it should remove the middle one.
>  
>  






[jira] [Resolved] (LUCENE-8550) Tessellator fails when filtering coplanar points when creating linked list

2018-11-02 Thread Ignacio Vera (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Vera resolved LUCENE-8550.
--
   Resolution: Fixed
Fix Version/s: master (8.0)
   7.6

> Tessellator fails when filtering coplanar points when creating linked list 
> ---
>
> Key: LUCENE-8550
> URL: https://issues.apache.org/jira/browse/LUCENE-8550
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/sandbox
>Affects Versions: 7.6, master (8.0)
>Reporter: Ignacio Vera
>Priority: Blocker
> Fix For: 7.6, master (8.0)
>
> Attachments: LUCENE-8550.patch
>
>
> Currently, coplanar points are filtered when creating the linked list on the 
> tessellator. The problem is the following: 
> if we have three coplanar points, the code is actually removing the last 
> point, when instead it should remove the middle one.
>  
>  






[jira] [Assigned] (SOLR-12878) FacetFieldProcessorByHashDV is reconstructing FieldInfos on every instantiation

2018-11-02 Thread Mikhail Khludnev (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev reassigned SOLR-12878:
---

Assignee: (was: Mikhail Khludnev)

> FacetFieldProcessorByHashDV is reconstructing FieldInfos on every 
> instantiation
> ---
>
> Key: SOLR-12878
> URL: https://issues.apache.org/jira/browse/SOLR-12878
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 7.5
>Reporter: Tim Underwood
>Priority: Major
>  Labels: performance
> Fix For: 7.6, master (8.0)
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The FacetFieldProcessorByHashDV constructor is currently calling:
> {noformat}
> FieldInfo fieldInfo = 
> fcontext.searcher.getSlowAtomicReader().getFieldInfos().fieldInfo(sf.getName());
> {noformat}
> This reconstructs FieldInfos each time.  Simply switching it to:
> {noformat}
> FieldInfo fieldInfo = 
> fcontext.searcher.getFieldInfos().fieldInfo(sf.getName());
> {noformat}
>  
> causes it to use the cached version of FieldInfos in the SolrIndexSearcher.
> On my index the FacetFieldProcessorByHashDV is 2-3 times slower than the 
> legacy facets without this fix.






[jira] [Commented] (LUCENE-8534) Another case of Polygon tessellator going into an infinite loop

2018-11-02 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16672639#comment-16672639
 ] 

ASF subversion and git services commented on LUCENE-8534:
-

Commit 6ae9aa2a320420537f85908a899dbb995f7802e4 in lucene-solr's branch 
refs/heads/master from iverase
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6ae9aa2 ]

LUCENE-8534: Fix incorrect computation for triangles intersecting polygon edges 
in shape tessellation


> Another case of Polygon tessellator going into an infinite loop
> ---
>
> Key: LUCENE-8534
> URL: https://issues.apache.org/jira/browse/LUCENE-8534
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/sandbox
>Affects Versions: 7.6, master (8.0)
>Reporter: Ignacio Vera
>Priority: Blocker
> Attachments: LUCENE-8534.patch, LUCENE-8534.patch, LUCENE-8534.patch, 
> bigPolygon.wkt, image-2018-10-19-12-25-07-849.png
>
>
> Related to LUCENE-8454, another case where the tessellator never returns when 
> processing a polygon.
>  
>  






[jira] [Commented] (LUCENE-8534) Another case of Polygon tessellator going into an infinite loop

2018-11-02 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16672640#comment-16672640
 ] 

ASF subversion and git services commented on LUCENE-8534:
-

Commit e0133e93af0dfc13d31903f0a82f33573e0e438f in lucene-solr's branch 
refs/heads/branch_7x from iverase
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e0133e9 ]

LUCENE-8534: Fix incorrect computation for triangles intersecting polygon edges 
in shape tessellation


> Another case of Polygon tessellator going into an infinite loop
> ---
>
> Key: LUCENE-8534
> URL: https://issues.apache.org/jira/browse/LUCENE-8534
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/sandbox
>Affects Versions: 7.6, master (8.0)
>Reporter: Ignacio Vera
>Priority: Blocker
> Attachments: LUCENE-8534.patch, LUCENE-8534.patch, LUCENE-8534.patch, 
> bigPolygon.wkt, image-2018-10-19-12-25-07-849.png
>
>
> Related to LUCENE-8454, another case where the tessellator never returns when 
> processing a polygon.
>  
>  






[jira] [Commented] (LUCENE-8549) Tessellator should throw an error if all points were not processed

2018-11-02 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16672653#comment-16672653
 ] 

ASF subversion and git services commented on LUCENE-8549:
-

Commit 68fe3801ea6d5a1bfe8af5fd0646dc19fdd0f420 in lucene-solr's branch 
refs/heads/branch_7x from iverase
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=68fe380 ]

LUCENE-8549: Polygon tessellator throws an error if some parts of the shape 
could not be processed


> Tessellator should throw an error if all points were not processed
> --
>
> Key: LUCENE-8549
> URL: https://issues.apache.org/jira/browse/LUCENE-8549
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/sandbox
>Affects Versions: 7.6, master (8.0)
>Reporter: Ignacio Vera
>Priority: Blocker
> Attachments: LUCENE-8549.patch
>
>
> Currently, in some situations when the tessellator has not successfully 
> processed all points in the polygon, it will still return an incomplete/wrong 
> tessellation. 
> For example the following code:
> {code:java}
> public void testInvalidPolygon()  throws Exception {
>   String wkt = "POLYGON((0 0, 1 1, 0 1, 1 0, 0 0))";
>   Polygon polygon = (Polygon)SimpleWKTShapeParser.parse(wkt);
>   expectThrows( IllegalArgumentException.class, () -> 
> {Tessellator.tessellate(polygon); });
> }{code}
> will fail, as the tessellator returns a wrong tessellation containing one 
> triangle.






[jira] [Commented] (LUCENE-8549) Tessellator should throw an error if all points were not processed

2018-11-02 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16672651#comment-16672651
 ] 

ASF subversion and git services commented on LUCENE-8549:
-

Commit f7720aad82c6340558728c4fdc4dd716104f05f1 in lucene-solr's branch 
refs/heads/master from iverase
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f7720aa ]

LUCENE-8549: Polygon tessellator throws an error if some parts of the shape 
could not be processed


> Tessellator should throw an error if all points were not processed
> --
>
> Key: LUCENE-8549
> URL: https://issues.apache.org/jira/browse/LUCENE-8549
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/sandbox
>Affects Versions: 7.6, master (8.0)
>Reporter: Ignacio Vera
>Priority: Blocker
> Attachments: LUCENE-8549.patch
>
>
> Currently, in some situations when the tessellator has not successfully 
> processed all points in the polygon, it will still return an incomplete/wrong 
> tessellation. 
> For example the following code:
> {code:java}
> public void testInvalidPolygon()  throws Exception {
>   String wkt = "POLYGON((0 0, 1 1, 0 1, 1 0, 0 0))";
>   Polygon polygon = (Polygon)SimpleWKTShapeParser.parse(wkt);
>   expectThrows( IllegalArgumentException.class, () -> 
> {Tessellator.tessellate(polygon); });
> }{code}
> will fail, as the tessellator returns a wrong tessellation containing one 
> triangle.






[jira] [Resolved] (LUCENE-8549) Tessellator should throw an error if all points were not processed

2018-11-02 Thread Ignacio Vera (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Vera resolved LUCENE-8549.
--
   Resolution: Fixed
Fix Version/s: master (8.0)
   7.6

> Tessellator should throw an error if all points were not processed
> --
>
> Key: LUCENE-8549
> URL: https://issues.apache.org/jira/browse/LUCENE-8549
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/sandbox
>Affects Versions: 7.6, master (8.0)
>Reporter: Ignacio Vera
>Priority: Blocker
> Fix For: 7.6, master (8.0)
>
> Attachments: LUCENE-8549.patch
>
>
> Currently, in some situations when the tessellator has not successfully 
> processed all points in the polygon, it will still return an incomplete/wrong 
> tessellation. 
> For example the following code:
> {code:java}
> public void testInvalidPolygon()  throws Exception {
>   String wkt = "POLYGON((0 0, 1 1, 0 1, 1 0, 0 0))";
>   Polygon polygon = (Polygon)SimpleWKTShapeParser.parse(wkt);
>   expectThrows( IllegalArgumentException.class, () -> 
> {Tessellator.tessellate(polygon); });
> }{code}
> will fail, as the tessellator returns a wrong tessellation containing one 
> triangle.






[jira] [Resolved] (SOLR-12739) Make autoscaling policy based replica placement the default strategy for placing replicas

2018-11-02 Thread Shalin Shekhar Mangar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-12739.
--
Resolution: Fixed

> Make autoscaling policy based replica placement the default strategy for 
> placing replicas
> -
>
> Key: SOLR-12739
> URL: https://issues.apache.org/jira/browse/SOLR-12739
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-12739.patch, SOLR-12739.patch, SOLR-12739.patch, 
> SOLR-12739.patch, SOLR-12739.patch
>
>
> Today the default placement strategy is the same one used since Solr 4.x 
> which is to select nodes on a round robin fashion. I propose to make the 
> autoscaling policy based replica placement as the default policy for placing 
> replicas.
> This is related to SOLR-12648 where even though we have default cluster 
> preferences, we don't use them unless a policy is also configured.






[jira] [Commented] (LUCENE-8540) Geo3d quantization test failure for MAX/MIN encoding values

2018-11-02 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16672612#comment-16672612
 ] 

ASF subversion and git services commented on LUCENE-8540:
-

Commit 07b93a97f04ea6738962810d606ef16f0c42d1a8 in lucene-solr's branch 
refs/heads/master from iverase
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=07b93a9 ]

LUCENE-8540: Better handling of min/max values for Geo3d encoding


> Geo3d quantization test failure for MAX/MIN encoding values
> ---
>
> Key: LUCENE-8540
> URL: https://issues.apache.org/jira/browse/LUCENE-8540
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Ignacio Vera
>Priority: Major
> Attachments: LUCENE-8540.patch
>
>
> Here is a reproducible error:
> {code:java}
> 08:45:21[junit4] Suite: org.apache.lucene.spatial3d.TestGeo3DPoint
> 08:45:21[junit4] IGNOR/A 0.01s J1 | TestGeo3DPoint.testRandomBig
> 08:45:21[junit4]> Assumption #1: 'nightly' test group is disabled 
> (@Nightly())
> 08:45:21[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestGeo3DPoint -Dtests.method=testQuantization 
> -Dtests.seed=4CB20CF248F6211 -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=ga-IE -Dtests.timezone=America/Bogota -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
> 08:45:21[junit4] ERROR   0.20s J1 | TestGeo3DPoint.testQuantization <<<
> 08:45:21[junit4]> Throwable #1: java.lang.IllegalArgumentException: 
> value=-1.0011188543037526 is out-of-bounds (less than than WGS84's 
> -planetMax=-1.0011188539924791)
> 08:45:21[junit4]> at 
> __randomizedtesting.SeedInfo.seed([4CB20CF248F6211:32220FD9326E7F33]:0)
> 08:45:21[junit4]> at 
> org.apache.lucene.spatial3d.Geo3DUtil.encodeValue(Geo3DUtil.java:56)
> 08:45:21[junit4]> at 
> org.apache.lucene.spatial3d.TestGeo3DPoint.testQuantization(TestGeo3DPoint.java:1228)
> 08:45:21[junit4]> at java.lang.Thread.run(Thread.java:748)
> 08:45:21[junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
> {id=PostingsFormat(name=LuceneVarGapDocFreqInterval)}, 
> docValues:{id=DocValuesFormat(name=Asserting), 
> point=DocValuesFormat(name=Lucene70)}, maxPointsInLeafNode=659, 
> maxMBSortInHeap=6.225981846119071, sim=RandomSimilarity(queryNorm=false): {}, 
> locale=ga-IE, timezone=America/Bogota
> 08:45:21[junit4]   2> NOTE: Linux 2.6.32-754.6.3.el6.x86_64 amd64/Oracle 
> Corporation 1.8.0_181 
> (64-bit)/cpus=16,threads=1,free=466116320,total=536346624
> 08:45:21[junit4]   2> NOTE: All tests run in this JVM: [GeoPointTest, 
> RandomGeoPolygonTest, TestGeo3DPoint]
> 08:45:21[junit4] Completed [18/18 (1!)] on J1 in 19.83s, 14 tests, 1 
> error, 1 skipped <<< FAILURES!{code}
>  
> It seems this test fails if encoding = Geo3DUtil.MIN_ENCODED_VALUE or 
> encoding = Geo3DUtil.MAX_ENCODED_VALUE.
> It is related to https://issues.apache.org/jira/browse/LUCENE-7327
>  
>  
>  
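The boundary problem can be illustrated with a minimal, self-contained quantization sketch. All names here (QuantizationSketch, encode, decode, DECODE) are hypothetical, not the actual Geo3DUtil code: values in [-planetMax, planetMax] are mapped onto a fixed-point integer range, and the extreme cells must decode back to in-range values or a round trip through the min/max encoding trips the bounds check seen in the log above.

```java
public class QuantizationSketch {
    // WGS84 planetMax, as printed in the log above
    static final double PLANET_MAX = 1.0011188539924791;
    // cell size: the full range [-PLANET_MAX, PLANET_MAX] split into 2^32 cells
    static final double DECODE = PLANET_MAX * 2.0 / (1L << 32);

    static int encode(double x) {
        if (x < -PLANET_MAX || x > PLANET_MAX) {
            throw new IllegalArgumentException("value=" + x + " is out-of-bounds");
        }
        long cell = (long) Math.floor(x / DECODE);
        // clamp the top boundary: floor(PLANET_MAX / DECODE) is 2^31, which is
        // one past Integer.MAX_VALUE, so it must be pulled back into int range
        if (cell > Integer.MAX_VALUE) {
            cell = Integer.MAX_VALUE;
        }
        return (int) cell;
    }

    static double decode(int cell) {
        // decode to the middle of the cell so both extremes round-trip in range
        return (cell + 0.5) * DECODE;
    }

    public static void main(String[] args) {
        if (decode(encode(PLANET_MAX)) > PLANET_MAX
                || decode(encode(-PLANET_MAX)) < -PLANET_MAX) {
            throw new AssertionError("round-trip escaped [-planetMax, planetMax]");
        }
        System.out.println("extremes round-trip in range");
    }
}
```

Without the clamp and the half-cell decode, the largest encoded value would decode to a value slightly outside planetMax, which is exactly the shape of the IllegalArgumentException quoted above.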



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8540) Geo3d quantization test failure for MAX/MIN encoding values

2018-11-02 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16672613#comment-16672613
 ] 

ASF subversion and git services commented on LUCENE-8540:
-

Commit e3b2eb2db0657fc8636dc030ca28868d0836587b in lucene-solr's branch 
refs/heads/branch_7x from iverase
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e3b2eb2 ]

LUCENE-8540: Better handling of min/max values for Geo3d encoding


> Geo3d quantization test failure for MAX/MIN encoding values
> ---
>
> Key: LUCENE-8540
> URL: https://issues.apache.org/jira/browse/LUCENE-8540
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Ignacio Vera
>Priority: Major
> Attachments: LUCENE-8540.patch
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8540) Geo3d quantization test failure for MAX/MIN encoding values

2018-11-02 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16672614#comment-16672614
 ] 

ASF subversion and git services commented on LUCENE-8540:
-

Commit 8dd066ebdeac74f10475c9863801cedc3c5b8c8e in lucene-solr's branch 
refs/heads/branch_6x from iverase
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8dd066e ]

LUCENE-8540: Better handling of min/max values for Geo3d encoding


> Geo3d quantization test failure for MAX/MIN encoding values
> ---
>
> Key: LUCENE-8540
> URL: https://issues.apache.org/jira/browse/LUCENE-8540
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Ignacio Vera
>Priority: Major
> Attachments: LUCENE-8540.patch
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8550) Tessellator fails when filtering coplanar points when creating linked list

2018-11-02 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16672665#comment-16672665
 ] 

ASF subversion and git services commented on LUCENE-8550:
-

Commit aa2fc96ee7dbe78789961f9e205c3c516f3e08cf in lucene-solr's branch 
refs/heads/branch_7x from iverase
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=aa2fc96 ]

LUCENE-8550: Fix filtering of coplanar points when creating linked list on 
polygon tesselator


> Tessellator fails when filtering coplanar points when creating linked list 
> ---
>
> Key: LUCENE-8550
> URL: https://issues.apache.org/jira/browse/LUCENE-8550
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/sandbox
>Affects Versions: 7.6, master (8.0)
>Reporter: Ignacio Vera
>Priority: Blocker
> Attachments: LUCENE-8550.patch
>
>
> Currently, coplanar points are filtered while the tessellator builds its 
> linked list. The problem is that, given three consecutive coplanar points, 
> the code removes the last point when it should remove the middle one.
>  
>  
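A minimal sketch of the fix described above (hypothetical names, 2D for simplicity; not the actual Tessellator code): when appending a point makes the last two emitted points collinear with it, the middle point is the redundant one and is the one dropped.

```java
import java.util.ArrayList;
import java.util.List;

public class CoplanarFilterSketch {
    // twice the signed area of triangle (a, b, c); zero means collinear
    static double area2(double[] a, double[] b, double[] c) {
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]);
    }

    static List<double[]> filterCollinear(List<double[]> pts) {
        List<double[]> out = new ArrayList<>();
        for (double[] p : pts) {
            // while the new point is collinear with the last two emitted
            // points, drop the MIDDLE one (the last point currently in 'out'),
            // not the new point: dropping p would change the polygon's shape
            while (out.size() >= 2
                    && area2(out.get(out.size() - 2), out.get(out.size() - 1), p) == 0) {
                out.remove(out.size() - 1);
            }
            out.add(p);
        }
        return out;
    }

    public static void main(String[] args) {
        List<double[]> pts = new ArrayList<>();
        pts.add(new double[] {0, 0});
        pts.add(new double[] {1, 0}); // middle of a collinear triple
        pts.add(new double[] {2, 0});
        pts.add(new double[] {2, 1});
        System.out.println(filterCollinear(pts).size()); // (1, 0) was dropped
    }
}
```

Removing the last point instead would have kept (1, 0) and discarded (2, 0), producing a different boundary.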



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8550) Tessellator fails when filtering coplanar points when creating linked list

2018-11-02 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16672664#comment-16672664
 ] 

ASF subversion and git services commented on LUCENE-8550:
-

Commit ae9185f7d82d04a0bde6743dd6f8d009d0271bb7 in lucene-solr's branch 
refs/heads/master from iverase
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ae9185f ]

LUCENE-8550: Fix filtering of coplanar points when creating linked list on 
polygon tesselator


> Tessellator fails when filtering coplanar points when creating linked list 
> ---
>
> Key: LUCENE-8550
> URL: https://issues.apache.org/jira/browse/LUCENE-8550
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/sandbox
>Affects Versions: 7.6, master (8.0)
>Reporter: Ignacio Vera
>Priority: Blocker
> Attachments: LUCENE-8550.patch
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8556) Tesselator: Polygons may fail when using Morton optimisation

2018-11-02 Thread Ignacio Vera (JIRA)
Ignacio Vera created LUCENE-8556:


 Summary: Tesselator: Polygons may fail when using Morton 
optimisation
 Key: LUCENE-8556
 URL: https://issues.apache.org/jira/browse/LUCENE-8556
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/sandbox
Affects Versions: 7.6, master (8.0)
Reporter: Ignacio Vera
 Attachments: image-2018-11-02-08-48-12-898.png

I am seeing errors when processing complex polygons, and they go away if I 
disable the Morton optimisation.

In one of the cases I studied, the optimisation produces a triangle with a 
point of the polygon lying exactly on one of its edges (see the attached 
picture). With the optimisation disabled, such a triangle is never created.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: lucene-solr:jira/gradle: Parallel running tests

2018-11-02 Thread Gus Heck
I'm curious about the branch naming here. I notice this branch lives under
jira/ and there are several other such heads in the repository. What's the
convention or significance of the jira/ prefix?

On Fri, Nov 2, 2018 at 6:12 AM  wrote:

> Repository: lucene-solr
> Updated Branches:
>   refs/heads/jira/gradle c9cb4fe96 -> 4a12fffb7
>
>
> Parallel running tests
>
>
> Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
> Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/4a12fffb
> Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/4a12fffb
> Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/4a12fffb
>
> Branch: refs/heads/jira/gradle
> Commit: 4a12fffb751078c2dfdf427617dd5ed9c52c7378
> Parents: c9cb4fe
> Author: Cao Manh Dat 
> Authored: Fri Nov 2 10:11:47 2018 +
> Committer: Cao Manh Dat 
> Committed: Fri Nov 2 10:11:47 2018 +
>
> --
>  build.gradle | 6 +-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> --
>
>
>
> http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/4a12fffb/build.gradle
> --
> diff --git a/build.gradle b/build.gradle
> index df21ce8..27a351d 100644
> --- a/build.gradle
> +++ b/build.gradle
> @@ -30,6 +30,10 @@ subprojects {
> systemProperty 'java.security.egd',
> 'file:/dev/./urandom'
> }
> }
> +   tasks.withType(Test) {
> +   maxParallelForks = Runtime.runtime.availableProcessors() /
> 2
> +   }
> +
>  }
>
>  // These versions are defined here because they represent
> @@ -308,4 +312,4 @@ ext.library = [
> xz: "org.tukaani:xz:1.8",
> morfologik_ukrainian_search:
> "ua.net.nlp:morfologik-ukrainian-search:3.9.0",
> xercesImpl: "xerces:xercesImpl:2.9.1"
> -]
> \ No newline at end of file
> +]
>
>

-- 
http://www.the111shift.com


[JENKINS] Lucene-Solr-Tests-7.x - Build # 999 - Unstable

2018-11-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/999/

2 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.NodeLostTriggerTest.testListenerAcceptance

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([3485018B91CD6E84:25315B6C28447A72]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.autoscaling.NodeLostTriggerTest.testListenerAcceptance(NodeLostTriggerTest.java:265)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.autoscaling.NodeLostTriggerTest.testTrigger

Error Message:
[127.0.0.1:39710_solr] doesn't contain 127.0.0.1:40288_solr

Stack 

[jira] [Updated] (SOLR-12845) Add a default cluster policy

2018-11-02 Thread Shalin Shekhar Mangar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-12845:
-
Attachment: SOLR-12845.patch

> Add a default cluster policy
> 
>
> Key: SOLR-12845
> URL: https://issues.apache.org/jira/browse/SOLR-12845
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Shalin Shekhar Mangar
>Priority: Major
> Attachments: SOLR-12845.patch, SOLR-12845.patch
>
>
> [~varunthacker] commented on SOLR-12739:
> bq. We should also ship with some default policies - "Don't allow more than 
> one replica of a shard on the same JVM" , "Distribute cores across the 
> cluster evenly" , "Distribute replicas per collection across the nodes"
> This issue is about adding these defaults. I propose the following as default 
> cluster policy:
> {code}
> # Each shard cannot have more than one replica on the same node if possible
> {"replica": "<2", "shard": "#EACH", "node": "#ANY", "strict":false}
> # Each collection's replicas should be equally distributed amongst nodes
> {"replica": "#EQUAL", "node": "#ANY", "strict":false} 
> # All cores should be equally distributed amongst nodes
> {"cores": "#EQUAL", "node": "#ANY", "strict":false}
> {code}
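As an illustration of what the first rule above means (a hedged sketch, not Solr's actual policy engine; all names here are hypothetical): for each shard, no node should host two or more of that shard's replicas.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PolicySketch {
    // placements: shard name -> list of node names hosting a replica of it.
    // Returns "shard@node" entries that violate {"replica": "<2",
    // "shard": "#EACH", "node": "#ANY"}, i.e. nodes hosting 2+ replicas
    // of the same shard.
    static List<String> violations(Map<String, List<String>> placements) {
        List<String> out = new ArrayList<>();
        for (Map.Entry<String, List<String>> shard : placements.entrySet()) {
            Map<String, Integer> perNode = new HashMap<>();
            for (String node : shard.getValue()) {
                perNode.merge(node, 1, Integer::sum);
            }
            for (Map.Entry<String, Integer> n : perNode.entrySet()) {
                if (n.getValue() >= 2) {
                    out.add(shard.getKey() + "@" + n.getKey());
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, List<String>> placements = new HashMap<>();
        placements.put("shard1", List.of("node1", "node1", "node2"));
        placements.put("shard2", List.of("node1", "node2"));
        System.out.println(violations(placements)); // shard1 has 2 replicas on node1
    }
}
```

With "strict":false the real engine treats such a rule as a preference rather than a hard failure, so placements that cannot satisfy it are still allowed.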



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12845) Add a default cluster policy

2018-11-02 Thread Shalin Shekhar Mangar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16672964#comment-16672964
 ] 

Shalin Shekhar Mangar commented on SOLR-12845:
--

Updated patch that applies to latest master and fixes the failure in 
{{TestUtilizeNode}}

> Add a default cluster policy
> 
>
> Key: SOLR-12845
> URL: https://issues.apache.org/jira/browse/SOLR-12845
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Shalin Shekhar Mangar
>Priority: Major
> Attachments: SOLR-12845.patch, SOLR-12845.patch
>
>
> [~varunthacker] commented on SOLR-12739:
> bq. We should also ship with some default policies - "Don't allow more than 
> one replica of a shard on the same JVM" , "Distribute cores across the 
> cluster evenly" , "Distribute replicas per collection across the nodes"
> This issue is about adding these defaults. I propose the following as default 
> cluster policy:
> {code}
> # Each shard cannot have more than one replica on the same node if possible
> {"replica": "<2", "shard": "#EACH", "node": "#ANY", "strict":false}
> # Each collection's replicas should be equally distributed amongst nodes
> {"replica": "#EQUAL", "node": "#ANY", "strict":false} 
> # All cores should be equally distributed amongst nodes
> {"cores": "#EQUAL", "node": "#ANY", "strict":false}
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Gus Heck as Lucene/Solr committer

2018-11-02 Thread Karl Wright
Welcome!!
Karl

On Thu, Nov 1, 2018 at 9:53 PM Koji Sekiguchi 
wrote:

> Welcome Gus!
>
> Koji
>
> On 2018/11/01 21:22, David Smiley wrote:
> > Hi all,
> >
> > Please join me in welcoming Gus Heck as the latest Lucene/Solr committer!
> >
> > Congratulations and Welcome, Gus!
> >
> > Gus, it's traditional for you to introduce yourself with a brief bio.
> >
> > ~ David
> > --
> > Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> > LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Commented] (LUCENE-8554) Add new LatLonShapeLineQuery

2018-11-02 Thread Ignacio Vera (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16672829#comment-16672829
 ] 

Ignacio Vera commented on LUCENE-8554:
--

LGTM. I have a few minor comments about the Javadocs:

EdgeTree: the Javadocs still refer to Polygon2D.

Line2D: the Javadocs still refer to Polygon2D.

LatLonShapeLineQuery: I think the line describing how shapes are indexed is 
inaccurate.

LatLonShape: add Javadocs to the methods that create queries.

 

> Add new LatLonShapeLineQuery
> 
>
> Key: LUCENE-8554
> URL: https://issues.apache.org/jira/browse/LUCENE-8554
> Project: Lucene - Core
>  Issue Type: New Feature
>Affects Versions: 7.6, master (8.0)
>Reporter: Nicholas Knize
>Assignee: Nicholas Knize
>Priority: Blocker
> Attachments: LUCENE-8554.patch, LUCENE-8554.patch
>
>
> It's often useful to be able to query a shape index for documents that either 
> {{INTERSECT}} or are {{DISJOINT}} from a given {{LINESTRING}}. Occasionally 
> the linestring of interest may also have a distance component, which creates 
> a *buffered query* (often used in routing or shape snapping). This feature 
> first adds a new {{LatLonShapeLineQuery}} for querying {{LatLonShape}} 
> fields by arbitrary lines. A distance component can then be added in a future 
> issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1686 - Still Unstable

2018-11-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1686/

1 tests failed.
FAILED:  
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testVersionsAreReturned

Error Message:
Error from server at 
https://127.0.0.1:46279/solr/collection1_shard2_replica_n3: Expected mime type 
application/octet-stream but got text/html.Error 404 
Can not find: /solr/collection1_shard2_replica_n3/update  
HTTP ERROR 404 Problem accessing 
/solr/collection1_shard2_replica_n3/update. Reason: Can not find: 
/solr/collection1_shard2_replica_n3/updatehttp://eclipse.org/jetty;>Powered by Jetty:// 9.4.11.v20180605  
  

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at https://127.0.0.1:46279/solr/collection1_shard2_replica_n3: Expected 
mime type application/octet-stream but got text/html. 


Error 404 Can not find: 
/solr/collection1_shard2_replica_n3/update

HTTP ERROR 404
Problem accessing /solr/collection1_shard2_replica_n3/update. Reason:
Can not find: 
/solr/collection1_shard2_replica_n3/updatehttp://eclipse.org/jetty;>Powered by Jetty:// 9.4.11.v20180605




at 
__randomizedtesting.SeedInfo.seed([862947961D89B8FC:7EEFBEA38E902034]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:237)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testVersionsAreReturned(CloudSolrClientTest.java:725)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-8555) Add dateline crossing support to LatLonShapeBoundingBoxQuery

2018-11-02 Thread Ignacio Vera (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16672839#comment-16672839
 ] 

Ignacio Vera commented on LUCENE-8555:
--

LGTM. One minor Javadoc comment: the method LatLonPoint#newBoxQuery contains a 
TODO in its Javadocs that does not apply.

> Add dateline crossing support to LatLonShapeBoundingBoxQuery
> 
>
> Key: LUCENE-8555
> URL: https://issues.apache.org/jira/browse/LUCENE-8555
> Project: Lucene - Core
>  Issue Type: New Feature
>Affects Versions: 7.6, master (8.0)
>Reporter: Nicholas Knize
>Priority: Blocker
> Attachments: LUCENE-8555.patch
>
>
> Instead of rewriting into a {{BooleanQuery}}, {{LatLonShapeBoundingBoxQuery}} 
> should handle dateline crossing support directly in the {{IntersectVisitor}}. 
> This feature issue will add support for splitting a 
> {{LatLonShapeBoundingBoxQuery}} into an east and west box and comparing the 
> indexed {{LatLonShape}} fields against each. {{INTERSECTS}}, {{DISJOINT}}, 
> and {{WITHIN}} will all be handled by the {{LatLonShapeQuery}} 
> IntersectVisitor.
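The splitting described above can be sketched as follows (a hedged sketch with hypothetical names, not the patch's actual code; boxes are {minLat, maxLat, minLon, maxLon}, and a box with minLon > maxLon is taken to cross the dateline):

```java
public class DatelineSplitSketch {
    // Returns one box, or two (an eastern and a western half) when the input
    // crosses the dateline.
    static double[][] split(double minLat, double maxLat,
                            double minLon, double maxLon) {
        if (minLon <= maxLon) {
            // normal box: no split needed
            return new double[][] {{minLat, maxLat, minLon, maxLon}};
        }
        return new double[][] {
            {minLat, maxLat, minLon, 180.0},   // eastern half, up to the dateline
            {minLat, maxLat, -180.0, maxLon},  // western half, from the dateline
        };
    }

    public static void main(String[] args) {
        System.out.println(split(-10, 10, 170, -170).length); // crosses: 2 boxes
        System.out.println(split(-10, 10, -20, 20).length);   // normal: 1 box
    }
}
```

A query built this way matches when the indexed shape intersects either half, which is what lets the IntersectVisitor handle dateline crossing without rewriting into a BooleanQuery.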



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10.0.1) - Build # 3021 - Still Unstable!

2018-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3021/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseSerialGC

4 tests failed.
FAILED:  org.apache.solr.cloud.TestTlogReplica.testRecovery

Error Message:
Can not find doc 7 in https://127.0.0.1:39431/solr

Stack Trace:
java.lang.AssertionError: Can not find doc 7 in https://127.0.0.1:39431/solr
at 
__randomizedtesting.SeedInfo.seed([3D77588542DE8167:FC8721296F8E4BC0]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.TestTlogReplica.checkRTG(TestTlogReplica.java:902)
at 
org.apache.solr.cloud.TestTlogReplica.testRecovery(TestTlogReplica.java:567)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.cloud.TestTlogReplica.testRecovery

Error Message:
Can not find doc 7 in https://127.0.0.1:45593/solr

Stack Trace:

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-10.0.1) - Build # 7601 - Still Unstable!

2018-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7601/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseParallelGC

4 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.TriggerSetPropertiesIntegrationTest.testSetProperties

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([A38C70ECB1D78483:C8E8A7A502FA1107]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.cloud.autoscaling.TriggerSetPropertiesIntegrationTest.testSetProperties(TriggerSetPropertiesIntegrationTest.java:111)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
org.apache.solr.cloud.autoscaling.TriggerSetPropertiesIntegrationTest.testSetProperties

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([A38C70ECB1D78483:C8E8A7A502FA1107]:0)
at 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10.0.1) - Build # 23139 - Still Unstable!

2018-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23139/
Java: 64bit/jdk-10.0.1 -XX:+UseCompressedOops -XX:+UseSerialGC

6 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMixedBounds

Error Message:
failed to create testMixedBounds_collection Live Nodes: [127.0.0.1:10001_solr, 
127.0.0.1:1_solr] Last available state: 
DocCollection(testMixedBounds_collection//clusterstate.json/20)={   
"replicationFactor":"1",   "pullReplicas":"0",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"2",   
"autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0",   
"autoCreated":"true",   "shards":{ "shard2":{   "replicas":{ 
"core_node3":{   "core":"testMixedBounds_collection_shard2_replica_n3", 
  "SEARCHER.searcher.maxDoc":0,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":10240, 
  "node_name":"127.0.0.1:10001_solr",   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":9.5367431640625E-6,   
"SEARCHER.searcher.numDocs":0}, "core_node4":{   
"core":"testMixedBounds_collection_shard2_replica_n4",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":10240,   "node_name":"127.0.0.1:1_solr",   
"state":"active",   "type":"NRT",   
"INDEX.sizeInGB":9.5367431640625E-6,   "SEARCHER.searcher.numDocs":0}}, 
  "range":"0-7fff",   "state":"active"}, "shard1":{   
"replicas":{ "core_node1":{   
"core":"testMixedBounds_collection_shard1_replica_n1",   
"leader":"true",   "SEARCHER.searcher.maxDoc":0,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":10240, 
  "node_name":"127.0.0.1:1_solr",   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":9.5367431640625E-6,   
"SEARCHER.searcher.numDocs":0}, "core_node2":{   
"core":"testMixedBounds_collection_shard1_replica_n2",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":10240,   "node_name":"127.0.0.1:10001_solr",   
"state":"active",   "type":"NRT",   
"INDEX.sizeInGB":9.5367431640625E-6,   "SEARCHER.searcher.numDocs":0}}, 
  "range":"8000-",   "state":"active"}}}

Stack Trace:
java.lang.AssertionError: failed to create testMixedBounds_collection
Live Nodes: [127.0.0.1:10001_solr, 127.0.0.1:1_solr]
Last available state: 
DocCollection(testMixedBounds_collection//clusterstate.json/20)={
  "replicationFactor":"1",
  "pullReplicas":"0",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"2",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0",
  "autoCreated":"true",
  "shards":{
"shard2":{
  "replicas":{
"core_node3":{
  "core":"testMixedBounds_collection_shard2_replica_n3",
  "SEARCHER.searcher.maxDoc":0,
  "SEARCHER.searcher.deletedDocs":0,
  "INDEX.sizeInBytes":10240,
  "node_name":"127.0.0.1:10001_solr",
  "state":"active",
  "type":"NRT",
  "INDEX.sizeInGB":9.5367431640625E-6,
  "SEARCHER.searcher.numDocs":0},
"core_node4":{
  "core":"testMixedBounds_collection_shard2_replica_n4",
  "SEARCHER.searcher.maxDoc":0,
  "SEARCHER.searcher.deletedDocs":0,
  "INDEX.sizeInBytes":10240,
  "node_name":"127.0.0.1:1_solr",
  "state":"active",
  "type":"NRT",
  "INDEX.sizeInGB":9.5367431640625E-6,
  "SEARCHER.searcher.numDocs":0}},
  "range":"0-7fff",
  "state":"active"},
"shard1":{
  "replicas":{
"core_node1":{
  "core":"testMixedBounds_collection_shard1_replica_n1",
  "leader":"true",
  "SEARCHER.searcher.maxDoc":0,
  "SEARCHER.searcher.deletedDocs":0,
  "INDEX.sizeInBytes":10240,
  "node_name":"127.0.0.1:1_solr",
  "state":"active",
  "type":"NRT",
  "INDEX.sizeInGB":9.5367431640625E-6,
  "SEARCHER.searcher.numDocs":0},
"core_node2":{
  "core":"testMixedBounds_collection_shard1_replica_n2",
  "SEARCHER.searcher.maxDoc":0,
  "SEARCHER.searcher.deletedDocs":0,
  "INDEX.sizeInBytes":10240,
  "node_name":"127.0.0.1:10001_solr",
  "state":"active",
  "type":"NRT",
  "INDEX.sizeInGB":9.5367431640625E-6,
  "SEARCHER.searcher.numDocs":0}},
  "range":"8000-",
  "state":"active"}}}
at 
__randomizedtesting.SeedInfo.seed([A6D42AFFDB2B3647:AC57955296903D1D]:0)
at 
org.apache.solr.cloud.CloudTestUtils.waitForState(CloudTestUtils.java:70)
at 

Re: Lucene/Solr 7.6

2018-11-02 Thread Nicholas Knize
+1 for a 7.7 release to coincide with 8.0, which sounds like it will happen
around the January time frame.

If needed I can hold off on cutting the 7.6 branch and feature freezing
until Friday of next week. That would still give at least two weeks of
jenkins testing & bug fixing before a target release the last week of
November.

On Thu, Nov 1, 2018 at 8:40 PM Gus Heck  wrote:

> I think SOLR-12891 might want to get into 7.6
>
> On Thu, Nov 1, 2018 at 9:27 PM Erick Erickson 
> wrote:
>
>> Hmm, my off-the-cuff reaction is that this feels too early.
>>
>> In the back of my mind I had the 7.6 release roughly coinciding with
>> 8.0 to tie the 7x code line up in a bow. I suppose it doesn't really
>> matter if it is Solr 7.7 or later if we want something coincident with
>> 8.0 though.
>> On Thu, Nov 1, 2018 at 1:38 PM Nicholas Knize  wrote:
>> >
>> > Hi all,
>> >
>> > To follow up from our discussion on the 8.0 thread, I would like to cut
>> the 7.6 branch on either Tuesday or Wednesday of next week. Since this
>> implies feature freeze I went ahead and had a look at some of the issues
>> that are labeled for 7.6.
>> >
>> > It looks like we only have one active issue listed as a blocker for
>> Solr. The upgrade notes in SOLR-12927
>> >
>> > For Lucene we have five active issues (each with a patch provided)
>> listed as blockers targeted for 7.6.
>> >
>> > If there are any other issues that need to land before cutting the
>> branch, and they are not already labeled, please either mark them as
>> blockers accordingly or let me know prior to cutting the branch next
>> Tuesday or Wednesday.
>> >
>> > Thank you!
>> >
>> > - Nick
>> > --
>> >
>> > Nicholas Knize, Ph.D., GISP
>> > Geospatial Software Guy  |  Elasticsearch
>> > Apache Lucene Committer
>> > nkn...@apache.org
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>
>
> --
> http://www.the111shift.com
>
-- 

Nicholas Knize, Ph.D., GISP
Geospatial Software Guy  |  Elasticsearch
Apache Lucene Committer
nkn...@apache.org


[jira] [Commented] (SOLR-9952) S3BackupRepository

2018-11-02 Thread Mikhail Khludnev (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673181#comment-16673181
 ] 

Mikhail Khludnev commented on SOLR-9952:


Because that's how I understand the Storage Gateway. I might be missing 
something, but as I understand it, it's a VM which [one runs on 
EC2|https://aws.amazon.com/premiumsupport/knowledge-center/file-gateway-ec2/] 
(or on-prem, but that's irrelevant for the subject).  

> S3BackupRepository
> --
>
> Key: SOLR-9952
> URL: https://issues.apache.org/jira/browse/SOLR-9952
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Reporter: Mikhail Khludnev
>Priority: Major
> Attachments: 
> 0001-SOLR-9952-Added-dependencies-for-hadoop-amazon-integ.patch, 
> 0002-SOLR-9952-Added-integration-test-for-checking-backup.patch, Running Solr 
> on S3.pdf, core-site.xml.template
>
>
> I'd like to have a backup repository implementation that allows snapshotting 
> to AWS S3






[jira] [Commented] (LUCENE-8546) Fix ant beast to fail and succeed based on whether beasting actually fails or succeeds.

2018-11-02 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673193#comment-16673193
 ] 

Mark Miller commented on LUCENE-8546:
-

Here is a patch that I think is about ready for commit.

> Fix ant beast to fail and succeed based on whether beasting actually fails or 
> succeeds.
> ---
>
> Key: LUCENE-8546
> URL: https://issues.apache.org/jira/browse/LUCENE-8546
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Attachments: LUCENE-8546.patch, LUCENE-8546.patch
>
>







[jira] [Commented] (SOLR-12954) facet.pivot refinement bug when facet.sort=index and mincount > 2*numShards

2018-11-02 Thread Nicholas Knize (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673164#comment-16673164
 ] 

Nicholas Knize commented on SOLR-12954:
---

Sounds like we should label this as a blocker for 7.6 & 8.0?

> facet.pivot refinement bug when facet.sort=index and mincount > 2*numShards
> ---
>
> Key: SOLR-12954
> URL: https://issues.apache.org/jira/browse/SOLR-12954
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> While testing out SOLR-7804 I discovered a failure in TestCloudPivotFacet 
> that indicates a problem with the refinement of (nested?) pivot facets when 
> {{facet.sort=index}} and {{facet.pivot.mincount > 2*numShards}}






Re: Welcome Gus Heck as Lucene/Solr committer

2018-11-02 Thread Nicholas Knize
Nice to have you on board Gus!

On Fri, Nov 2, 2018 at 7:00 AM Karl Wright  wrote:

> Welcome!!
> Karl
>
> On Thu, Nov 1, 2018 at 9:53 PM Koji Sekiguchi 
> wrote:
>
>> Welcome Gus!
>>
>> Koji
>>
>> On 2018/11/01 21:22, David Smiley wrote:
>> > Hi all,
>> >
>> > Please join me in welcoming Gus Heck as the latest Lucene/Solr
>> committer!
>> >
>> > Congratulations and Welcome, Gus!
>> >
>> > Gus, it's traditional for you to introduce yourself with a brief bio.
>> >
>> > ~ David
>> > --
>> > Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
>> > LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
>> http://www.solrenterprisesearchserver.com
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>> --

Nicholas Knize, Ph.D., GISP
Geospatial Software Guy  |  Elasticsearch
Apache Lucene Committer
nkn...@apache.org


[jira] [Updated] (LUCENE-8546) Fix ant beast to fail and succeed based on whether beasting actually fails or succeeds.

2018-11-02 Thread Mark Miller (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated LUCENE-8546:

Attachment: LUCENE-8546.patch

> Fix ant beast to fail and succeed based on whether beasting actually fails or 
> succeeds.
> ---
>
> Key: LUCENE-8546
> URL: https://issues.apache.org/jira/browse/LUCENE-8546
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Attachments: LUCENE-8546.patch, LUCENE-8546.patch
>
>







[jira] [Commented] (LUCENE-8551) Purge unused FieldInfo on segment merge

2018-11-02 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673119#comment-16673119
 ] 

David Smiley commented on LUCENE-8551:
--

Agreed -- if this does not wind up happening automatically, it could be added 
to some other mechanism like SOLR-12259.  I'm not sure yet how much complexity 
it would add to the regular merge.  I'm also not yet sure how much performance 
degradation this is causing my employer... it remains to be measured.  Even 
then, it's a YMMV thing.

> Purge unused FieldInfo on segment merge
> ---
>
> Key: LUCENE-8551
> URL: https://issues.apache.org/jira/browse/LUCENE-8551
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Reporter: David Smiley
>Priority: Major
>
> If a field is effectively unused (no norms, terms index, term vectors, 
> docValues, stored value, points index), it will nonetheless hang around in 
> FieldInfos indefinitely.  It would be nice to be able to recognize an unused 
> FieldInfo and allow it to disappear after a merge (or two).
> SegmentMerger merges FieldInfos (from each segment) as nearly the first thing 
> it does, and only after that the different index parts, before it's known 
> what's "used" or not.  After writing, we theoretically know which fields are used or 
> not, though we're not doing any bookkeeping to track it.  Maybe we should 
> track the fields used during writing so we write a filtered merged fieldInfo 
> at the end instead of unfiltered up front?  Or perhaps upon reading a 
> segment, we make it cheap/easy for each index type (e.g. terms index, stored 
> fields, ...) to know which fields have data for the corresponding type.  
> Then, on a subsequent merge, we know up front to filter the FieldInfos.
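The bookkeeping proposed in the issue — track which fields actually receive data while writing the merged segment, then filter the merged field list at the end — can be sketched generically. The `UsedFieldTracker` class and all its names below are invented for illustration; the real change would live inside SegmentMerger and the per-format writers.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch: record each field that gets data during a merge,
// then drop unused entries from the merged field list afterwards.
public class UsedFieldTracker {
  private final Set<String> used = new HashSet<>();

  /** Called by a writer (terms, points, stored fields, ...) when it emits data. */
  void markUsed(String field) {
    used.add(field);
  }

  /** Keep only fields that were actually written during the merge. */
  List<String> filter(List<String> mergedFieldNames) {
    List<String> kept = new ArrayList<>();
    for (String f : mergedFieldNames) {
      if (used.contains(f)) {
        kept.add(f);
      }
    }
    return kept;
  }

  public static void main(String[] args) {
    UsedFieldTracker tracker = new UsedFieldTracker();
    tracker.markUsed("title"); // e.g. terms were written for "title"
    List<String> kept = tracker.filter(List.of("title", "legacy_unused"));
    if (!kept.equals(List.of("title"))) throw new AssertionError();
    System.out.println(kept); // prints "[title]"
  }
}
```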






[jira] [Commented] (SOLR-9952) S3BackupRepository

2018-11-02 Thread Michael Joyner (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673125#comment-16673125
 ] 

Michael Joyner commented on SOLR-9952:
--

Why does it require EC2? (Unless running SolrCloud via Amazon?)

> S3BackupRepository
> --
>
> Key: SOLR-9952
> URL: https://issues.apache.org/jira/browse/SOLR-9952
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Reporter: Mikhail Khludnev
>Priority: Major
> Attachments: 
> 0001-SOLR-9952-Added-dependencies-for-hadoop-amazon-integ.patch, 
> 0002-SOLR-9952-Added-integration-test-for-checking-backup.patch, Running Solr 
> on S3.pdf, core-site.xml.template
>
>
> I'd like to have a backup repository implementation that allows snapshotting 
> to AWS S3






Re: Lucene/Solr 7.6

2018-11-02 Thread Bram Van Dam
On 02/11/2018 15:41, Nicholas Knize wrote:
> If needed I can hold off on cutting the 7.6 branch and feature freezing
> until Friday of next week. That would still give at least two weeks of
> jenkins testing & bug fixing before a target release the last week of
> November.

If you're cutting 7.6 soon, could you be so kind as to have a look at
including SOLR-12953?

Thanks!

 - Bram


[JENKINS] Lucene-Solr-BadApples-NightlyTests-master - Build # 35 - Still Failing

2018-11-02 Thread Apache Jenkins Server
Build: 
https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-master/35/

6 tests failed.
FAILED:  org.apache.lucene.document.TestLatLonPolygonShapeQueries.testRandomBig

Error Message:
Java heap space

Stack Trace:
java.lang.OutOfMemoryError: Java heap space
at 
__randomizedtesting.SeedInfo.seed([47A662266473E970:C0F11FA9F52A95F0]:0)
at 
org.apache.lucene.util.bkd.HeapPointWriter.writePackedValue(HeapPointWriter.java:106)
at 
org.apache.lucene.util.bkd.HeapPointWriter.append(HeapPointWriter.java:127)
at org.apache.lucene.util.bkd.PointReader.split(PointReader.java:74)
at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1843)
at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1857)
at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1857)
at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1857)
at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1857)
at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1870)
at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1857)
at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1857)
at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1857)
at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1857)
at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1857)
at org.apache.lucene.util.bkd.BKDWriter.finish(BKDWriter.java:1022)
at 
org.apache.lucene.codecs.lucene60.Lucene60PointsWriter.writeField(Lucene60PointsWriter.java:131)
at 
org.apache.lucene.codecs.PointsWriter.mergeOneField(PointsWriter.java:62)
at org.apache.lucene.codecs.PointsWriter.merge(PointsWriter.java:191)
at 
org.apache.lucene.codecs.lucene60.Lucene60PointsWriter.merge(Lucene60PointsWriter.java:145)
at 
org.apache.lucene.codecs.asserting.AssertingPointsFormat$AssertingPointsWriter.merge(AssertingPointsFormat.java:142)
at 
org.apache.lucene.index.SegmentMerger.mergePoints(SegmentMerger.java:201)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:161)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4453)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4075)
at 
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2178)
at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:2011)
at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1962)
at 
org.apache.lucene.document.BaseLatLonShapeTestCase.indexRandomShapes(BaseLatLonShapeTestCase.java:226)
at 
org.apache.lucene.document.BaseLatLonShapeTestCase.verify(BaseLatLonShapeTestCase.java:192)
at 
org.apache.lucene.document.BaseLatLonShapeTestCase.doTestRandom(BaseLatLonShapeTestCase.java:173)
at 
org.apache.lucene.document.TestLatLonPolygonShapeQueries.testRandomBig(TestLatLonPolygonShapeQueries.java:104)


FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Java heap space

Stack Trace:
java.lang.OutOfMemoryError: Java heap space


FAILED:  org.apache.solr.cloud.RestartWhileUpdatingTest.test

Error Message:
There are still nodes recoverying - waited for 320 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 320 
seconds
at 
__randomizedtesting.SeedInfo.seed([E96E652457E36FC1:613A5AFEF91F0239]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:185)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:920)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1477)
at 
org.apache.solr.cloud.RestartWhileUpdatingTest.test(RestartWhileUpdatingTest.java:145)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 

[jira] [Commented] (LUCENE-8555) Add dateline crossing support to LatLonShapeBoundingBoxQuery

2018-11-02 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673451#comment-16673451
 ] 

ASF subversion and git services commented on LUCENE-8555:
-

Commit 31d7dfe6b1b283e4678d1abd82af9eac680afe45 in lucene-solr's branch 
refs/heads/master from [~nknize]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=31d7dfe ]

LUCENE-8555: Add dateline crossing support to LatLonShapeBoundingBoxQuery


> Add dateline crossing support to LatLonShapeBoundingBoxQuery
> 
>
> Key: LUCENE-8555
> URL: https://issues.apache.org/jira/browse/LUCENE-8555
> Project: Lucene - Core
>  Issue Type: New Feature
>Affects Versions: 7.6, master (8.0)
>Reporter: Nicholas Knize
>Priority: Blocker
> Attachments: LUCENE-8555.patch
>
>
> Instead of rewriting into a {{BooleanQuery}}, {{LatLonShapeBoundingBoxQuery}} 
> should handle dateline crossing support directly in the {{IntersectVisitor}}. 
> This feature issue will add support for splitting a 
> {{LatLonShapeBoundingBoxQuery}} into an east and west box and comparing the 
> indexed {{LatLonShape}} fields against each. {{INTERSECTS}}, {{DISJOINT}}, 
> and {{WITHIN}} will all be handled by the {{LatLonShapeQuery}} 
> IntersectVisitor.






[jira] [Commented] (LUCENE-8555) Add dateline crossing support to LatLonShapeBoundingBoxQuery

2018-11-02 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673456#comment-16673456
 ] 

ASF subversion and git services commented on LUCENE-8555:
-

Commit f9598f335b751d095a3728ba55f50b6753456040 in lucene-solr's branch 
refs/heads/branch_7x from [~nknize]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f9598f335 ]

LUCENE-8555: Add dateline crossing support to LatLonShapeBoundingBoxQuery


> Add dateline crossing support to LatLonShapeBoundingBoxQuery
> 
>
> Key: LUCENE-8555
> URL: https://issues.apache.org/jira/browse/LUCENE-8555
> Project: Lucene - Core
>  Issue Type: New Feature
>Affects Versions: 7.6, master (8.0)
>Reporter: Nicholas Knize
>Priority: Blocker
> Attachments: LUCENE-8555.patch
>
>
> Instead of rewriting into a {{BooleanQuery}}, {{LatLonShapeBoundingBoxQuery}} 
> should handle dateline crossing support directly in the {{IntersectVisitor}}. 
> This feature issue will add support for splitting a 
> {{LatLonShapeBoundingBoxQuery}} into an east and west box and comparing the 
> indexed {{LatLonShape}} fields against each. {{INTERSECTS}}, {{DISJOINT}}, 
> and {{WITHIN}} will all be handled by the {{LatLonShapeQuery}} 
> IntersectVisitor.






[jira] [Commented] (SOLR-12878) FacetFieldProcessorByHashDV is reconstructing FieldInfos on every instantiation

2018-11-02 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673477#comment-16673477
 ] 

David Smiley commented on SOLR-12878:
-

Excellent work Tim!  Let's not touch Lucene in this issue, okay?  Create a 
separate LUCENE issue for that if you like.  I don't think it's a big deal 
whether the FieldInfos returned is the same instance or not, provided that it's a fast call.

> FacetFieldProcessorByHashDV is reconstructing FieldInfos on every 
> instantiation
> ---
>
> Key: SOLR-12878
> URL: https://issues.apache.org/jira/browse/SOLR-12878
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 7.5
>Reporter: Tim Underwood
>Assignee: David Smiley
>Priority: Major
>  Labels: performance
> Fix For: 7.6, master (8.0)
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The FacetFieldProcessorByHashDV constructor is currently calling:
> {noformat}
> FieldInfo fieldInfo = 
> fcontext.searcher.getSlowAtomicReader().getFieldInfos().fieldInfo(sf.getName());
> {noformat}
> This reconstructs FieldInfos each time.  Simply switching it to:
> {noformat}
> FieldInfo fieldInfo = 
> fcontext.searcher.getFieldInfos().fieldInfo(sf.getName());
> {noformat}
>  
> causes it to use the cached version of FieldInfos in the SolrIndexSearcher.
> On my index the FacetFieldProcessorByHashDV is 2-3 times slower than the 
> legacy facets without this fix.
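
The caching the fix relies on can be sketched outside Solr as a compute-once
holder (a minimal sketch with hypothetical names, not the actual
SolrIndexSearcher code): build the expensive value a single time, then hand
back the same instance on every later call.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

// Compute-once cache sketch (hypothetical class, not the actual Solr code):
// the builder runs at most once; every later get() returns the same instance.
final class OnceCache<T> {
  private final Supplier<T> builder; // the expensive "rebuild FieldInfos" step
  private volatile T cached;         // published once, then reused

  OnceCache(Supplier<T> builder) {
    this.builder = builder;
  }

  T get() {
    T result = cached;
    if (result == null) {
      synchronized (this) {
        result = cached;
        if (result == null) {
          result = cached = builder.get(); // computed at most once
        }
      }
    }
    return result;
  }
}
```

With this shape in place, repeated lookups are cheap reads of an already-built
object, which is what lets a per-request constructor call the cached accessor
without rebuilding anything.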



For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Tim Allison as a Lucene/Solr committer

2018-11-02 Thread Alan Woodward
Congratulations and welcome, Tim!

> On 2 Nov 2018, at 16:20, Erick Erickson  wrote:
> 
> Hi all,
> 
> Please join me in welcoming Tim Allison as the latest Lucene/Solr committer!
> 
> Congratulations and Welcome, Tim!
> 
> It's traditional for you to introduce yourself with a brief bio.
> 
> Erick
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 





[jira] [Commented] (SOLR-7964) suggest.highlight=true does not work when using context filter query

2018-11-02 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673440#comment-16673440
 ] 

David Smiley commented on SOLR-7964:


I think I'm unclear why it's necessary to put the highlighted key into the 
regular key of LookupResult.  Instead, isn't it sufficient to modify 
SuggestComponent.toNamedList (line ~423) to look for a highlighted key if 
found?  If someone works with me on this, I can help get a solution committed.

> suggest.highlight=true does not work when using context filter query
> 
>
> Key: SOLR-7964
> URL: https://issues.apache.org/jira/browse/SOLR-7964
> Project: Solr
>  Issue Type: Improvement
>  Components: Suggester
>Affects Versions: 5.4
>Reporter: Arcadius Ahouansou
>Assignee: David Smiley
>Priority: Minor
>  Labels: suggester
> Attachments: SOLR-7964.patch, SOLR_7964.patch, SOLR_7964.patch
>
>
> When using the new suggester context filtering query param 
> {{suggest.contextFilterQuery}} introduced in SOLR-7888, the param 
> {{suggest.highlight=true}} has no effect.






[jira] [Assigned] (SOLR-12878) FacetFieldProcessorByHashDV is reconstructing FieldInfos on every instantiation

2018-11-02 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley reassigned SOLR-12878:
---

Assignee: David Smiley

> FacetFieldProcessorByHashDV is reconstructing FieldInfos on every 
> instantiation
> ---
>
> Key: SOLR-12878
> URL: https://issues.apache.org/jira/browse/SOLR-12878
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 7.5
>Reporter: Tim Underwood
>Assignee: David Smiley
>Priority: Major
>  Labels: performance
> Fix For: 7.6, master (8.0)
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The FacetFieldProcessorByHashDV constructor is currently calling:
> {noformat}
> FieldInfo fieldInfo = 
> fcontext.searcher.getSlowAtomicReader().getFieldInfos().fieldInfo(sf.getName());
> {noformat}
> This reconstructs FieldInfos each time.  Simply switching it to:
> {noformat}
> FieldInfo fieldInfo = 
> fcontext.searcher.getFieldInfos().fieldInfo(sf.getName());
> {noformat}
>  
> causes it to use the cached version of FieldInfos in the SolrIndexSearcher.
> On my index the FacetFieldProcessorByHashDV is 2-3 times slower than the 
> legacy facets without this fix.






[jira] [Commented] (LUCENE-8553) New KoreanDecomposeFilter for KoreanAnalyzer(Nori)

2018-11-02 Thread Namgyu Kim (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673285#comment-16673285
 ] 

Namgyu Kim commented on LUCENE-8553:


Thank you for your comments :D [~rcmuir], [~thetaphi].

 

Yes. Both of you are right.

I know that it is possible to do "Hangul-Jamo" separation by using ICU.

However, I am not sure whether the *"Hangul" -> "Choseong"* conversion or the 
*dual-char* conversion (like "ㄲ", "ㅆ", "ㅢ", ...) can be performed with that 
library.

These functions are also important features of this TokenFilter, and I have used 
a HashMap or a separate array to reduce the time complexity.

That's why I didn't use the ICU library.

> New KoreanDecomposeFilter for KoreanAnalyzer(Nori)
> --
>
> Key: LUCENE-8553
> URL: https://issues.apache.org/jira/browse/LUCENE-8553
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Reporter: Namgyu Kim
>Priority: Major
> Attachments: LUCENE-8553.patch
>
>
> This is a patch for KoreanDecomposeFilter.
> This filter can be used to decompose Hangul.
> (ex) 한글 -> ㅎㄱ or ㅎㅏㄴㄱㅡㄹ)
> Hangul input is very unique.
> If you want to type apple in English,
>    you can type it in the order {color:#FF}a -> p -> p -> l -> e{color}.
> However, if you want to input "Hangul" in Hangul,
>    you have to type it in the order of {color:#FF}ㅎ -> ㅏ -> ㄴ -> ㄱ -> ㅡ 
> -> ㄹ{color}.
>    (Because of the keyboard shape)
> This means that spell check with existing full Hangul can be less accurate.
>  
> The structure of Hangul consists of elements such as *"Choseong"*, 
> *"Jungseong"*, and *"Jongseong"*.
> These three elements are called *"Jamo"*.
> If you have the Korean word "된장찌개" (that means Soybean Paste Stew)
> *"Choseong"* means {color:#FF}"ㄷ, ㅈ, ㅉ, ㄱ"{color},
> *"Jungseong"* means {color:#FF}"ㅚ, ㅏ, ㅣ, ㅐ"{color},
> *"Jongseong"* means {color:#FF}"ㄴ, ㅇ"{color}.
> The reason for Jamo separation is explained above. (spell check)
> Also, the reason we need "Choseong Filter" is because many Koreans use 
> *"Choseong Search"* (especially in mobile environment).
> If you want to search for "된장찌개" you need 10 typing, which is quite a lot.
> For that reason, I think it would be useful to provide a filter that can be 
> searched by "ㄷㅈㅉㄱ".
> Hangul also has *dual chars*, such as
> "ㄲ, ㄸ, ㅁ, ㅃ, ㅉ, ㅚ (ㅗ + ㅣ), ㅢ (ㅡ + ㅣ), ...".
> For such reasons,
> KoreanDecompose offers *5 options*,
> ex) *된장찌개* => [된장], [찌개]
> *1) ORIGIN*
> [된장], [찌개]
> *2) SINGLECHOSEONG*
> [ㄷㅈ], [ㅉㄱ] 
> *3) DUALCHOSEONG*
> [ㄷㅈ], [ㅈㅈㄱ] 
> *4) SINGLEJAMO*
> [ㄷㅚㄴㅈㅏㅇ], [ㅉㅣㄱㅐ] 
> *5) DUALJAMO*
> [ㄷㅗㅣㄴㅈㅏㅇ], [ㅈㅈㅣㄱㅐ] 
>  
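
The Choseong extraction described above can be sketched with plain Unicode
arithmetic (a standalone illustration, not the attached patch's filter):
precomposed Hangul syllables are laid out as
0xAC00 + (choseong * 21 + jungseong) * 28 + jongseong, so the leading
consonant falls out of integer division.

```java
// Sketch of Choseong (leading consonant) extraction via Unicode arithmetic.
// Illustration only; the patch's TokenFilter handles far more cases.
final class HangulJamo {
  // The 19 Choseong in Unicode order (compatibility jamo).
  private static final char[] CHOSEONG = {
      'ㄱ','ㄲ','ㄴ','ㄷ','ㄸ','ㄹ','ㅁ','ㅂ','ㅃ','ㅅ',
      'ㅆ','ㅇ','ㅈ','ㅉ','ㅊ','ㅋ','ㅌ','ㅍ','ㅎ'};

  /** Returns the Choseong of a precomposed Hangul syllable (U+AC00..U+D7A3). */
  static char choseong(char syllable) {
    if (syllable < 0xAC00 || syllable > 0xD7A3) {
      throw new IllegalArgumentException("not a Hangul syllable: " + syllable);
    }
    // syllable = 0xAC00 + (cho * 21 + jung) * 28 + jong
    int index = syllable - 0xAC00;
    return CHOSEONG[index / (21 * 28)];
  }

  /** Maps every syllable in the text to its Choseong, e.g. for Choseong search. */
  static String choseongOf(String text) {
    StringBuilder sb = new StringBuilder(text.length());
    for (char c : text.toCharArray()) {
      sb.append(choseong(c));
    }
    return sb.toString();
  }
}
```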






[jira] [Commented] (SOLR-12878) FacetFieldProcessorByHashDV is reconstructing FieldInfos on every instantiation

2018-11-02 Thread Tim Underwood (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673346#comment-16673346
 ] 

Tim Underwood commented on SOLR-12878:
--

I've updated commit #3 of the PR with the changes to those ReaderWrapper 
classes and also added a JavaDoc note to LeafReader.getFieldInfos() stating that 
instances should be cached by implementations of LeafReader.

> FacetFieldProcessorByHashDV is reconstructing FieldInfos on every 
> instantiation
> ---
>
> Key: SOLR-12878
> URL: https://issues.apache.org/jira/browse/SOLR-12878
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 7.5
>Reporter: Tim Underwood
>Priority: Major
>  Labels: performance
> Fix For: 7.6, master (8.0)
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The FacetFieldProcessorByHashDV constructor is currently calling:
> {noformat}
> FieldInfo fieldInfo = 
> fcontext.searcher.getSlowAtomicReader().getFieldInfos().fieldInfo(sf.getName());
> {noformat}
> This reconstructs FieldInfos each time.  Simply switching it to:
> {noformat}
> FieldInfo fieldInfo = 
> fcontext.searcher.getFieldInfos().fieldInfo(sf.getName());
> {noformat}
>  
> causes it to use the cached version of FieldInfos in the SolrIndexSearcher.
> On my index the FacetFieldProcessorByHashDV is 2-3 times slower than the 
> legacy facets without this fix.






[jira] [Comment Edited] (SOLR-12878) FacetFieldProcessorByHashDV is reconstructing FieldInfos on every instantiation

2018-11-02 Thread Tim Underwood (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673290#comment-16673290
 ] 

Tim Underwood edited comment on SOLR-12878 at 11/2/18 4:13 PM:
---

Sure.  I've updated the pull request with what I'm currently playing with:  
[https://github.com/apache/lucene-solr/pull/473]

There are currently 3 commits in there:

1 - The original FacetFieldProcessorByHashDV.java change to avoid calling 
getSlowAtomicReader

2 - The change requested by [~dsmiley] to move the caching of FieldInfos from 
SolrIndexSearcher to SlowCompositeReaderWrapper

3 - Adding a check in TestUtil.checkReader to verify that 
LeafReader.getFieldInfos() returns a cached copy along with the changes 
required to make that pass.  Specifically there are several places that 
construct an empty FieldInfos instance so I just created a static 
FieldInfos.EMPTY instance that can be referenced.  Also, MemoryIndexReader 
needed to be modified to cache a copy of its FieldInfos.  The constructor was 
already looping over the fields so I just added it there (vs creating it 
lazily).

 

What are your thoughts on #3?  Is it a good idea to require LeafReader 
instances to cache their FieldInfos?

It seems like something like this is a common pattern across the codebase (both 
Lucene and Solr):
{code:java}
reader.getFieldInfos().fieldInfo(field)
{code}
So it might be desirable to make sure FieldInfos isn't always being recomputed?
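
The invariant commit 3 checks can be sketched as a tiny helper (hypothetical
code, not the actual TestUtil.checkReader implementation, which works on a
LeafReader): calling the accessor twice must yield the very same object.

```java
import java.util.function.Supplier;

// Sketch of the same-instance check commit 3 adds to TestUtil.checkReader
// (hypothetical helper; illustration only).
final class FieldInfosCheck {
  static void assertCached(Supplier<?> getFieldInfos) {
    Object first = getFieldInfos.get();
    Object second = getFieldInfos.get();
    if (first != second) { // reference equality: the instance must be cached
      throw new AssertionError("getFieldInfos() must return the same cached instance");
    }
  }
}
```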

 

I'm still verifying that all LeafReader.getFieldInfos() implementations perform 
the caching and that all tests pass (I'm seeing a few failures, but they seem 
unrelated). 


was (Author: tpunder):
Sure.  I've updated the pull request with what I'm currently playing with:  
[https://github.com/apache/lucene-solr/pull/473]

There are currently 3 commits in there:

1 - The original FacetFieldProcessorByHashDV.java change to avoid calling 
getSlowAtomicReader

2 - The change requested by [~dsmiley] to move the caching of FieldInfos from 
SolrIndexSearcher to SlowCompositeReaderWrapper

3 - Adding a check in TestUtil.checkReader to verify that 
LeafReader.getFieldInfos() returns a cached copy along with the changes 
required to make that pass.  Specifically there are several places that 
construct an empty FieldInfos instance so I just created a static 
FieldInfos.EMPTY instance that can be referenced.  Also, MemoryIndexReader 
needed to be modified to cache a copy of its FieldInfos.  The constructor was 
already looping over the fields so I just added it there (vs creating it 
lazily).

 

What are your thoughts on #3?  Is it a good idea to require LeafReader 
instances to cache their FieldInfos?

It seems like something like this is a common pattern across the codebase (both 
Lucene and Solr):
{code:java}
reader.getFieldInfos().fieldInfo(field)
{code}
So it might be desirable to make sure FieldInfos isn't always being recomputed?

 

I'm still verifying that I've checked that all LeafReader.getFieldInfos() 
implementations perform the caching and that all tests pass (I'm seeing a few 
failures but they seem unrelated).

> FacetFieldProcessorByHashDV is reconstructing FieldInfos on every 
> instantiation
> ---
>
> Key: SOLR-12878
> URL: https://issues.apache.org/jira/browse/SOLR-12878
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 7.5
>Reporter: Tim Underwood
>Priority: Major
>  Labels: performance
> Fix For: 7.6, master (8.0)
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The FacetFieldProcessorByHashDV constructor is currently calling:
> {noformat}
> FieldInfo fieldInfo = 
> fcontext.searcher.getSlowAtomicReader().getFieldInfos().fieldInfo(sf.getName());
> {noformat}
> This reconstructs FieldInfos each time.  Simply switching it to:
> {noformat}
> FieldInfo fieldInfo = 
> fcontext.searcher.getFieldInfos().fieldInfo(sf.getName());
> {noformat}
>  
> causes it to use the cached version of FieldInfos in the SolrIndexSearcher.
> On my index the FacetFieldProcessorByHashDV is 2-3 times slower than the 
> legacy facets without this fix.






Re: Welcome Gus Heck as Lucene/Solr committer

2018-11-02 Thread Varun Thacker
Congratulations and welcome Gus!

On Thu, Nov 1, 2018 at 5:22 AM David Smiley 
wrote:

> Hi all,
>
> Please join me in welcoming Gus Heck as the latest Lucene/Solr committer!
>
> Congratulations and Welcome, Gus!
>
> Gus, it's traditional for you to introduce yourself with a brief bio.
>
> ~ David
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>


[jira] [Commented] (SOLR-12878) FacetFieldProcessorByHashDV is reconstructing FieldInfos on every instantiation

2018-11-02 Thread Tim Underwood (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673499#comment-16673499
 ] 

Tim Underwood commented on SOLR-12878:
--

Sure.  What's the next step?  Would you like me to squash commits 1 and 2 or 
just leave them as-is?  Should I break out the ReaderWrapper changes from 
commit 3, or leave commit 3 out entirely for now?

As long as the SlowCompositeReaderWrapper caching of FieldInfos makes it in, I'm 
happy :).  It would be great for that change to make it in time for 7.6, since 
that would make it possible for me to move all of my faceting over to JSON 
Facets.

> FacetFieldProcessorByHashDV is reconstructing FieldInfos on every 
> instantiation
> ---
>
> Key: SOLR-12878
> URL: https://issues.apache.org/jira/browse/SOLR-12878
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 7.5
>Reporter: Tim Underwood
>Assignee: David Smiley
>Priority: Major
>  Labels: performance
> Fix For: 7.6, master (8.0)
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The FacetFieldProcessorByHashDV constructor is currently calling:
> {noformat}
> FieldInfo fieldInfo = 
> fcontext.searcher.getSlowAtomicReader().getFieldInfos().fieldInfo(sf.getName());
> {noformat}
> This reconstructs FieldInfos each time.  Simply switching it to:
> {noformat}
> FieldInfo fieldInfo = 
> fcontext.searcher.getFieldInfos().fieldInfo(sf.getName());
> {noformat}
>  
> causes it to use the cached version of FieldInfos in the SolrIndexSearcher.
> On my index the FacetFieldProcessorByHashDV is 2-3 times slower than the 
> legacy facets without this fix.






Re: Welcome Tim Allison as a Lucene/Solr committer

2018-11-02 Thread Anshum Gupta
Congratulations and welcome, Tim! :) 

  Anshum


> On Nov 2, 2018, at 9:20 AM, Erick Erickson  wrote:
> 
> Hi all,
> 
> Please join me in welcoming Tim Allison as the latest Lucene/Solr committer!
> 
> Congratulations and Welcome, Tim!
> 
> It's traditional for you to introduce yourself with a brief bio.
> 
> Erick
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 



Re: lucene-solr:jira/gradle: Parallel running tests

2018-11-02 Thread David Smiley
There's no real standard; just people doing what they like and observing
what others do.

Note that commits to branches following the pattern (lucene|solr).*  (i.e.
those that start with "lucene" or "solr") will *not* get an automated
comment on corresponding JIRA issues.  All others continue to.  ASF infra
got this done for us: https://issues.apache.org/jira/browse/INFRA-11198

I recommend you start a branch with "solr" or "SOLR" if you are going to
work on a Solr issue.  This way if you merge in changes from master, you
won't spam the related issues with comments.

~ David


On Fri, Nov 2, 2018 at 7:46 AM Gus Heck  wrote:

> I'm curious about the branch naming here. I notice this is jira/ and there
> are several other such heads in the repository. What's the convention or
> significance here for this jira/ prefix?
>
> On Fri, Nov 2, 2018 at 6:12 AM  wrote:
>
>> Repository: lucene-solr
>> Updated Branches:
>>   refs/heads/jira/gradle c9cb4fe96 -> 4a12fffb7
>>
>>
>> Parallel running tests
>>
>>
>> Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
>> Commit:
>> http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/4a12fffb
>> Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/4a12fffb
>> Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/4a12fffb
>>
>> Branch: refs/heads/jira/gradle
>> Commit: 4a12fffb751078c2dfdf427617dd5ed9c52c7378
>> Parents: c9cb4fe
>> Author: Cao Manh Dat 
>> Authored: Fri Nov 2 10:11:47 2018 +
>> Committer: Cao Manh Dat 
>> Committed: Fri Nov 2 10:11:47 2018 +
>>
>> --
>>  build.gradle | 6 +-
>>  1 file changed, 5 insertions(+), 1 deletion(-)
>> --
>>
>>
>>
>> http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/4a12fffb/build.gradle
>> --
>> diff --git a/build.gradle b/build.gradle
>> index df21ce8..27a351d 100644
>> --- a/build.gradle
>> +++ b/build.gradle
>> @@ -30,6 +30,10 @@ subprojects {
>> systemProperty 'java.security.egd',
>> 'file:/dev/./urandom'
>> }
>> }
>> +   tasks.withType(Test) {
>> +   maxParallelForks = Runtime.runtime.availableProcessors()
>> / 2
>> +   }
>> +
>>  }
>>
>>  // These versions are defined here because they represent
>> @@ -308,4 +312,4 @@ ext.library = [
>> xz: "org.tukaani:xz:1.8",
>> morfologik_ukrainian_search:
>> "ua.net.nlp:morfologik-ukrainian-search:3.9.0",
>> xercesImpl: "xerces:xercesImpl:2.9.1"
>> -]
>> \ No newline at end of file
>> +]
>>
>>
>
> --
> http://www.the111shift.com
>
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Commented] (SOLR-12878) FacetFieldProcessorByHashDV is reconstructing FieldInfos on every instantiation

2018-11-02 Thread Tim Underwood (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673290#comment-16673290
 ] 

Tim Underwood commented on SOLR-12878:
--

Sure.  I've updated the pull request with what I'm currently playing with:  
[https://github.com/apache/lucene-solr/pull/473]

There are currently 3 commits in there:

1 - The original FacetFieldProcessorByHashDV.java change to avoid calling 
getSlowAtomicReader

2 - The change requested by [~dsmiley] to move the caching of FieldInfos from 
SolrIndexSearcher to SlowCompositeReaderWrapper

3 - Adding a check in TestUtil.checkReader to verify that 
LeafReader.getFieldInfos() returns a cached copy along with the changes 
required to make that pass.  Specifically there are several places that 
construct an empty FieldInfos instance so I just created a static 
FieldInfos.EMPTY instance that can be referenced.  Also, MemoryIndexReader 
needed to be modified to cache a copy of its FieldInfos.  The constructor was 
already looping over the fields so I just added it there (vs creating it 
lazily).

 

What are your thoughts on #3?  Is it a good idea to require LeafReader 
instances to cache their FieldInfos?

It seems like something like this is a common pattern across the codebase (both 
Lucene and Solr):
{code:java}
reader.getFieldInfos().fieldInfo(field)
{code}
So it might be desirable to make sure FieldInfos isn't always being recomputed?

 

I'm still verifying that I've checked that all LeafReader.getFieldInfos() 
implementations perform the caching and that all tests pass (I'm seeing a few 
failures but they seem unrelated).

> FacetFieldProcessorByHashDV is reconstructing FieldInfos on every 
> instantiation
> ---
>
> Key: SOLR-12878
> URL: https://issues.apache.org/jira/browse/SOLR-12878
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 7.5
>Reporter: Tim Underwood
>Priority: Major
>  Labels: performance
> Fix For: 7.6, master (8.0)
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The FacetFieldProcessorByHashDV constructor is currently calling:
> {noformat}
> FieldInfo fieldInfo = 
> fcontext.searcher.getSlowAtomicReader().getFieldInfos().fieldInfo(sf.getName());
> {noformat}
> This reconstructs FieldInfos each time.  Simply switching it to:
> {noformat}
> FieldInfo fieldInfo = 
> fcontext.searcher.getFieldInfos().fieldInfo(sf.getName());
> {noformat}
>  
> causes it to use the cached version of FieldInfos in the SolrIndexSearcher.
> On my index the FacetFieldProcessorByHashDV is 2-3 times slower than the 
> legacy facets without this fix.






[jira] [Commented] (SOLR-12878) FacetFieldProcessorByHashDV is reconstructing FieldInfos on every instantiation

2018-11-02 Thread Tim Underwood (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673334#comment-16673334
 ] 

Tim Underwood commented on SOLR-12878:
--

There are 2 other LeafReader implementations not caching their FieldInfos:
 * org.apache.solr.search.CollapsingQParserPlugin.ReaderWrapper
 * org.apache.solr.handler.component.ExpandComponent.ReaderWrapper

Both have code that is basically this:
{code:java}
@SuppressWarnings("resource") LeafReader uninvertingReader = 
UninvertingReader.wrap(
new ReaderWrapper(searcher.getSlowAtomicReader(), field),
Collections.singletonMap(field, UninvertingReader.Type.SORTED)::get);
{code}
Note the searcher.getSlowAtomicReader() call which means moving the caching of 
FieldInfos into SlowCompositeReaderWrapper will help in this case!

Here is part of the code from UninvertingReader.wrap:
{code:java}
public static LeafReader wrap(LeafReader in, Function<String, Type> mapping) {
  boolean wrap = false;

  // Calculate a new FieldInfos that has DocValuesType where we didn't before
  ArrayList<FieldInfo> newFieldInfos = new 
ArrayList<>(in.getFieldInfos().size());
  for (FieldInfo fi : in.getFieldInfos()) {
DocValuesType type = fi.getDocValuesType();
{code}
Note the 2 immediate calls to in.getFieldInfos().  So... I'll add the FieldInfo 
caching to both of those ReaderWrapper classes since it would help.
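
The ReaderWrapper change amounts to asking the delegate for FieldInfos once, in
the constructor, and serving that field afterwards. A minimal sketch with
stand-in types (hypothetical names, not the actual Solr/Lucene classes):

```java
// Stand-ins for the Lucene types, for illustration only.
interface FieldInfosLike { }

interface LeafReaderLike {
  FieldInfosLike getFieldInfos();
}

// Wrapper that calls its (possibly slow) delegate's getFieldInfos() exactly
// once, then returns the cached instance on every later call.
final class CachingReaderWrapper implements LeafReaderLike {
  private final FieldInfosLike fieldInfos;

  CachingReaderWrapper(LeafReaderLike in) {
    this.fieldInfos = in.getFieldInfos(); // single delegate call, cached eagerly
  }

  @Override
  public FieldInfosLike getFieldInfos() {
    return fieldInfos;
  }
}
```

With this in place, the two back-to-back in.getFieldInfos() calls quoted above
hit the cached field instead of rebuilding anything.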

 

> FacetFieldProcessorByHashDV is reconstructing FieldInfos on every 
> instantiation
> ---
>
> Key: SOLR-12878
> URL: https://issues.apache.org/jira/browse/SOLR-12878
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 7.5
>Reporter: Tim Underwood
>Priority: Major
>  Labels: performance
> Fix For: 7.6, master (8.0)
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The FacetFieldProcessorByHashDV constructor is currently calling:
> {noformat}
> FieldInfo fieldInfo = 
> fcontext.searcher.getSlowAtomicReader().getFieldInfos().fieldInfo(sf.getName());
> {noformat}
> This reconstructs FieldInfos each time.  Simply switching it to:
> {noformat}
> FieldInfo fieldInfo = 
> fcontext.searcher.getFieldInfos().fieldInfo(sf.getName());
> {noformat}
>  
> causes it to use the cached version of FieldInfos in the SolrIndexSearcher.
> On my index the FacetFieldProcessorByHashDV is 2-3 times slower than the 
> legacy facets without this fix.






[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-9.0.4) - Build # 867 - Unstable!

2018-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/867/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

11 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaTest.test

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([7427D4E810C9C46D:FC73EB32BE35A995]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at org.junit.Assert.assertFalse(Assert.java:79)
at org.apache.solr.cloud.MoveReplicaTest.test(MoveReplicaTest.java:146)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.cloud.MoveReplicaTest.test

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([7427D4E810C9C46D:FC73EB32BE35A995]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)

Re: Welcome Tim Allison as a Lucene/Solr committer

2018-11-02 Thread Kevin Risden
Congrats and welcome!

Kevin Risden


On Fri, Nov 2, 2018 at 12:40 PM Tim Allison  wrote:

> Thank you, Erick!  And, thank you, team!
>
> A bit about me...
>
> I've been working in natural language processing since 2002. Over the
> last 5+ years, I've focused on advanced search and content/metadata
> extraction. Many years ago, David Smiley inspired me "to quit forking
> and start patching" and, generally, to grow up and participate in open
> source communities.  I'm now the chair/V.P. of Apache Tika, a
> committer and PMC member on Apache POI and Apache PDFBox, and I'm a
> member of the ASF.  I am passionate about relevance
> engineering/evaluation, and more generally, about testing and
> evaluation to protect our codebases from my code[1][2].  In a former
> life, I was a professor of Latin and Ancient Greek[3].
>
> I am so very grateful to receive this honor.
>
> Thank you!
>
> Cheers,
>
> Tim
>
> [1]
> http://openpreservation.org/blog/2016/10/04/apache-tikas-regression-corpus-tika-1302
> [2]
> https://www.youtube.com/playlist?list=PLbzoR-pLrL6pLDCyPxByWQwYTL-JrF5Rp
> [3]
> https://books.google.com/books/about/Aeschylean_stylistics.html?id=0wweAQAAMAAJ
> On Fri, Nov 2, 2018 at 12:20 PM Erick Erickson 
> wrote:
> >
> > Hi all,
> >
> > Please join me in welcoming Tim Allison as the latest Lucene/Solr
> committer!
> >
> > Congratulations and Welcome, Tim!
> >
> > It's traditional for you to introduce yourself with a brief bio.
> >
> > Erick
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> >
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Commented] (SOLR-12932) ant test (without badapples=false) should pass easily for developers.

2018-11-02 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673409#comment-16673409
 ] 

Kevin Risden commented on SOLR-12932:
-

Last few builds have been clean. This most recent one had a new failure.

Commit: 91b202bad89a94d40021251e026c582f695aad69

Test: MathExpressionTest#testDistributions

 
{code:java}
reproduce with: ant test  -Dtestcase=MathExpressionTest 
-Dtests.method=testDistributions -Dtests.seed=F7286DC596D3A9BA 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=es-VE 
-Dtests.timezone=Pacific/Pago_Pago -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] FAILURE 0.09s J1 | MathExpressionTest.testDistributions <<<
   [junit4]> Throwable #1: java.lang.AssertionError
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([F7286DC596D3A9BA:48D72C6F48294926]:0)
   [junit4]>at 
org.apache.solr.client.solrj.io.stream.MathExpressionTest.testDistributions(MathExpressionTest.java:1704)
   [junit4]>at java.lang.Thread.run(Thread.java:748){code}
 

> ant test (without badapples=false) should pass easily for developers.
> -
>
> Key: SOLR-12932
> URL: https://issues.apache.org/jira/browse/SOLR-12932
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>
> If we fix the tests we will end up here anyway, but we can shortcut this.
> Once I get my first patch in, anyone who mentions a test that fails locally 
> for them at any time (not jenkins), I will fix it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Comment Edited] (SOLR-12932) ant test (without badapples=false) should pass easily for developers.

2018-11-02 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673409#comment-16673409
 ] 

Kevin Risden edited comment on SOLR-12932 at 11/2/18 4:56 PM:
--

Last few builds have been clean. This most recent one had a new failure.

Commit: 91b202bad89a94d40021251e026c582f695aad69

Test: MathExpressionTest#testDistributions
{code:java}
reproduce with: ant test  -Dtestcase=MathExpressionTest 
-Dtests.method=testDistributions -Dtests.seed=F7286DC596D3A9BA 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=es-VE 
-Dtests.timezone=Pacific/Pago_Pago -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] FAILURE 0.09s J1 | MathExpressionTest.testDistributions <<<
   [junit4]> Throwable #1: java.lang.AssertionError
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([F7286DC596D3A9BA:48D72C6F48294926]:0)
   [junit4]>at 
org.apache.solr.client.solrj.io.stream.MathExpressionTest.testDistributions(MathExpressionTest.java:1704)
   [junit4]>at java.lang.Thread.run(Thread.java:748){code}
 Reference: risdenk/nuc#108


was (Author: risdenk):
Last few builds have been clean. This most recent one had a new failure.

Commit: 91b202bad89a94d40021251e026c582f695aad69

Test: MathExpressionTest#testDistributions

 
{code:java}
reproduce with: ant test  -Dtestcase=MathExpressionTest 
-Dtests.method=testDistributions -Dtests.seed=F7286DC596D3A9BA 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=es-VE 
-Dtests.timezone=Pacific/Pago_Pago -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] FAILURE 0.09s J1 | MathExpressionTest.testDistributions <<<
   [junit4]> Throwable #1: java.lang.AssertionError
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([F7286DC596D3A9BA:48D72C6F48294926]:0)
   [junit4]>at 
org.apache.solr.client.solrj.io.stream.MathExpressionTest.testDistributions(MathExpressionTest.java:1704)
   [junit4]>at java.lang.Thread.run(Thread.java:748){code}
 

> ant test (without badapples=false) should pass easily for developers.
> -
>
> Key: SOLR-12932
> URL: https://issues.apache.org/jira/browse/SOLR-12932
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>
> If we fix the tests we will end up here anyway, but we can shortcut this.
> Once I get my first patch in, anyone who mentions a test that fails locally 
> for them at any time (not jenkins), I will fix it.






[jira] [Commented] (LUCENE-8554) Add new LatLonShapeLineQuery

2018-11-02 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673415#comment-16673415
 ] 

ASF subversion and git services commented on LUCENE-8554:
-

Commit a00cc3be72bbb39430f6b895a4d29a26bce4f6b4 in lucene-solr's branch 
refs/heads/branch_7x from [~nknize]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a00cc3b ]

LUCENE-8554: Add new LatLonShapeLineQuery that queries indexed LatLonShape 
fields by arbitrary lines


> Add new LatLonShapeLineQuery
> 
>
> Key: LUCENE-8554
> URL: https://issues.apache.org/jira/browse/LUCENE-8554
> Project: Lucene - Core
>  Issue Type: New Feature
>Affects Versions: 7.6, master (8.0)
>Reporter: Nicholas Knize
>Assignee: Nicholas Knize
>Priority: Blocker
> Attachments: LUCENE-8554.patch, LUCENE-8554.patch
>
>
> It's often useful to be able to query a shape index for documents that either 
> {{INTERSECT}} or are {{DISJOINT}} from a given {{LINESTRING}}. Occasionally 
> the linestring of interest may also have a distance component, which creates 
> a *buffered query* (often used in routing, or shape snapping). This feature 
> first adds a new {{LatLonShapeLineQuery}} for querying  {{LatLonShape}} 
> fields by arbitrary lines. A distance component can then be added in a future 
> issue.






[jira] [Commented] (SOLR-12882) Eliminate excessive lambda allocation in FacetFieldProcessorByHashDV.collectValFirstPhase

2018-11-02 Thread Tim Underwood (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673298#comment-16673298
 ] 

Tim Underwood commented on SOLR-12882:
--

Thanks for merging!

> Eliminate excessive lambda allocation in 
> FacetFieldProcessorByHashDV.collectValFirstPhase
> -
>
> Key: SOLR-12882
> URL: https://issues.apache.org/jira/browse/SOLR-12882
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 7.5
>Reporter: Tim Underwood
>Assignee: David Smiley
>Priority: Minor
> Fix For: 7.6
>
> Attachments: 
> start-2018-10-31_snapshot___Users_tim_Snapshots__-_YourKit_Java_Profiler_2017_02-b75_-_64-bit.png,
>  start_-_YourKit_Java_Profiler_2017_02-b75_-_64-bit.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The FacetFieldProcessorByHashDV.collectValFirstPhase method looks like this:
> {noformat}
> private void collectValFirstPhase(int segDoc, long val) throws IOException {
>   int slot = table.add(val); // this can trigger a rehash
>   // Our countAcc is virtual, so this is not needed:
>   // countAcc.incrementCount(slot, 1);
>   super.collectFirstPhase(segDoc, slot, slotNum -> {
>     Comparable value = calc.bitsToValue(val);
>     return new SlotContext(sf.getType().getFieldQuery(null, sf, calc.formatValue(value)));
>   });
> }
> {noformat}
>  
> For each value that is being iterated over there is a lambda allocation that 
> is passed as the slotContext argument to the super.collectFirstPhase method. 
> The lambda can be factored out such that there is only a single instance per 
> FacetFieldProcessorByHashDV instance. 
> The only tradeoff is that the value needs to be looked up from the table 
> in the lambda. However, looking the value up in the table is less expensive 
> than a memory allocation, and the slotContext lambda is only used in 
> RelatednessAgg, not in any of the field faceting or metrics.
>  
> {noformat}
> private void collectValFirstPhase(int segDoc, long val) throws IOException {
>   int slot = table.add(val); // this can trigger a rehash
>   // Our countAcc is virtual, so this is not needed:
>   // countAcc.incrementCount(slot, 1);
>   super.collectFirstPhase(segDoc, slot, slotContext);
> }
> /**
>  * SlotContext to use during all {@link SlotAcc} collection.
>  *
>  * This avoids a memory allocation for each invocation of 
> collectValFirstPhase.
>  */
> private IntFunction<SlotContext> slotContext = (slotNum) -> {
>   long val = table.vals[slotNum];
>   Comparable value = calc.bitsToValue(val);
>   return new SlotContext(sf.getType().getFieldQuery(null, sf, 
> calc.formatValue(value)));
> };
> {noformat}
>  
> FacetFieldProcessorByArray already follows this same pattern
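The factoring described above, creating the slotContext lambda once per processor instead of once per collected value, can be sketched in isolation. This is an illustrative model only; class and field names such as SlotCollector are hypothetical, not Solr's actual API:

```java
import java.util.function.IntFunction;

// Standalone sketch of the allocation-hoisting pattern from the patch.
class SlotCollector {
    private final long[] vals = new long[8];   // stand-in for table.vals

    // Created once per collector instance; the hot path reuses this single
    // lambda instead of allocating a fresh capturing lambda per value.
    private final IntFunction<String> slotContext =
        slotNum -> "slot-value:" + vals[slotNum];

    // Hot path: stores the value, then hands the cached lambda downstream.
    // The value is re-read from the table inside the lambda, which is
    // cheaper than one object allocation per call.
    String collect(int slot, long val) {
        vals[slot] = val;
        return slotContext.apply(slot);
    }
}
```

The tradeoff is exactly the one noted in the issue: the lambda re-reads the stored value from the table rather than capturing it.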






[jira] [Commented] (SOLR-12875) ArrayIndexOutOfBoundsException when using uniqueBlock(_root_) in JSON Facets

2018-11-02 Thread Tim Underwood (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673301#comment-16673301
 ] 

Tim Underwood commented on SOLR-12875:
--

[~mkhludnev] Thanks for merging this!

> ArrayIndexOutOfBoundsException when using uniqueBlock(_root_) in JSON Facets
> 
>
> Key: SOLR-12875
> URL: https://issues.apache.org/jira/browse/SOLR-12875
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 7.5
>Reporter: Tim Underwood
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-12875.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I'm seeing java.lang.ArrayIndexOutOfBoundsException exceptions for some 
> requests when trying to make use of
> {noformat}
> uniqueBlock(_root_){noformat}
> within JSON Facets.
> Here are some example Stack Traces:
> {noformat}
> 2018-10-12 14:08:50.587 ERROR (qtp215078753-3353) [   x:my_core] 
> o.a.s.s.HttpSolrCall null:java.lang.ArrayIndexOutOfBoundsException: Index 13 
> out of bounds for length 8
> at 
> org.apache.solr.search.facet.UniqueBlockAgg$UniqueBlockSlotAcc.collectOrdToSlot(UniqueBlockAgg.java:40)
> at 
> org.apache.solr.search.facet.UniqueSinglevaluedSlotAcc.collect(UniqueSinglevaluedSlotAcc.java:85)
> at 
> org.apache.solr.search.facet.FacetFieldProcessor.collectFirstPhase(FacetFieldProcessor.java:243)
> at 
> org.apache.solr.search.facet.FacetFieldProcessorByHashDV.collectValFirstPhase(FacetFieldProcessorByHashDV.java:432)
> at 
> org.apache.solr.search.facet.FacetFieldProcessorByHashDV.access$100(FacetFieldProcessorByHashDV.java:50)
> at 
> org.apache.solr.search.facet.FacetFieldProcessorByHashDV$5.collect(FacetFieldProcessorByHashDV.java:395)
> at 
> org.apache.solr.search.DocSetUtil.collectSortedDocSet(DocSetUtil.java:284)
> at 
> org.apache.solr.search.facet.FacetFieldProcessorByHashDV.collectDocs(FacetFieldProcessorByHashDV.java:376)
> at 
> org.apache.solr.search.facet.FacetFieldProcessorByHashDV.calcFacets(FacetFieldProcessorByHashDV.java:247)
> at 
> org.apache.solr.search.facet.FacetFieldProcessorByHashDV.process(FacetFieldProcessorByHashDV.java:214)
> at 
> org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:368)
> at 
> org.apache.solr.search.facet.FacetProcessor.processSubs(FacetProcessor.java:472)
> at 
> org.apache.solr.search.facet.FacetProcessor.fillBucket(FacetProcessor.java:429)
> at 
> org.apache.solr.search.facet.FacetQueryProcessor.process(FacetQuery.java:64)
> at 
> org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:368)
> at 
> org.apache.solr.search.facet.FacetModule.process(FacetModule.java:139)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
> {noformat}
>  
> Here is another one at a different location in UniqueBlockAgg:
>   
> {noformat}
> 2018-10-12 21:37:57.322 ERROR (qtp215078753-4072) [   x:my_core] 
> o.a.s.h.RequestHandlerBase java.lang.ArrayIndexOutOfBoundsException: Index 23 
> out of bounds for length 16
> at 
> org.apache.solr.search.facet.UniqueBlockAgg$UniqueBlockSlotAcc.getValue(UniqueBlockAgg.java:59)
> at org.apache.solr.search.facet.SlotAcc.setValues(SlotAcc.java:146)
> at 
> org.apache.solr.search.facet.FacetFieldProcessor.fillBucket(FacetFieldProcessor.java:431)
> at 
> org.apache.solr.search.facet.FacetFieldProcessor.findTopSlots(FacetFieldProcessor.java:381)
> at 
> org.apache.solr.search.facet.FacetFieldProcessorByHashDV.calcFacets(FacetFieldProcessorByHashDV.java:249)
> at 
> org.apache.solr.search.facet.FacetFieldProcessorByHashDV.process(FacetFieldProcessorByHashDV.java:214)
> at 
> org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:368)
> at 
> org.apache.solr.search.facet.FacetProcessor.processSubs(FacetProcessor.java:472)
> at 
> org.apache.solr.search.facet.FacetProcessor.fillBucket(FacetProcessor.java:429)
> at 
> org.apache.solr.search.facet.FacetQueryProcessor.process(FacetQuery.java:64)
> at 
> org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:368)
> at 
> org.apache.solr.search.facet.FacetModule.process(FacetModule.java:139)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
> at 
> 

Welcome Tim Allison as a Lucene/Solr committer

2018-11-02 Thread Erick Erickson
Hi all,

Please join me in welcoming Tim Allison as the latest Lucene/Solr committer!

Congratulations and Welcome, Tim!

It's traditional for you to introduce yourself with a brief bio.

Erick




Re: Welcome Tim Allison as a Lucene/Solr committer

2018-11-02 Thread David Smiley
Welcome Tim!

On Fri, Nov 2, 2018 at 12:20 PM Erick Erickson 
wrote:

> Hi all,
>
> Please join me in welcoming Tim Allison as the latest Lucene/Solr
> committer!
>
> Congratulations and Welcome, Tim!
>
> It's traditional for you to introduce yourself with a brief bio.
>
> Erick
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
> --
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Commented] (LUCENE-8554) Add new LatLonShapeLineQuery

2018-11-02 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673404#comment-16673404
 ] 

ASF subversion and git services commented on LUCENE-8554:
-

Commit 0cbefe8b25044a0f565c8491bda86626f2eddf5e in lucene-solr's branch 
refs/heads/master from [~nknize]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0cbefe8 ]

LUCENE-8554: Add new LatLonShapeLineQuery that queries indexed LatLonShape 
fields by arbitrary lines


> Add new LatLonShapeLineQuery
> 
>
> Key: LUCENE-8554
> URL: https://issues.apache.org/jira/browse/LUCENE-8554
> Project: Lucene - Core
>  Issue Type: New Feature
>Affects Versions: 7.6, master (8.0)
>Reporter: Nicholas Knize
>Assignee: Nicholas Knize
>Priority: Blocker
> Attachments: LUCENE-8554.patch, LUCENE-8554.patch
>
>
> It's often useful to be able to query a shape index for documents that either 
> {{INTERSECT}} or are {{DISJOINT}} from a given {{LINESTRING}}. Occasionally 
> the linestring of interest may also have a distance component, which creates 
> a *buffered query* (often used in routing, or shape snapping). This feature 
> first adds a new {{LatLonShapeLineQuery}} for querying  {{LatLonShape}} 
> fields by arbitrary lines. A distance component can then be added in a future 
> issue.






Re: Welcome Tim Allison as a Lucene/Solr committer

2018-11-02 Thread Joel Bernstein
Welcome Tim!

On Fri, Nov 2, 2018 at 12:47 PM Kevin Risden  wrote:

> Congrats and welcome!
>
> Kevin Risden
>
>
> On Fri, Nov 2, 2018 at 12:40 PM Tim Allison  wrote:
>
>> Thank you, Erick!  And, thank you, team!
>>
>> A bit about me...
>>
>> I've been working in natural language processing since 2002. Over the
>> last 5+ years, I've focused on advanced search and content/metadata
>> extraction. Many years ago, David Smiley inspired me "to quit forking
>> and start patching" and, generally, to grow up and participate in open
>> source communities.  I'm now the chair/V.P. of Apache Tika, a
>> committer and PMC member on Apache POI and Apache PDFBox, and I'm a
>> member of the ASF.  I am passionate about relevance
>> engineering/evaluation, and more generally, about testing and
>> evaluation to protect our codebases from my code[1][2].  In a former
>> life, I was a professor of Latin and Ancient Greek[3].
>>
>> I am so very grateful to receive this honor.
>>
>> Thank you!
>>
>> Cheers,
>>
>> Tim
>>
>> [1]
>> http://openpreservation.org/blog/2016/10/04/apache-tikas-regression-corpus-tika-1302
>> [2]
>> https://www.youtube.com/playlist?list=PLbzoR-pLrL6pLDCyPxByWQwYTL-JrF5Rp
>> [3]
>> https://books.google.com/books/about/Aeschylean_stylistics.html?id=0wweAQAAMAAJ
>> On Fri, Nov 2, 2018 at 12:20 PM Erick Erickson 
>> wrote:
>> >
>> > Hi all,
>> >
>> > Please join me in welcoming Tim Allison as the latest Lucene/Solr
>> committer!
>> >
>> > Congratulations and Welcome, Tim!
>> >
>> > It's traditional for you to introduce yourself with a brief bio.
>> >
>> > Erick
>> >
>> > -
>> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> > For additional commands, e-mail: dev-h...@lucene.apache.org
>> >
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>


[jira] [Commented] (LUCENE-8374) Reduce reads for sparse DocValues

2018-11-02 Thread Tim Underwood (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673418#comment-16673418
 ] 

Tim Underwood commented on LUCENE-8374:
---

Yes, for that index I think almost everything[1] is indexed as Int IDs and then 
the entities they represent are looked up and converted to strings before being 
displayed on the front end.  I don't think I ever considered or tried String 
fields for those since the Int IDs are a natural fit.  We also have a few cases 
where we attempt to apply translations to things like the Part Type dropdown so 
we need to know the ID anyways (e.g. 
https://www.opticatonline.com/search?bv=18220=usa=es-MX).

My other index[2] (with ~8 million docs) makes more use of String fields but 
that is mostly due to not using parent/child docs and needing to make use of 
facet prefix filtering to match values for a specific vehicle.  For example a 
value might look like "5411/1004" where "5411" represents the id of the vehicle 
I'm filtered to and "1004" represents the type of part.  If I ever convert that 
index to parent/child docs then I could convert a lot of those fields to ints.

 

[1] - The Brands are actually indexed as their 4 character ID string (e.g. 
"BBHK" for the brand "Bosch")

[2] - I don't think I have any good non-login protected examples of this index. 
This one has a very limited view on the data (if you have a non North American 
country selected): 
https://propartsnet.opticatonline.com/search?ltt=pc=5411=DK=DK=da=100019
It works very similarly to the www.opticatonline.com site except it uses 
different underlying data for the non US/MX/CA countries.

> Reduce reads for sparse DocValues
> -
>
> Key: LUCENE-8374
> URL: https://issues.apache.org/jira/browse/LUCENE-8374
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Affects Versions: 7.5, master (8.0)
>Reporter: Toke Eskildsen
>Priority: Major
>  Labels: performance
> Fix For: 7.6
>
> Attachments: LUCENE-8374.patch, LUCENE-8374.patch, LUCENE-8374.patch, 
> LUCENE-8374.patch, LUCENE-8374.patch, LUCENE-8374_branch_7_3.patch, 
> LUCENE-8374_branch_7_3.patch.20181005, LUCENE-8374_branch_7_4.patch, 
> LUCENE-8374_branch_7_5.patch, entire_index_logs.txt, 
> image-2018-10-24-07-30-06-663.png, image-2018-10-24-07-30-56-962.png, 
> single_vehicle_logs.txt, 
> start-2018-10-24-1_snapshot___Users_tim_Snapshots__-_YourKit_Java_Profiler_2017_02-b75_-_64-bit.png,
>  
> start-2018-10-24_snapshot___Users_tim_Snapshots__-_YourKit_Java_Profiler_2017_02-b75_-_64-bit.png
>
>
> The {{Lucene70DocValuesProducer}} has the internal classes 
> {{SparseNumericDocValues}} and {{BaseSortedSetDocValues}} (sparse code path), 
> which again uses {{IndexedDISI}} to handle the docID -> value-ordinal lookup. 
> The value-ordinal is the index of the docID assuming an abstract tightly 
> packed monotonically increasing list of docIDs: If the docIDs with 
> corresponding values are {{[0, 4, 1432]}}, their value-ordinals will be {{[0, 
> 1, 2]}}.
> h2. Outer blocks
> The lookup structure of {{IndexedDISI}} consists of blocks of 2^16 values 
> (65536), where each block can be either {{ALL}}, {{DENSE}} (2^12 to 2^16 
> values) or {{SPARSE}} (< 2^12 values ~= 6%). Consequently blocks vary quite a 
> lot in size and ordinal resolving strategy.
> When a sparse Numeric DocValue is needed, the code first locates the block 
> containing the wanted docID flag. It does so by iterating blocks one-by-one 
> until it reaches the needed one, where each iteration requires a lookup in 
> the underlying {{IndexSlice}}. For a common memory mapped index, this 
> translates to either a cached request or a read operation. If a segment has 
> 6M documents, worst-case is 91 lookups. In our web archive, our segments has 
> ~300M values: A worst-case of 4577 lookups!
> One obvious solution is to use a lookup-table for blocks: A long[]-array with 
> an entry for each block. For 6M documents, that is < 1KB and would allow for 
> direct jumping (a single lookup) in all instances. Unfortunately this 
> lookup-table cannot be generated upfront when the writing of values is purely 
> streaming. It can be appended to the end of the stream before it is closed, 
> but without knowing the position of the lookup-table the reader cannot seek 
> to it.
> One strategy for creating such a lookup-table would be to generate it during 
> reads and cache it for next lookup. This does not fit directly into how 
> {{IndexedDISI}} currently works (it is created anew for each invocation), but 
> could probably be added with a little work. An advantage to this is that this 
> does not change the underlying format and thus could be used with existing 
> indexes.
> h2. The lookup structure 
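The worst-case numbers in the description (91 sequential block lookups for a 6M-document segment, since each block covers 2^16 docIDs) can be checked with a toy model. This is illustrative only, not IndexedDISI's actual code:

```java
// Toy model of the block-skipping problem described for IndexedDISI
// (not Lucene's real implementation). Each block covers 2^16 docIDs;
// without an index the reader advances block by block, while a long[]
// offset table allows a direct jump.
class BlockLookup {
    static final int BLOCK_SHIFT = 16;   // 65536 docIDs per block

    // Without a lookup table: one slice read per block until the target,
    // so a docID near 6M costs ~91 sequential lookups.
    static int lookupsWithoutTable(int docID) {
        int target = docID >>> BLOCK_SHIFT;
        int block = 0;
        int reads = 0;
        while (block < target) { block++; reads++; }
        return reads;
    }

    // With a lookup table (one long per block, < 1KB for 6M docs):
    // a single array access yields the target block's file offset.
    static long blockOffset(long[] blockOffsets, int docID) {
        return blockOffsets[docID >>> BLOCK_SHIFT];
    }
}
```

For a 300M-value segment the same arithmetic gives the ~4577-lookup worst case quoted above, which is what the cached lookup-table strategy would eliminate.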

[jira] [Commented] (SOLR-9952) S3BackupRepository

2018-11-02 Thread Michael Joyner (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673331#comment-16673331
 ] 

Michael Joyner commented on SOLR-9952:
--

Found one article and removed it.



> S3BackupRepository
> --
>
> Key: SOLR-9952
> URL: https://issues.apache.org/jira/browse/SOLR-9952
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Reporter: Mikhail Khludnev
>Priority: Major
> Attachments: 
> 0001-SOLR-9952-Added-dependencies-for-hadoop-amazon-integ.patch, 
> 0002-SOLR-9952-Added-integration-test-for-checking-backup.patch, Running Solr 
> on S3.pdf, core-site.xml.template
>
>
> I'd like to have a backup repository implementation allows to snapshot to AWS 
> S3






[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-12-ea+12) - Build # 23140 - Still Unstable!

2018-11-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23140/
Java: 64bit/jdk-12-ea+12 -XX:-UseCompressedOops -XX:+UseParallelGC

51 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.io.graph.GraphTest

Error Message:
6 threads leaked from SUITE scope at 
org.apache.solr.client.solrj.io.graph.GraphTest: 1) Thread[id=2105, 
name=zkConnectionManagerCallback-431-thread-1, state=WAITING, 
group=TGRP-GraphTest] at 
java.base@12-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@12-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
 at 
java.base@12-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
 at 
java.base@12-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
 at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
 at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)2) 
Thread[id=2109, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-GraphTest] at java.base@12-ea/java.lang.Thread.sleep(Native 
Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)3) 
Thread[id=2102, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-GraphTest] at java.base@12-ea/java.lang.Thread.sleep(Native 
Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)4) 
Thread[id=2108, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-GraphTest] at java.base@12-ea/java.lang.Thread.sleep(Native 
Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)5) 
Thread[id=2103, 
name=ShortestPathStream-426-thread-1-SendThread(127.0.0.1:44917), 
state=TIMED_WAITING, group=TGRP-GraphTest] at 
java.base@12-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1054)6) 
Thread[id=2104, name=ShortestPathStream-426-thread-1-EventThread, 
state=WAITING, group=TGRP-GraphTest] at 
java.base@12-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@12-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
 at 
java.base@12-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
 at 
java.base@12-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
 at 
app//org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 6 threads leaked from SUITE 
scope at org.apache.solr.client.solrj.io.graph.GraphTest: 
   1) Thread[id=2105, name=zkConnectionManagerCallback-431-thread-1, 
state=WAITING, group=TGRP-GraphTest]
at java.base@12-ea/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@12-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
at 
java.base@12-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
at 
java.base@12-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base@12-ea/java.lang.Thread.run(Thread.java:835)
   2) Thread[id=2109, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-GraphTest]
at java.base@12-ea/java.lang.Thread.sleep(Native Method)
at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
at java.base@12-ea/java.lang.Thread.run(Thread.java:835)
   3) Thread[id=2102, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-GraphTest]
at java.base@12-ea/java.lang.Thread.sleep(Native Method)
at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
at java.base@12-ea/java.lang.Thread.run(Thread.java:835)
   4) Thread[id=2108, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-GraphTest]
at java.base@12-ea/java.lang.Thread.sleep(Native 

Re: lucene-solr:jira/gradle: Parallel running tests

2018-11-02 Thread Erick Erickson
It's not necessary to make a pull request, especially if the change doesn't
need much collaboration. It's perfectly acceptable to make a patch and
attach it to the JIRA like the old days. Whichever you're most
comfortable with.

You've probably inferred that I'm one of the folks that had to be
dragged kicking and screaming into the modern Git days. ;)

Erick
On Fri, Nov 2, 2018 at 8:43 AM David Smiley  wrote:
>
> There's no real standard; just people doing what they like and observing what 
> others do.
>
> Note that commits to branches following the pattern (lucene|solr).*  (i.e. 
> that which start with "lucene" or "solr") will *not* get an automated comment 
> on corresponding JIRA issues.  All others continue to.  ASF infra got this 
> done for us: https://issues.apache.org/jira/browse/INFRA-11198
>
> I recommend you start a branch with "solr" or "SOLR" if you are going to work 
> on a Solr issue.  This way if you merge in changes from master, you won't 
> spam the related issues with comments.
>
> ~ David
>
>
> On Fri, Nov 2, 2018 at 7:46 AM Gus Heck  wrote:
>>
>> I'm curious about the branch naming here. I notice this is jira/ and there 
>> are several other such heads in the repository. What's the convention or 
>> significance here for this jira/ prefix?
>>
>> On Fri, Nov 2, 2018 at 6:12 AM  wrote:
>>>
>>> Repository: lucene-solr
>>> Updated Branches:
>>>   refs/heads/jira/gradle c9cb4fe96 -> 4a12fffb7
>>>
>>>
>>> Parallel running tests
>>>
>>>
>>> Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
>>> Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/4a12fffb
>>> Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/4a12fffb
>>> Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/4a12fffb
>>>
>>> Branch: refs/heads/jira/gradle
>>> Commit: 4a12fffb751078c2dfdf427617dd5ed9c52c7378
>>> Parents: c9cb4fe
>>> Author: Cao Manh Dat 
>>> Authored: Fri Nov 2 10:11:47 2018 +
>>> Committer: Cao Manh Dat 
>>> Committed: Fri Nov 2 10:11:47 2018 +
>>>
>>> --
>>>  build.gradle | 6 +-
>>>  1 file changed, 5 insertions(+), 1 deletion(-)
>>> --
>>>
>>>
>>> http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/4a12fffb/build.gradle
>>> --
>>> diff --git a/build.gradle b/build.gradle
>>> index df21ce8..27a351d 100644
>>> --- a/build.gradle
>>> +++ b/build.gradle
>>> @@ -30,6 +30,10 @@ subprojects {
>>> systemProperty 'java.security.egd', 'file:/dev/./urandom'
>>> }
>>> }
>>> +   tasks.withType(Test) {
>>> +   maxParallelForks = Runtime.runtime.availableProcessors() / 2
>>> +   }
>>> +
>>>  }
>>>
>>>  // These versions are defined here because they represent
>>> @@ -308,4 +312,4 @@ ext.library = [
>>> xz: "org.tukaani:xz:1.8",
>>> morfologik_ukrainian_search: "ua.net.nlp:morfologik-ukrainian-search:3.9.0",
>>> xercesImpl: "xerces:xercesImpl:2.9.1"
>>> -]
>>> \ No newline at end of file
>>> +]
>>>
>>
>>
>> --
>> http://www.the111shift.com
>
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book: 
> http://www.solrenterprisesearchserver.com

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Tim Allison as a Lucene/Solr committer

2018-11-02 Thread Tim Allison
Thank you, Erick!  And, thank you, team!

A bit about me...

I've been working in natural language processing since 2002. Over the
last 5+ years, I've focused on advanced search and content/metadata
extraction. Many years ago, David Smiley inspired me "to quit forking
and start patching" and, generally, to grow up and participate in open
source communities.  I'm now the chair/V.P. of Apache Tika, a
committer and PMC member on Apache POI and Apache PDFBox, and I'm a
member of the ASF.  I am passionate about relevance
engineering/evaluation, and more generally, about testing and
evaluation to protect our codebases from my code[1][2].  In a former
life, I was a professor of Latin and Ancient Greek[3].

I am so very grateful to receive this honor.

Thank you!

Cheers,

Tim

[1] 
http://openpreservation.org/blog/2016/10/04/apache-tikas-regression-corpus-tika-1302
[2] https://www.youtube.com/playlist?list=PLbzoR-pLrL6pLDCyPxByWQwYTL-JrF5Rp
[3] 
https://books.google.com/books/about/Aeschylean_stylistics.html?id=0wweAQAAMAAJ
On Fri, Nov 2, 2018 at 12:20 PM Erick Erickson  wrote:
>
> Hi all,
>
> Please join me in welcoming Tim Allison as the latest Lucene/Solr committer!
>
> Congratulations and Welcome, Tim!
>
> It's traditional for you to introduce yourself with a brief bio.
>
> Erick
>
>




Re: Welcome Gus Heck as Lucene/Solr committer

2018-11-02 Thread Shalin Shekhar Mangar
Congratulations and welcome Gus!

On Thu, Nov 1, 2018 at 5:52 PM David Smiley wrote:

> Hi all,
>
> Please join me in welcoming Gus Heck as the latest Lucene/Solr committer!
>
> Congratulations and Welcome, Gus!
>
> Gus, it's traditional for you to introduce yourself with a brief bio.
>
> ~ David
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>


-- 
Regards,
Shalin Shekhar Mangar.

