[jira] [Resolved] (SOLR-12601) Refactor the autoscaling package to improve readability

2018-08-01 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-12601.
---
   Resolution: Fixed
Fix Version/s: 7.5
   master (8.0)

> Refactor the autoscaling package to improve readability
> ---
>
> Key: SOLR-12601
> URL: https://issues.apache.org/jira/browse/SOLR-12601
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Minor
> Fix For: master (8.0), 7.5
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12592) Support the cores:'EQUAL' in autoscaling policies

2018-08-01 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566364#comment-16566364
 ] 

ASF subversion and git services commented on SOLR-12592:


Commit 600c15d14e73274d4152e8ef1b8c0d0aae69fd18 in lucene-solr's branch 
refs/heads/branch_7x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=600c15d ]

SOLR-12592: support #EQUAL function in cores in autoscaling policies


> Support the cores:'EQUAL' in autoscaling policies
> -
>
> Key: SOLR-12592
> URL: https://issues.apache.org/jira/browse/SOLR-12592
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>
> It's possible to sort nodes by core count, and the system normally prefers 
> nodes with fewer cores when placing new replicas. However, it may not 
> suggest moving replicas off nodes if the system is already in an 
> unbalanced state.
> The following rule may help achieve such a balanced distribution of cores:
> {code}
> {"cores": "#EQUAL", "node": "#ANY"}
> {code}
> The value of cores is computed as {{total_cores/total_nodes}}. For example, 
> if there are 28 cores in total across 5 nodes, then cores = 28/5 = 5.6, 
> which means a node may have either 5 or 6 cores.
> It's possible that this may conflict with other collection-specific 
> rules such as:
> {code}
> {"replica": "#EQUAL", "node": "#ANY"}
> {code}
> This can be remedied by making the rule non-strict:
> {code}
> {"cores": "#EQUAL", "node": "#ANY", "strict": false}
> {code}
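For illustration, the 5-or-6 bound described above follows from taking the floor and ceiling of the ideal per-node count. A minimal sketch in plain Python — not Solr code; the function name is invented for this example:

```python
import math

def equal_bounds(total_cores: int, total_nodes: int) -> tuple:
    """Per-node core counts a cores:"#EQUAL" rule would allow (illustrative)."""
    ideal = total_cores / total_nodes  # e.g. 28 / 5 = 5.6
    return math.floor(ideal), math.ceil(ideal)

# 28 cores on 5 nodes: each node may hold either 5 or 6 cores
low, high = equal_bounds(28, 5)
print(low, high)  # 5 6
```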






[jira] [Commented] (SOLR-12592) Support the cores:'EQUAL' in autoscaling policies

2018-08-01 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566363#comment-16566363
 ] 

ASF subversion and git services commented on SOLR-12592:


Commit 868e970816d8bb52f138a1181416438c348c750e in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=868e970 ]

SOLR-12592: support #EQUAL function in cores in autoscaling policies


> Support the cores:'EQUAL' in autoscaling policies
> -
>
> Key: SOLR-12592
> URL: https://issues.apache.org/jira/browse/SOLR-12592
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>
> It's possible to sort nodes by core count, and the system normally prefers 
> nodes with fewer cores when placing new replicas. However, it may not 
> suggest moving replicas off nodes if the system is already in an 
> unbalanced state.
> The following rule may help achieve such a balanced distribution of cores:
> {code}
> {"cores": "#EQUAL", "node": "#ANY"}
> {code}
> The value of cores is computed as {{total_cores/total_nodes}}. For example, 
> if there are 28 cores in total across 5 nodes, then cores = 28/5 = 5.6, 
> which means a node may have either 5 or 6 cores.
> It's possible that this may conflict with other collection-specific 
> rules such as:
> {code}
> {"replica": "#EQUAL", "node": "#ANY"}
> {code}
> This can be remedied by making the rule non-strict:
> {code}
> {"cores": "#EQUAL", "node": "#ANY", "strict": false}
> {code}






[JENKINS] Lucene-Solr-BadApples-master-Linux (64bit/jdk-10) - Build # 72 - Still Unstable!

2018-08-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-master-Linux/72/
Java: 64bit/jdk-10 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

8 tests failed.
FAILED:  org.apache.solr.cloud.api.collections.ShardSplitTest.test

Error Message:
Wrong doc count on shard1_0. See SOLR-5309 expected:<305> but was:<308>

Stack Trace:
java.lang.AssertionError: Wrong doc count on shard1_0. See SOLR-5309 
expected:<305> but was:<308>
at 
__randomizedtesting.SeedInfo.seed([A76322DB01101B85:2F371D01AFEC767D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.api.collections.ShardSplitTest.checkDocCountsAndShardStates(ShardSplitTest.java:1010)
at 
org.apache.solr.cloud.api.collections.ShardSplitTest.splitByUniqueKeyTest(ShardSplitTest.java:793)
at 
org.apache.solr.cloud.api.collections.ShardSplitTest.test(ShardSplitTest.java:109)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1008)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Commented] (SOLR-12607) Investigate ShardSplitTest failures

2018-08-01 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566305#comment-16566305
 ] 

ASF subversion and git services commented on SOLR-12607:


Commit c31194e445a883b09d205c5d679ddd88022d19c4 in lucene-solr's branch 
refs/heads/branch_7x from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c31194e ]

SOLR-12607: Fixed two separate bugs in shard splits which can cause data loss. 
The first case is when using TLOG replicas only: the updates forwarded from 
the parent shard leader to the sub-shard leader are written only to the tlog 
and not to the index. If this happens after the buffered updates have been 
replayed, then the updates can never be executed even though they remain in 
the transaction log. The second case is when synchronously forwarding updates 
to the sub-shard leader fails and the underlying errors are not propagated to 
the client.

(cherry picked from commit 259bc2baf7ce58aa0143fa6a8d43da417506cd63)
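As a rough illustration of the second bug class fixed here — errors from synchronous forwarding being swallowed instead of surfaced — consider this hedged sketch in plain Python. It is not Solr's actual code; all names are invented:

```python
def forward_to_subshard(docs, targets, send):
    """Forward updates to each target, propagating failures to the caller.

    Illustrative only: the fix described above ensures that errors from
    synchronous forwarding reach the client instead of being dropped.
    """
    errors = []
    for target in targets:
        for doc in docs:
            try:
                send(target, doc)
            except Exception as exc:
                # Collect per-target failures rather than ignoring them
                errors.append((target, exc))
    if errors:
        # Surface the failure instead of silently reporting success
        raise RuntimeError("forwarding failed for %d update(s)" % len(errors))
```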


> Investigate ShardSplitTest failures
> ---
>
> Key: SOLR-12607
> URL: https://issues.apache.org/jira/browse/SOLR-12607
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0), 7.5
>
>
> There have been many recent ShardSplitTest failures. 
> According to http://fucit.org/solr-jenkins-reports/failure-report.html
> {code}
> Class: org.apache.solr.cloud.api.collections.ShardSplitTest
> Method: testSplitWithChaosMonkey
> Failures: 72.32% (81 / 112)
> Class: org.apache.solr.cloud.api.collections.ShardSplitTest
> Method: test
> Failures: 26.79% (30 / 112)
> {code} 






[jira] [Commented] (SOLR-12607) Investigate ShardSplitTest failures

2018-08-01 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566281#comment-16566281
 ] 

ASF subversion and git services commented on SOLR-12607:


Commit 259bc2baf7ce58aa0143fa6a8d43da417506cd63 in lucene-solr's branch 
refs/heads/master from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=259bc2b ]

SOLR-12607: Fixed two separate bugs in shard splits which can cause data loss. 
The first case is when using TLOG replicas only: the updates forwarded from 
the parent shard leader to the sub-shard leader are written only to the tlog 
and not to the index. If this happens after the buffered updates have been 
replayed, then the updates can never be executed even though they remain in 
the transaction log. The second case is when synchronously forwarding updates 
to the sub-shard leader fails and the underlying errors are not propagated to 
the client.


> Investigate ShardSplitTest failures
> ---
>
> Key: SOLR-12607
> URL: https://issues.apache.org/jira/browse/SOLR-12607
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0), 7.5
>
>
> There have been many recent ShardSplitTest failures. 
> According to http://fucit.org/solr-jenkins-reports/failure-report.html
> {code}
> Class: org.apache.solr.cloud.api.collections.ShardSplitTest
> Method: testSplitWithChaosMonkey
> Failures: 72.32% (81 / 112)
> Class: org.apache.solr.cloud.api.collections.ShardSplitTest
> Method: test
> Failures: 26.79% (30 / 112)
> {code} 






[jira] [Commented] (LUCENE-8435) Add new LatLonShapePolygonQuery

2018-08-01 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566269#comment-16566269
 ] 

ASF subversion and git services commented on LUCENE-8435:
-

Commit d85defbedc54814f01dfc99cc275b563df0cfa3d in lucene-solr's branch 
refs/heads/branch_7x from [~nknize]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d85defb ]

LUCENE-8435: Add new LatLonShapePolygonQuery for querying indexed LatLonShape 
fields by arbitrary polygons


> Add new LatLonShapePolygonQuery 
> 
>
> Key: LUCENE-8435
> URL: https://issues.apache.org/jira/browse/LUCENE-8435
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Nicholas Knize
>Assignee: Nicholas Knize
>Priority: Major
> Attachments: LUCENE-8435.patch, LUCENE-8435.patch, LUCENE-8435.patch
>
>
> This feature will provide the ability to query indexed {{LatLonShape}} fields 
> with an arbitrary polygon. The initial implementation will support 
> {{INTERSECT}} queries only; future enhancements will add other relations 
> (e.g., {{CONTAINS}}, {{WITHIN}}).






[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_172) - Build # 2466 - Unstable!

2018-08-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2466/
Java: 32bit/jdk1.8.0_172 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.cdcr.CdcrBootstrapTest.testBootstrapWithContinousIndexingOnSourceCluster

Error Message:
Document mismatch on target after sync expected:<2000> but was:<1902>

Stack Trace:
java.lang.AssertionError: Document mismatch on target after sync 
expected:<2000> but was:<1902>
at 
__randomizedtesting.SeedInfo.seed([D7614FFF0D149393:32404A6EA422068]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.cdcr.CdcrBootstrapTest.testBootstrapWithContinousIndexingOnSourceCluster(CdcrBootstrapTest.java:296)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 14337 

[jira] [Created] (LUCENE-8442) testPendingDeleteDVGeneration fails with NoSuchFileException

2018-08-01 Thread Nhat Nguyen (JIRA)
Nhat Nguyen created LUCENE-8442:
---

 Summary: testPendingDeleteDVGeneration fails with 
NoSuchFileException
 Key: LUCENE-8442
 URL: https://issues.apache.org/jira/browse/LUCENE-8442
 Project: Lucene - Core
  Issue Type: Test
Affects Versions: master (8.0), 7.5
Reporter: Nhat Nguyen


{noformat}
reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testPendingDeleteDVGeneration -Dtests.seed=EAD8920740472544 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=hu-HU 
-Dtests.timezone=Europe/Nicosia -Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] ERROR   0.05s J3 | TestIndexWriter.testPendingDeleteDVGeneration <<<
   [junit4]> Throwable #1: java.nio.file.NoSuchFileException: 
/var/lib/jenkins/workspace/apache+lucene-solr+master/lucene/build/core/test/J3/temp/lucene.index.TestIndexWriter_EAD8920740472544-001/tempDir-001/_2.fdx
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([EAD8920740472544:6D2DCD1227F98DC4]:0)
   [junit4]>at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
   [junit4]>at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
   [junit4]>at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
   [junit4]>at 
sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
   [junit4]>at 
org.apache.lucene.mockfile.FilterFileSystemProvider.newByteChannel(FilterFileSystemProvider.java:212)
   [junit4]>at 
org.apache.lucene.mockfile.FilterFileSystemProvider.newByteChannel(FilterFileSystemProvider.java:212)
   [junit4]>at 
org.apache.lucene.mockfile.FilterFileSystemProvider.newByteChannel(FilterFileSystemProvider.java:212)
   [junit4]>at 
org.apache.lucene.mockfile.HandleTrackingFS.newByteChannel(HandleTrackingFS.java:240)
   [junit4]>at 
org.apache.lucene.mockfile.FilterFileSystemProvider.newByteChannel(FilterFileSystemProvider.java:212)
   [junit4]>at 
org.apache.lucene.mockfile.HandleTrackingFS.newByteChannel(HandleTrackingFS.java:240)
   [junit4]>at 
org.apache.lucene.mockfile.FilterFileSystemProvider.newByteChannel(FilterFileSystemProvider.java:212)
   [junit4]>at 
org.apache.lucene.mockfile.FilterFileSystemProvider.newByteChannel(FilterFileSystemProvider.java:212)
   [junit4]>at 
org.apache.lucene.mockfile.HandleTrackingFS.newByteChannel(HandleTrackingFS.java:240)
   [junit4]>at java.nio.file.Files.newByteChannel(Files.java:361)
   [junit4]>at java.nio.file.Files.newByteChannel(Files.java:407)
   [junit4]>at 
org.apache.lucene.store.SimpleFSDirectory.openInput(SimpleFSDirectory.java:77)
   [junit4]>at 
org.apache.lucene.index.TestIndexWriter.testPendingDeleteDVGeneration(TestIndexWriter.java:2701)
   [junit4]>at java.lang.Thread.run(Thread.java:748){noformat}






[jira] [Commented] (SOLR-12344) SolrSlf4jReporter doesn't set MDC context

2018-08-01 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566195#comment-16566195
 ] 

Lucene/Solr QA commented on SOLR-12344:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
15s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  1m  4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  1m  4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  1m  4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 42m 44s{color} 
| {color:red} core in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.cloud.cdcr.CdcrBidirectionalTest |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-12344 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12933963/SOLR-12344.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 4.4.0-130-generic #156~14.04.1-Ubuntu SMP Thu 
Jun 14 13:51:47 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 64573c1 |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on July 24 2018 |
| Default Java | 1.8.0_172 |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/156/artifact/out/patch-unit-solr_core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/156/testReport/ |
| modules | C: solr/core U: solr/core |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/156/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> SolrSlf4jReporter doesn't set MDC context
> -
>
> Key: SOLR-12344
> URL: https://issues.apache.org/jira/browse/SOLR-12344
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Varun Thacker
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-12344.patch
>
>
> I setup a slf4j reporter like this on master
> solr.xml
> {code:java}
> <reporter class="org.apache.solr.metrics.reporters.SolrSlf4jReporter">
>   <int name="period">1</int>
>   <str name="filter">UPDATE./update.requestTimes</str>
>   <str name="logger">update_logger</str>
> </reporter>
> {code}
> log4j2.xml
> {code:java}
> <Configuration>
>   <Appenders>
>     <Console name="STDOUT" target="SYSTEM_OUT">
>       <PatternLayout>
>         <Pattern>
>           %-4r [%t] %-5p %c %x [%X{collection} %X{shard} %X{replica} 
> %X{core}] %c; %m%n
>         </Pattern>
>       </PatternLayout>
>     </Console>
>     <RollingFile
>         name="RollingFile"
>         fileName="${sys:solr.log.dir}/solr.log"
>         filePattern="${sys:solr.log.dir}/solr.log.%i" >
>       <PatternLayout>
>         <Pattern>
>           %-5p - %d{yyyy-MM-dd HH:mm:ss.SSS}; [%X{collection} %X{shard} 
> %X{replica} %X{core}] %c; %m%n
>         </Pattern>
>       </PatternLayout>
>     </RollingFile>
>     <RollingFile
>         name="RollingMetricFile"
>         fileName="${sys:solr.log.dir}/solr_metric.log"
>         filePattern="${sys:solr.log.dir}/solr_metric.log.%i" >
>       <PatternLayout>
>         <Pattern>
>           %-5p - %d{yyyy-MM-dd HH:mm:ss.SSS}; [%X{collection} %X{shard} 
> %X{replica} %X{core}] %c; %m%n
>         </Pattern>
>       </PatternLayout>
>     </RollingFile>
>   </Appenders>
>   <Loggers>
>     <Logger name="update_logger" additivity="false">
>       <AppenderRef ref="RollingMetricFile"/>
>     </Logger>
>     <Root>
>       <AppenderRef ref="RollingFile"/>
>       <AppenderRef ref="STDOUT"/>
>     </Root>
>   </Loggers>
> </Configuration>
> {code}
> The output I get from the solr_metric.log file is like this
> {code:java}
> INFO  - 2018-05-11 15:31:16.009; [   ] update_logger; type=TIMER, 
> name=UPDATE./update.requestTimes, count=0, min=0.0, max=0.0, mean=0.0, 
> stddev=0.0, median=0.0, p75=0.0, p95=0.0, 

[GitHub] lucene-solr pull request #429: Accept any key in cluster properties

2018-08-01 Thread jefferyyuan
GitHub user jefferyyuan opened a pull request:

https://github.com/apache/lucene-solr/pull/429

Accept any key in cluster properties



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jefferyyuan/lucene-solr master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/429.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #429


commit bdaa0db2f53e583029af646eb7863e4d4433bd3d
Author: yyuan2 
Date:   2018-08-02T00:45:44Z

Accept any key in cluster properties




---




[jira] [Updated] (SOLR-12612) Accept any key in cluster properties

2018-08-01 Thread jefferyyuan (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jefferyyuan updated SOLR-12612:
---
Description: 
Cluster properties are a good place to store configuration data that is shared 
across the whole cluster: Solr and other (authorized) apps can easily read and 
update them.

 

It would be very useful if we could store extra data in cluster properties, 
which would act as a centralized property-management system between Solr and 
its related apps (such as manager or monitor apps).

 

The change would also be very simple.

We could also require that every extra property start with a prefix such as: extra_

 

PR: https://github.com/apache/lucene-solr/pull/429

 

 

  was:
Cluster properties are a good place to store configuration data that is shared 
across the whole cluster: Solr and other (authorized) apps can easily read and 
update them.

 

It would be very useful if we could store extra data in cluster properties, 
which would act as a centralized property-management system between Solr and 
its related apps (such as manager or monitor apps).

 

The change would also be very simple.

We could also require that every extra property start with a prefix such as: extra_

 

 


> Accept any key in cluster properties
> 
>
> Key: SOLR-12612
> URL: https://issues.apache.org/jira/browse/SOLR-12612
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.4, master (8.0)
>Reporter: jefferyyuan
>Priority: Minor
> Fix For: master (8.0)
>
>
> Cluster properties are a good place to store configuration data that is 
> shared across the whole cluster: Solr and other (authorized) apps can easily 
> read and update them.
>  
> It would be very useful if we could store extra data in cluster properties, 
> which would act as a centralized property-management system between Solr and 
> its related apps (such as manager or monitor apps).
>  
> The change would also be very simple.
> We could also require that every extra property start with a prefix such as: extra_
>  
> PR: https://github.com/apache/lucene-solr/pull/429
>  
>  






[jira] [Created] (SOLR-12612) Accept any key in cluster properties

2018-08-01 Thread jefferyyuan (JIRA)
jefferyyuan created SOLR-12612:
--

 Summary: Accept any key in cluster properties
 Key: SOLR-12612
 URL: https://issues.apache.org/jira/browse/SOLR-12612
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 7.4, master (8.0)
Reporter: jefferyyuan
 Fix For: master (8.0)


Cluster properties are a good place to store configuration data that is shared 
across the whole cluster: Solr and other (authorized) apps can easily read and 
update them.

 

It would be very useful if we could store extra data in cluster properties, 
which would act as a centralized property-management system between Solr and 
its related apps (such as manager or monitor apps).

 

The change would also be very simple.

We could also require that every extra property start with a prefix such as: extra_
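A minimal sketch of the prefix rule proposed here, in plain Python. All names are hypothetical, and the known-property set shown is an illustrative subset, not Solr's actual list:

```python
# Illustrative subset of recognized cluster properties (hypothetical)
KNOWN_CLUSTER_PROPERTIES = {"urlScheme", "maxCoresPerNode", "location"}
EXTRA_PREFIX = "extra_"

def is_allowed_key(key: str) -> bool:
    """Accept keys from the existing whitelist, plus any custom key
    carrying the agreed extra_ prefix."""
    return key in KNOWN_CLUSTER_PROPERTIES or key.startswith(EXTRA_PREFIX)

print(is_allowed_key("extra_monitor_endpoint"))  # True
print(is_allowed_key("monitor_endpoint"))        # False
```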

 

 






[jira] [Commented] (LUCENE-8440) Add support for indexing and searching Line and Point shapes using LatLonShape encoding

2018-08-01 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566161#comment-16566161
 ] 

Lucene/Solr QA commented on LUCENE-8440:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  0m 24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  0m 20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  0m 20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m  
4s{color} | {color:green} core in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
32s{color} | {color:green} sandbox in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | LUCENE-8440 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12933950/LUCENE-8440.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 4.4.0-130-generic #156~14.04.1-Ubuntu SMP Thu 
Jun 14 13:51:47 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-LUCENE-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 64573c1 |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on July 24 2018 |
| Default Java | 1.8.0_172 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/60/testReport/ |
| modules | C: lucene lucene/core lucene/sandbox U: lucene |
| Console output | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/60/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Add support for indexing and searching Line and Point shapes using 
> LatLonShape encoding
> ---
>
> Key: LUCENE-8440
> URL: https://issues.apache.org/jira/browse/LUCENE-8440
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Nicholas Knize
>Priority: Major
> Attachments: LUCENE-8440.patch, LUCENE-8440.patch
>
>
> This feature adds support to {{LatLonShape}} for indexing {{Line}} and 
> {{latitude, longitude}} Point types using the 6-dimension triangle encoding 
> in {{LatLonShape}}. Indexed points and lines will be searchable using 
> {{LatLonShapeBoundingBoxQuery}} and the new {{LatLonShapePolygonQuery}} from 
> LUCENE-8435.
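The idea of reusing a triangle encoding for lower-dimensional shapes can be sketched without Lucene on the classpath: a point becomes a degenerate triangle whose three vertices coincide, and each line segment becomes a degenerate triangle with one duplicated vertex. The sketch below is a conceptual illustration of that decomposition only, not the actual LatLonShape code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class DegenerateTriangles {
    /** A point (lat, lon) as one degenerate triangle: all three vertices equal. */
    public static List<double[]> pointToTriangles(double lat, double lon) {
        return List.of(new double[] {lat, lon, lat, lon, lat, lon});
    }

    /** A line as one degenerate triangle per segment: the third vertex repeats the first. */
    public static List<double[]> lineToTriangles(double[] lats, double[] lons) {
        List<double[]> triangles = new ArrayList<>();
        for (int i = 0; i < lats.length - 1; i++) {
            triangles.add(new double[] {
                lats[i], lons[i], lats[i + 1], lons[i + 1], lats[i], lons[i]});
        }
        return triangles;
    }

    public static void main(String[] args) {
        // A 3-point line yields 2 degenerate triangles of 6 coordinates each.
        List<double[]> t = lineToTriangles(new double[] {0, 1, 2}, new double[] {0, 1, 2});
        System.out.println(t.size());                  // 2
        System.out.println(Arrays.toString(t.get(0))); // [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]
    }
}
```

Because every shape reduces to triangles of six coordinates, the same indexed field and the same bounding-box/polygon query machinery can serve points, lines, and polygons alike.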






[JENKINS] Lucene-Solr-Tests-7.x - Build # 724 - Still Unstable

2018-08-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/724/

1 tests failed.
FAILED:  
org.apache.solr.cloud.LegacyCloudClusterPropTest.testCreateCollectionSwitchLegacyCloud

Error Message:
Could not find collection : legacyFalse

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : legacyFalse
at 
__randomizedtesting.SeedInfo.seed([CD94A67BAC008C6D:1C9354FE080F075F]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:118)
at 
org.apache.solr.cloud.SolrCloudTestCase.getCollectionState(SolrCloudTestCase.java:256)
at 
org.apache.solr.cloud.LegacyCloudClusterPropTest.checkMandatoryProps(LegacyCloudClusterPropTest.java:154)
at 
org.apache.solr.cloud.LegacyCloudClusterPropTest.createAndTest(LegacyCloudClusterPropTest.java:91)
at 
org.apache.solr.cloud.LegacyCloudClusterPropTest.testCreateCollectionSwitchLegacyCloud(LegacyCloudClusterPropTest.java:71)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)

[GitHub] lucene-solr pull request #428: SOLR-12586: deprecate joda-time and use java....

2018-08-01 Thread barrotsteindev
Github user barrotsteindev commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/428#discussion_r207060505
  
--- Diff: solr/core/src/java/org/apache/solr/core/ConfigSetService.java ---
@@ -212,12 +213,12 @@ public SchemaCaching(SolrResourceLoader loader, Path 
configSetBase) {
   super(loader, configSetBase);
 }
 
-public static final DateTimeFormatter cacheKeyFormatter = 
DateTimeFormat.forPattern("yyyyMMddHHmmss");
+public static final DateTimeFormatter cacheKeyFormatter = 
DateTimeFormatter.ofPattern("yyyyMMddHHmmss");
 
 public static String cacheName(Path schemaFile) throws IOException {
   long lastModified = Files.getLastModifiedTime(schemaFile).toMillis();
   return String.format(Locale.ROOT, "%s:%s",
-schemaFile.toString(), 
cacheKeyFormatter.print(lastModified));
+schemaFile.toString(), 
Instant.ofEpochMilli(lastModified).atZone(ZoneId.systemDefault()).format(cacheKeyFormatter));
--- End diff --

I just switched the zone offset to UTC.
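The java.time replacement under discussion can be exercised in isolation. A self-contained sketch of formatting an epoch-millis timestamp with `DateTimeFormatter` at a fixed UTC offset, assuming the `yyyyMMddHHmmss` cache-key pattern (illustrative of the idea, not the exact Solr code):

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class CacheKeyFormat {
    // java.time equivalent of joda-time's DateTimeFormat.forPattern(...)
    private static final DateTimeFormatter CACHE_KEY_FORMATTER =
        DateTimeFormatter.ofPattern("yyyyMMddHHmmss");

    /** Format epoch millis at UTC so the cache key is stable across machines. */
    public static String cacheKey(long epochMillis) {
        return Instant.ofEpochMilli(epochMillis)
            .atZone(ZoneOffset.UTC)
            .format(CACHE_KEY_FORMATTER);
    }

    public static void main(String[] args) {
        System.out.println(cacheKey(0L)); // 19700101000000
    }
}
```

Pinning the zone to `ZoneOffset.UTC` rather than `ZoneId.systemDefault()` is what makes the formatted key independent of the host's timezone configuration.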


---




[jira] [Comment Edited] (SOLR-12572) Reuse fieldvalues computed while sorting at writing in ExportWriter

2018-08-01 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566117#comment-16566117
 ] 

Varun Thacker edited comment on SOLR-12572 at 8/1/18 10:48 PM:
---

Hi Amrit,

I've taken your last patch and cleaned it up a little. I think the patch is 
in good shape.

I will run some correctness tests on this patch later today. Let's also capture 
some perf numbers.

I'm thinking of indexing 25M docs with just "id" and then executing the query 
{{q=match_all&sort=id desc&fl=id}} with and without the patch. This will 
measure how much speed improvement we get when a field is reused in both sort 
and fl. If the numbers look good, I'd imagine more sort and fl fields will bring 
larger improvements.


was (Author: varunthacker):
Hi Amrit,

I've taken your last patch and cleaned it up a little. I think the patch is 
looking in good shape. 

Will run some tests for correctness on this patch later today. Let's capture 
some perf numbers

I'm thinking of indexing 25M docs with just "id" and then executing this query 
: {{q=match_all=id desc=id}}  with and without the patch. This will 
test us when a field is reused in sort and fl how much speed improvement do we 
get.  If the numbers look good I'd imagine more sort and fl fields will bring 
more improvements. 

> Reuse fieldvalues computed while sorting at writing in ExportWriter
> ---
>
> Key: SOLR-12572
> URL: https://issues.apache.org/jira/browse/SOLR-12572
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
>Priority: Minor
> Attachments: SOLR-12572.patch, SOLR-12572.patch, SOLR-12572.patch, 
> SOLR-12572.patch, SOLR-12572.patch, SOLR-12572.patch
>
>
> While exporting results through the "/export" handler,
> {code:java}
> http://localhost:8983/solr/core_name/export?q=my-query&sort=severity+desc,timestamp+desc&fl=severity,timestamp,msg
> {code}
> doc-values are sought for all the {{sort}} fields defined (in this example 
> 'severity', 'timestamp'). When we stream out docs, we again make doc-value 
> seeks against the {{fl}} fields ('severity', 'timestamp', 'msg').
> In the most common use-cases {{fl}} equals the {{sort}} fields, or at least the 
> sort fields are a subset of the {{fl}} fields, so if we can *pre-collect* the 
> values while sorting, we can reduce the doc-value seeks, potentially bringing a 
> *speed improvement*.
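The pre-collect optimization described above can be modeled with plain Java: count the per-document value lookups with and without caching the sort-field values for reuse at write time. This is a toy model of the idea only, not ExportWriter's actual code:

```java
import java.util.HashMap;
import java.util.Map;

public class PreCollectSketch {
    static int seeks = 0; // counts simulated doc-value seeks

    /** Simulated doc-value seek: every call costs one seek. */
    static long docValue(int doc, String field) {
        seeks++;
        return (long) doc * field.hashCode();
    }

    /** Sort pass that caches each (field -> value) pair it reads for the doc. */
    static Map<String, Long> sortPass(int doc, String[] sortFields) {
        Map<String, Long> cached = new HashMap<>();
        for (String f : sortFields) {
            cached.put(f, docValue(doc, f));
        }
        return cached;
    }

    /** Write pass: reuse cached sort values, seek only for uncached fl fields. */
    static void writePass(int doc, String[] flFields, Map<String, Long> cached) {
        for (String f : flFields) {
            if (!cached.containsKey(f)) {
                cached.put(f, docValue(doc, f));
            }
        }
    }

    public static void main(String[] args) {
        String[] sort = {"severity", "timestamp"};
        String[] fl = {"severity", "timestamp", "msg"};
        for (int doc = 0; doc < 1000; doc++) {
            Map<String, Long> cached = sortPass(doc, sort);
            writePass(doc, fl, cached);
        }
        // Without reuse this would be 5 seeks per doc (2 sort + 3 fl); with reuse it is 3.
        System.out.println(seeks); // 3000
    }
}
```

The saving grows with the overlap between the sort and fl field sets, which matches the expectation that more shared fields bring larger improvements.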






[jira] [Commented] (SOLR-12572) Reuse fieldvalues computed while sorting at writing in ExportWriter

2018-08-01 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566117#comment-16566117
 ] 

Varun Thacker commented on SOLR-12572:
--

Hi Amrit,

I've taken your last patch and cleaned it up a little. I think the patch is 
in good shape.

I will run some correctness tests on this patch later today. Let's also capture 
some perf numbers.

I'm thinking of indexing 25M docs with just "id" and then executing the query 
{{q=match_all&sort=id desc&fl=id}} with and without the patch. This will 
measure how much speed improvement we get when a field is reused in both sort 
and fl. If the numbers look good, I'd imagine more sort and fl fields will bring 
further improvements.

> Reuse fieldvalues computed while sorting at writing in ExportWriter
> ---
>
> Key: SOLR-12572
> URL: https://issues.apache.org/jira/browse/SOLR-12572
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
>Priority: Minor
> Attachments: SOLR-12572.patch, SOLR-12572.patch, SOLR-12572.patch, 
> SOLR-12572.patch, SOLR-12572.patch, SOLR-12572.patch
>
>
> While exporting results through the "/export" handler,
> {code:java}
> http://localhost:8983/solr/core_name/export?q=my-query&sort=severity+desc,timestamp+desc&fl=severity,timestamp,msg
> {code}
> doc-values are sought for all the {{sort}} fields defined (in this example 
> 'severity', 'timestamp'). When we stream out docs, we again make doc-value 
> seeks against the {{fl}} fields ('severity', 'timestamp', 'msg').
> In the most common use-cases {{fl}} equals the {{sort}} fields, or at least the 
> sort fields are a subset of the {{fl}} fields, so if we can *pre-collect* the 
> values while sorting, we can reduce the doc-value seeks, potentially bringing a 
> *speed improvement*.






[jira] [Updated] (SOLR-12572) Reuse fieldvalues computed while sorting at writing in ExportWriter

2018-08-01 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12572:
-
Attachment: SOLR-12572.patch

> Reuse fieldvalues computed while sorting at writing in ExportWriter
> ---
>
> Key: SOLR-12572
> URL: https://issues.apache.org/jira/browse/SOLR-12572
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
>Priority: Minor
> Attachments: SOLR-12572.patch, SOLR-12572.patch, SOLR-12572.patch, 
> SOLR-12572.patch, SOLR-12572.patch, SOLR-12572.patch
>
>
> While exporting results through the "/export" handler,
> {code:java}
> http://localhost:8983/solr/core_name/export?q=my-query&sort=severity+desc,timestamp+desc&fl=severity,timestamp,msg
> {code}
> doc-values are sought for all the {{sort}} fields defined (in this example 
> 'severity', 'timestamp'). When we stream out docs, we again make doc-value 
> seeks against the {{fl}} fields ('severity', 'timestamp', 'msg').
> In the most common use-cases {{fl}} equals the {{sort}} fields, or at least the 
> sort fields are a subset of the {{fl}} fields, so if we can *pre-collect* the 
> values while sorting, we can reduce the doc-value seeks, potentially bringing a 
> *speed improvement*.






[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 113 - Still Unstable

2018-08-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/113/

2 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.sim.TestLargeCluster.testNodeLost

Error Message:
did not finish processing all events in time: started=5, finished=4

Stack Trace:
java.lang.AssertionError: did not finish processing all events in time: 
started=5, finished=4
at 
__randomizedtesting.SeedInfo.seed([F8EA4327309875AB:47FF8DD9B372102D]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.autoscaling.sim.TestLargeCluster.doTestNodeLost(TestLargeCluster.java:525)
at 
org.apache.solr.cloud.autoscaling.sim.TestLargeCluster.testNodeLost(TestLargeCluster.java:378)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitWithChaosMonkey

Error Message:
We think that split was successful but sub-shard states were not updated even 
after 2 minutes.

Stack 

[jira] [Comment Edited] (SOLR-8207) Modernise cloud tab on Admin UI

2018-08-01 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566036#comment-16566036
 ] 

Jan Høydahl edited comment on SOLR-8207 at 8/1/18 9:47 PM:
---

Starting on the RefGuide work for this patch.
 * I think there is consensus to rename "Cloud" tab as "Cluster". Shout out if 
we should wait until later for this.
 ** I am not renaming the URL, e.g. it will continue to have the link 
[http://localhost:8983/solr/#/~cloud]
 ** Should we rename refguide file {{cloud-screens.adoc}} as well, or just give 
it a new title "Cluster screens", and fix other places in the guide mentioning 
it by its old name?
 ** Are there instructions somewhere to produce the cluster state needed to 
replicate the "Graph" and "Tree" screenshots now that the tab name changes?
 ** Now that we call the tab "Cluster", would it make more sense to move the 
"Cluster Suggestions" tab (autoscaling) from top-level to a sub tab of 
"Cluster"?
 * Also consensus to remove the "Radial" graph. I'll remove that screenshot 
from the guide and any other mention.

First draft of refguide patch attached.


was (Author: janhoy):
Starting on the RefGuide work for this patch.
 * I think there is consensus to rename "Cloud" tab as "Cluster". Shout out if 
we should wait until later for this.
 ** I am not renaming the URL, e.g. it will continue to have the link 
[http://localhost:8983/solr/#/~cloud]
 ** Should we rename refguide file {{cloud-screens.adoc}} as well, or just give 
it a new title "Cluster screens", and fix other places in the guide mentioning 
it by its old name?
 ** Are there instructions somewhere to produce the cluster state needed to 
replicate the "Graph" and "Tree" screenshots now that the tab name changes?
 ** Now that we call the tab "Cluster", would it make more sense to move the 
"Cluster Suggestions" tab (autoscaling) from top-level to a sub tab of 
"Cluster"?
 * Also consensus to remove the "Radial" graph. I'll remove that screenshot 
from the guide and any other mention.

> Modernise cloud tab on Admin UI
> ---
>
> Key: SOLR-8207
> URL: https://issues.apache.org/jira/browse/SOLR-8207
> Project: Solr
>  Issue Type: Improvement
>  Components: Admin UI
>Affects Versions: 5.3
>Reporter: Upayavira
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-8207-refguide.patch, node-compact.png, 
> node-details.png, node-hostcolumn.png, node-toggle-row-numdocs.png, 
> nodes-tab-real.png, nodes-tab.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The various sub-tabs of the "Cloud tab" were designed before anyone was 
> making real use of SolrCloud, and when we didn't really know the use-cases we 
> would need to support. I would argue that, whilst they are pretty (and 
> clever) they aren't really fit for purpose (with the exception of tree view).
> Issues:
> * Radial view doesn't scale beyond a small number of nodes/collections
> * Paging on the graph view is based on collections - so a collection with 
> many replicas won't be subject to pagination
> * The Dump feature is kinda redundant and should be removed
> * There is now a major overlap in functionality with the new Collections tab
> What I'd propose is that we:
>  * promote the tree tab to top level
>  * remove the graph views and the dump tab
>  * add a new Nodes tab
> This nodes tab would complement the collections tab - showing nodes, and 
> their associated replicas/collections. From this view, it would be possible 
> to add/remove replicas and to see the status of nodes. It would also be 
> possible to filter nodes by status: "show me only up nodes", "show me nodes 
> that are in trouble", "show me nodes that have leaders on them", etc.
> Presumably, if we have APIs to support it, we might have a "decommission 
> node" option, that would ensure that no replicas on this node are leaders, 
> and then remove all replicas from the node, ready for it to be removed from 
> the cluster.






[jira] [Updated] (SOLR-8207) Modernise cloud tab on Admin UI

2018-08-01 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-8207:
--
Attachment: SOLR-8207-refguide.patch

> Modernise cloud tab on Admin UI
> ---
>
> Key: SOLR-8207
> URL: https://issues.apache.org/jira/browse/SOLR-8207
> Project: Solr
>  Issue Type: Improvement
>  Components: Admin UI
>Affects Versions: 5.3
>Reporter: Upayavira
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-8207-refguide.patch, node-compact.png, 
> node-details.png, node-hostcolumn.png, node-toggle-row-numdocs.png, 
> nodes-tab-real.png, nodes-tab.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The various sub-tabs of the "Cloud tab" were designed before anyone was 
> making real use of SolrCloud, and when we didn't really know the use-cases we 
> would need to support. I would argue that, whilst they are pretty (and 
> clever) they aren't really fit for purpose (with the exception of tree view).
> Issues:
> * Radial view doesn't scale beyond a small number of nodes/collections
> * Paging on the graph view is based on collections - so a collection with 
> many replicas won't be subject to pagination
> * The Dump feature is kinda redundant and should be removed
> * There is now a major overlap in functionality with the new Collections tab
> What I'd propose is that we:
>  * promote the tree tab to top level
>  * remove the graph views and the dump tab
>  * add a new Nodes tab
> This nodes tab would complement the collections tab - showing nodes, and 
> their associated replicas/collections. From this view, it would be possible 
> to add/remove replicas and to see the status of nodes. It would also be 
> possible to filter nodes by status: "show me only up nodes", "show me nodes 
> that are in trouble", "show me nodes that have leaders on them", etc.
> Presumably, if we have APIs to support it, we might have a "decommission 
> node" option, that would ensure that no replicas on this node are leaders, 
> and then remove all replicas from the node, ready for it to be removed from 
> the cluster.






[jira] [Commented] (SOLR-8207) Modernise cloud tab on Admin UI

2018-08-01 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566036#comment-16566036
 ] 

Jan Høydahl commented on SOLR-8207:
---

Starting on the RefGuide work for this patch.
 * I think there is consensus to rename "Cloud" tab as "Cluster". Shout out if 
we should wait until later for this.
 ** I am not renaming the URL, e.g. it will continue to have the link 
[http://localhost:8983/solr/#/~cloud]
 ** Should we rename refguide file {{cloud-screens.adoc}} as well, or just give 
it a new title "Cluster screens", and fix other places in the guide mentioning 
it by its old name?
 ** Are there instructions somewhere to produce the cluster state needed to 
replicate the "Graph" and "Tree" screenshots now that the tab name changes?
 ** Now that we call the tab "Cluster", would it make more sense to move the 
"Cluster Suggestions" tab (autoscaling) from top-level to a sub tab of 
"Cluster"?
 * Also consensus to remove the "Radial" graph. I'll remove that screenshot 
from the guide and any other mention.

> Modernise cloud tab on Admin UI
> ---
>
> Key: SOLR-8207
> URL: https://issues.apache.org/jira/browse/SOLR-8207
> Project: Solr
>  Issue Type: Improvement
>  Components: Admin UI
>Affects Versions: 5.3
>Reporter: Upayavira
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-8207-refguide.patch, node-compact.png, 
> node-details.png, node-hostcolumn.png, node-toggle-row-numdocs.png, 
> nodes-tab-real.png, nodes-tab.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The various sub-tabs of the "Cloud tab" were designed before anyone was 
> making real use of SolrCloud, and when we didn't really know the use-cases we 
> would need to support. I would argue that, whilst they are pretty (and 
> clever) they aren't really fit for purpose (with the exception of tree view).
> Issues:
> * Radial view doesn't scale beyond a small number of nodes/collections
> * Paging on the graph view is based on collections - so a collection with 
> many replicas won't be subject to pagination
> * The Dump feature is kinda redundant and should be removed
> * There is now a major overlap in functionality with the new Collections tab
> What I'd propose is that we:
>  * promote the tree tab to top level
>  * remove the graph views and the dump tab
>  * add a new Nodes tab
> This nodes tab would complement the collections tab - showing nodes, and 
> their associated replicas/collections. From this view, it would be possible 
> to add/remove replicas and to see the status of nodes. It would also be 
> possible to filter nodes by status: "show me only up nodes", "show me nodes 
> that are in trouble", "show me nodes that have leaders on them", etc.
> Presumably, if we have APIs to support it, we might have a "decommission 
> node" option, that would ensure that no replicas on this node are leaders, 
> and then remove all replicas from the node, ready for it to be removed from 
> the cluster.






[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-11-ea+24) - Build # 22578 - Still Unstable!

2018-08-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22578/
Java: 64bit/jdk-11-ea+24 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

45 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.DocValuesNotIndexedTest

Error Message:
Collection not found: dv_coll

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: dv_coll
at __randomizedtesting.SeedInfo.seed([65A314E6E4E82E6D]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:853)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.cloud.DocValuesNotIndexedTest.createCluster(DocValuesNotIndexedTest.java:154)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.DocValuesNotIndexedTest

Error Message:
Collection not found: dv_coll

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: dv_coll
at __randomizedtesting.SeedInfo.seed([65A314E6E4E82E6D]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:853)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.cloud.DocValuesNotIndexedTest.createCluster(DocValuesNotIndexedTest.java:154)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 

[jira] [Commented] (SOLR-12611) Add version information to thread dump

2018-08-01 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565979#comment-16565979
 ] 

Shawn Heisey commented on SOLR-12611:
-

Maybe I should combine the two approaches: a boolean to control exit, plus the 
wait() call. That would deal with any possible spurious problems -- interrupts or 
wakeups -- and also avoid the inefficiency of lots of short sleeps.

Testing this now, will put up a new patch if I see signs of success.
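
One way to sketch that combination (purely illustrative; the class and method names here are made up and this is not the attached patch): the do-nothing thread parks in wait() behind a boolean flag, so a spurious wakeup just re-checks the flag and waits again, while shutdown flips the flag and interrupts once.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch of the combined approach: a do-nothing thread whose
// name carries version info, parked in wait() behind a boolean so that both
// spurious wakeups and stray interrupts are tolerated.
public class VersionThread {
    private final AtomicBoolean run = new AtomicBoolean(true);
    private final Object monitor = new Object();
    private final Thread thread;

    public VersionThread(String versionInfo) {
        thread = new Thread(() -> {
            synchronized (monitor) {
                while (run.get()) {          // guards against spurious wakeups
                    try {
                        monitor.wait();      // no CPU used while parked
                    } catch (InterruptedException e) {
                        // fall through; the loop exits once run is false
                    }
                }
            }
        }, "solr-version-" + versionInfo);
        thread.setDaemon(true);
        thread.start();
    }

    public void shutdown() throws InterruptedException {
        run.set(false);
        thread.interrupt();   // break out of wait()
        thread.join(1000);
    }

    public boolean isAlive() {
        return thread.isAlive();
    }
}
```

Because the flag is checked before every wait(), an interrupt that arrives before the thread reaches wait() still causes a prompt exit.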

> Add version information to thread dump
> --
>
> Key: SOLR-12611
> URL: https://issues.apache.org/jira/browse/SOLR-12611
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.4
>Reporter: Shawn Heisey
>Priority: Trivial
> Attachments: SOLR-12611.patch
>
>
> Thread dumps contain stacktrace info.  Without knowing the Solr version, it 
> can be difficult to compare stacktraces to source code.
> If exact version information is available in the thread dump, it will be 
> possible to look at source code to understand stacktrace information.  If 
> *full* version information is present, then it would even be possible to 
> learn whether the user is running an official binary build or if they have 
> built Solr themselves.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8435) Add new LatLonShapePolygonQuery

2018-08-01 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565980#comment-16565980
 ] 

Lucene/Solr QA commented on LUCENE-8435:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} LUCENE-8435 does not apply to master. Rebase required? Wrong 
Branch? See 
https://wiki.apache.org/lucene-java/HowToContribute#Contributing_your_work for 
help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | LUCENE-8435 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12933939/LUCENE-8435.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/59/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Add new LatLonShapePolygonQuery 
> 
>
> Key: LUCENE-8435
> URL: https://issues.apache.org/jira/browse/LUCENE-8435
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Nicholas Knize
>Assignee: Nicholas Knize
>Priority: Major
> Attachments: LUCENE-8435.patch, LUCENE-8435.patch, LUCENE-8435.patch
>
>
> This feature will provide the ability to query indexed {{LatLonShape}} fields 
> with an arbitrary polygon. Initial implementation will support {{INTERSECT}} 
> queries only and future enhancements will add other relations (e.g., 
> {{CONTAINS}}, {{WITHIN}})



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12572) Reuse fieldvalues computed while sorting at writing in ExportWriter

2018-08-01 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-12572:

Attachment: SOLR-12572.patch

> Reuse fieldvalues computed while sorting at writing in ExportWriter
> ---
>
> Key: SOLR-12572
> URL: https://issues.apache.org/jira/browse/SOLR-12572
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
>Priority: Minor
> Attachments: SOLR-12572.patch, SOLR-12572.patch, SOLR-12572.patch, 
> SOLR-12572.patch, SOLR-12572.patch
>
>
> While exporting result through "/export" handler,
> {code:java}
> http://localhost:8983/solr/core_name/export?q=my-query=severity+desc,timestamp+desc=severity,timestamp,msg
> {code}
> Doc-values are sought for all the {{sort}} fields defined (in this example 
> 'severity, 'timestamp'). When we stream out docs we again make doc-value 
> seeks against the {{fl}} fields ('severity','timestamp','msg') . 
> In the most common use-cases we have {{fl = sort}} fields, or at least the 
> sort fields are a subset of the {{fl}} fields, so if we can *pre-collect* the 
> values while sorting, we can reduce the doc-value seeks, potentially bringing 
> a *speed improvement*.
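
As an illustration of the pre-collect idea (hypothetical classes, not the actual ExportWriter code): values read for the sort fields are remembered per document during the sort phase, so the write phase only pays a doc-values seek for {{fl}} fields outside the sort set.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only -- all class and method names here are invented.
class PreCollectingExporter {
    // docId -> (field -> value) captured during the sort phase
    private final Map<Integer, Map<String, Object>> collected = new HashMap<>();

    Object readDocValue(int docId, String field) {
        // stand-in for a real doc-values lookup
        return field + "#" + docId;
    }

    Comparable<?> sortKey(int docId, String sortField) {
        Object v = readDocValue(docId, sortField);
        // pre-collect: remember the value so the write phase can reuse it
        collected.computeIfAbsent(docId, d -> new HashMap<>()).put(sortField, v);
        return v.toString();
    }

    Object writeField(int docId, String field) {
        Map<String, Object> cached = collected.get(docId);
        if (cached != null && cached.containsKey(field)) {
            return cached.get(field);       // reuse: no second doc-values seek
        }
        return readDocValue(docId, field);  // only non-sort fl fields pay a seek
    }
}
```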



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8207) Modernise cloud tab on Admin UI

2018-08-01 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565951#comment-16565951
 ] 

Jan Høydahl commented on SOLR-8207:
---

Thanks for the feedback. I intend to commit to master ASAP and then get it into 
7.5.

If anyone has time to look at the code in {{AdminHandlersProxy}}, especially the 
security aspects, that would be great. Here's an outline of the logic; is it 
waterproof?
 # If the {{'nodes'}} parameter is not present in a call to systemInfo and 
metrics handler, then the logic is exactly as before.
 # If the {{'nodes'}} param is there, then the {{AdminHandlersProxy}} code is 
executed, parsing the nodes string as a comma-separated list of node names
 # If any node name is malformed, we throw an exception. We also exit if one of 
the node names does not exist in live_nodes from ZK.
 # Then the request is fanned-out by AdminHandlersProxy to all nodes in the 
list and returned in a combined response by Admin UI.
 # There's no upper bound on the number of nodes that can be requested at a 
time, but typically it will be 10, only the ones rendered per page. If 
{{nodes=all}} is specified, then all live_nodes are consulted. Would it make 
sense to limit the number of nodes in some way? There is a 10s timeout for each 
request, and the worst thing that could happen in a system with a huge number of 
nodes is that things take too much time or time out.

I'd also like feedback on the approach of issuing parallel sub-queries to all the 
nodes in a loop using Futures. See the method {{AdminHandlersProxy#callRemoteNode}}, 
which constructs a new SolrClient per sub-request:
{code:java}
HttpSolrClient solr = new HttpSolrClient.Builder(baseUrl.toString()).build();
{code}
There is no way to inject an arbitrary URL in there from the API. I tested with 
basic auth enabled and it seemed to work, which suggests that the sub-requests 
use PKI authentication or something similar. Anything that looks shaky?
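
A rough sketch of that fan-out pattern in generic Java (hypothetical names, not the actual {{AdminHandlersProxy}} code): one Future per node, each awaited with a bounded timeout so a single slow node cannot hang the combined response.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.function.Function;

// Illustrative fan-out: submit one task per node, then collect each result
// with a per-node timeout, recording errors instead of failing the whole call.
class NodeFanOut {
    static Map<String, String> callAll(List<String> nodeNames,
                                       Function<String, String> callNode,
                                       long timeoutSeconds) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(Math.min(nodeNames.size(), 10));
        Map<String, Future<String>> pending = new LinkedHashMap<>();
        for (String node : nodeNames) {
            pending.put(node, pool.submit(() -> callNode.apply(node)));
        }
        Map<String, String> combined = new LinkedHashMap<>();
        for (Map.Entry<String, Future<String>> e : pending.entrySet()) {
            try {
                combined.put(e.getKey(), e.getValue().get(timeoutSeconds, TimeUnit.SECONDS));
            } catch (ExecutionException | TimeoutException ex) {
                combined.put(e.getKey(), "error: " + ex.getClass().getSimpleName());
            }
        }
        pool.shutdownNow();
        return combined;
    }
}
```

With this shape, the total wall time is bounded by roughly the timeout rather than by the sum of all sub-request latencies.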

> Modernise cloud tab on Admin UI
> ---
>
> Key: SOLR-8207
> URL: https://issues.apache.org/jira/browse/SOLR-8207
> Project: Solr
>  Issue Type: Improvement
>  Components: Admin UI
>Affects Versions: 5.3
>Reporter: Upayavira
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: node-compact.png, node-details.png, node-hostcolumn.png, 
> node-toggle-row-numdocs.png, nodes-tab-real.png, nodes-tab.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The various sub-tabs of the "Cloud tab" were designed before anyone was 
> making real use of SolrCloud, and when we didn't really know the use-cases we 
> would need to support. I would argue that, whilst they are pretty (and 
> clever) they aren't really fit for purpose (with the exception of tree view).
> Issues:
> * Radial view doesn't scale beyond a small number of nodes/collections
> * Paging on the graph view is based on collections - so a collection with 
> many replicas won't be subject to pagination
> * The Dump feature is kinda redundant and should be removed
> * There is now a major overlap in functionality with the new Collections tab
> What I'd propose is that we:
>  * promote the tree tab to top level
>  * remove the graph views and the dump tab
>  * add a new Nodes tab
> This nodes tab would complement the collections tab - showing nodes, and 
> their associated replicas/collections. From this view, it would be possible 
> to add/remove replicas and to see the status of nodes. It would also be 
> possible to filter nodes by status: "show me only up nodes", "show me nodes 
> that are in trouble", "show me nodes that have leaders on them", etc.
> Presumably, if we have APIs to support it, we might have a "decommission 
> node" option, that would ensure that no replicas on this node are leaders, 
> and then remove all replicas from the node, ready for it to be removed from 
> the cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12611) Add version information to thread dump

2018-08-01 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565935#comment-16565935
 ] 

Shawn Heisey edited comment on SOLR-12611 at 8/1/18 8:31 PM:
-

Thanks for looking, [~dweiss]!

I'm not well-versed in how to properly handle this kind of code.  I think your 
change would be easy, but if you could detail it, that would be appreciated.

I did have a different idea that I considered, which might be even more 
bulletproof, but it's probably less efficient.  It would involve an 
AtomicBoolean object and this code in the thread:

{code:java}
  while (versionThreadRun.get()) {
try {
  Thread.sleep(100);
} catch (InterruptedException e) {
}
  }
{code}

I figure there are two criteria to satisfy with the "do nothing" 
implementation, and I'm not sure what the best options are: 1) Code must be 
bulletproof. 2) Code must use as little CPU and memory as possible.



was (Author: elyograg):
Thanks for looking, [~dweiss]!

I'm not well-versed in how to properly handle this kind of code.  Your change 
would be easy.

I did have a different idea that I considered, which might be even more 
bulletproof, but it's probably less efficient.  It would involve an 
AtomicBoolean object and this code in the thread:

{code:java}
  while (versionThreadRun.get()) {
try {
  Thread.sleep(100);
} catch (InterruptedException e) {
}
  }
{code}

I figure there are two criteria to satisfy with the "do nothing" 
implementation, and I'm not sure what the best options are: 1) Code must be 
bulletproof. 2) Code must use as little CPU and memory as possible.


> Add version information to thread dump
> --
>
> Key: SOLR-12611
> URL: https://issues.apache.org/jira/browse/SOLR-12611
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.4
>Reporter: Shawn Heisey
>Priority: Trivial
> Attachments: SOLR-12611.patch
>
>
> Thread dumps contain stacktrace info.  Without knowing the Solr version, it 
> can be difficult to compare stacktraces to source code.
> If exact version information is available in the thread dump, it will be 
> possible to look at source code to understand stacktrace information.  If 
> *full* version information is present, then it would even be possible to 
> learn whether the user is running an official binary build or if they have 
> built Solr themselves.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12611) Add version information to thread dump

2018-08-01 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565935#comment-16565935
 ] 

Shawn Heisey commented on SOLR-12611:
-

Thanks for looking, [~dweiss]!

I'm not well-versed in how to properly handle this kind of code.  Your change 
would be easy.

I did have a different idea that I considered, which might be even more 
bulletproof, but it's probably less efficient.  It would involve an 
AtomicBoolean object and this code in the thread:

{code:java}
  while (versionThreadRun.get()) {
try {
  Thread.sleep(100);
} catch (InterruptedException e) {
}
  }
{code}

I figure there are two criteria to satisfy with the "do nothing" 
implementation, and I'm not sure what the best options are: 1) Code must be 
bulletproof. 2) Code must use as little CPU and memory as possible.


> Add version information to thread dump
> --
>
> Key: SOLR-12611
> URL: https://issues.apache.org/jira/browse/SOLR-12611
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.4
>Reporter: Shawn Heisey
>Priority: Trivial
> Attachments: SOLR-12611.patch
>
>
> Thread dumps contain stacktrace info.  Without knowing the Solr version, it 
> can be difficult to compare stacktraces to source code.
> If exact version information is available in the thread dump, it will be 
> possible to look at source code to understand stacktrace information.  If 
> *full* version information is present, then it would even be possible to 
> learn whether the user is running an official binary build or if they have 
> built Solr themselves.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #428: SOLR-12586: deprecate joda-time and use java....

2018-08-01 Thread barrotsteindev
Github user barrotsteindev commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/428#discussion_r207019625
  
--- Diff: solr/core/src/java/org/apache/solr/core/ConfigSetService.java ---
@@ -212,12 +213,12 @@ public SchemaCaching(SolrResourceLoader loader, Path 
configSetBase) {
   super(loader, configSetBase);
 }
 
-public static final DateTimeFormatter cacheKeyFormatter = 
DateTimeFormat.forPattern("MMddHHmmss");
+public static final DateTimeFormatter cacheKeyFormatter = 
DateTimeFormatter.ofPattern("MMddHHmmss");
 
 public static String cacheName(Path schemaFile) throws IOException {
   long lastModified = Files.getLastModifiedTime(schemaFile).toMillis();
   return String.format(Locale.ROOT, "%s:%s",
-schemaFile.toString(), 
cacheKeyFormatter.print(lastModified));
+schemaFile.toString(), 
Instant.ofEpochMilli(lastModified).atZone(ZoneId.systemDefault()).format(cacheKeyFormatter));
--- End diff --

Sure thing, I'll fix it ASAP.
I think this was my mistake; I simply overlooked that. It shouldn't cause 
any problems, as long as the addschema test passes.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #428: SOLR-12586: deprecate joda-time and use java....

2018-08-01 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/428#discussion_r207018765
  
--- Diff: solr/core/src/java/org/apache/solr/core/ConfigSetService.java ---
@@ -212,12 +213,12 @@ public SchemaCaching(SolrResourceLoader loader, Path 
configSetBase) {
   super(loader, configSetBase);
 }
 
-public static final DateTimeFormatter cacheKeyFormatter = 
DateTimeFormat.forPattern("MMddHHmmss");
+public static final DateTimeFormatter cacheKeyFormatter = 
DateTimeFormatter.ofPattern("MMddHHmmss");
 
 public static String cacheName(Path schemaFile) throws IOException {
   long lastModified = Files.getLastModifiedTime(schemaFile).toMillis();
   return String.format(Locale.ROOT, "%s:%s",
-schemaFile.toString(), 
cacheKeyFormatter.print(lastModified));
+schemaFile.toString(), 
Instant.ofEpochMilli(lastModified).atZone(ZoneId.systemDefault()).format(cacheKeyFormatter));
--- End diff --

I don't think we should be using the system default time zone.  Is that 
what it was doing?  Any ramifications you can think of by simply switching to 
UTC here?
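
For illustration, here is what the UTC variant being discussed could look like (a sketch, not the PR code; it assumes a full `yyyyMMddHHmmss` pattern and invented names). Pinning the formatter to a fixed zone makes the cache key independent of the host's default time zone.

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

// Sketch: a formatter pinned to UTC via withZone(), so an Instant can be
// formatted directly without consulting the system default time zone.
public class CacheKeyDemo {
    static final DateTimeFormatter FMT =
        DateTimeFormatter.ofPattern("yyyyMMddHHmmss", Locale.ROOT).withZone(ZoneOffset.UTC);

    static String cacheKey(String schemaFile, long lastModifiedMillis) {
        return String.format(Locale.ROOT, "%s:%s",
            schemaFile, FMT.format(Instant.ofEpochMilli(lastModifiedMillis)));
    }
}
```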


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12611) Add version information to thread dump

2018-08-01 Thread Dawid Weiss (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565893#comment-16565893
 ] 

Dawid Weiss commented on SOLR-12611:


That wait() isn't right -- the contract for wait() allows spurious wakeups, so 
it should be 
{code}
while (true) { obj.wait(); }
{code}
with an interrupted exception handler either inside breaking out of the loop or 
outside, falling-through.
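
A minimal illustration of that contract (illustrative code, not the patch itself): wait() sits inside a loop because the JVM permits spurious wakeups, and the InterruptedException handler outside the loop is the only intended way out.

```java
// Illustrative only -- class and method names are invented.
public class ParkedThread {
    public static Thread park(Object monitor) {
        Thread t = new Thread(() -> {
            synchronized (monitor) {
                try {
                    while (true) {
                        monitor.wait();   // spurious wakeup -> loop and wait again
                    }
                } catch (InterruptedException e) {
                    // fall through: interrupt is the shutdown signal
                }
            }
        });
        t.setDaemon(true);
        t.start();
        return t;
    }
}
```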

> Add version information to thread dump
> --
>
> Key: SOLR-12611
> URL: https://issues.apache.org/jira/browse/SOLR-12611
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.4
>Reporter: Shawn Heisey
>Priority: Trivial
> Attachments: SOLR-12611.patch
>
>
> Thread dumps contain stacktrace info.  Without knowing the Solr version, it 
> can be difficult to compare stacktraces to source code.
> If exact version information is available in the thread dump, it will be 
> possible to look at source code to understand stacktrace information.  If 
> *full* version information is present, then it would even be possible to 
> learn whether the user is running an official binary build or if they have 
> built Solr themselves.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12611) Add version information to thread dump

2018-08-01 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565890#comment-16565890
 ] 

Shawn Heisey commented on SOLR-12611:
-

Attached patch against branch_7x that creates a thread on startup.  The thread 
is given a name that includes full version information.  The thread is set up 
to wait on synchronization, which I figure is probably the most efficient way 
for a thread to do absolutely nothing.  If there's a better option, let me 
know.  At Solr shutdown, the thread is sent an interrupt, which breaks it out 
of its synchronization wait and allows the thread to end.

> Add version information to thread dump
> --
>
> Key: SOLR-12611
> URL: https://issues.apache.org/jira/browse/SOLR-12611
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.4
>Reporter: Shawn Heisey
>Priority: Trivial
> Attachments: SOLR-12611.patch
>
>
> Thread dumps contain stacktrace info.  Without knowing the Solr version, it 
> can be difficult to compare stacktraces to source code.
> If exact version information is available in the thread dump, it will be 
> possible to look at source code to understand stacktrace information.  If 
> *full* version information is present, then it would even be possible to 
> learn whether the user is running an official binary build or if they have 
> built Solr themselves.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12611) Add version information to thread dump

2018-08-01 Thread Shawn Heisey (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-12611:

Attachment: SOLR-12611.patch

> Add version information to thread dump
> --
>
> Key: SOLR-12611
> URL: https://issues.apache.org/jira/browse/SOLR-12611
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.4
>Reporter: Shawn Heisey
>Priority: Trivial
> Attachments: SOLR-12611.patch
>
>
> Thread dumps contain stacktrace info.  Without knowing the Solr version, it 
> can be difficult to compare stacktraces to source code.
> If exact version information is available in the thread dump, it will be 
> possible to look at source code to understand stacktrace information.  If 
> *full* version information is present, then it would even be possible to 
> learn whether the user is running an official binary build or if they have 
> built Solr themselves.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12611) Add version information to thread dump

2018-08-01 Thread Shawn Heisey (JIRA)
Shawn Heisey created SOLR-12611:
---

 Summary: Add version information to thread dump
 Key: SOLR-12611
 URL: https://issues.apache.org/jira/browse/SOLR-12611
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 7.4
Reporter: Shawn Heisey


Thread dumps contain stacktrace info.  Without knowing the Solr version, it can 
be difficult to compare stacktraces to source code.

If exact version information is available in the thread dump, it will be 
possible to look at source code to understand stacktrace information.  If 
*full* version information is present, then it would even be possible to learn 
whether the user is running an official binary build or if they have built Solr 
themselves.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12344) SolrSlf4jReporter doesn't set MDC context

2018-08-01 Thread Andrzej Bialecki (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565877#comment-16565877
 ] 

Andrzej Bialecki  commented on SOLR-12344:
--

This patch adds support for MDC logging to all reporters that need it - 
subclasses of {{SolrMetricReporter}} can obtain the current values of MDC 
context (including properties such as core, node, shard, replica, etc) in their 
{{init()}} method.
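
A stand-in sketch of that capture-in-init idea, using a plain ThreadLocal map in place of SLF4J's MDC so the example is self-contained (all class and method names here are hypothetical, not the attached patch):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: capture the logging context (core, node, shard, replica, ...) once
// in init(), so the reporter can restore it later on its own thread.
class MdcCapturingReporter {
    // plays the role of SLF4J's MDC in this sketch
    static final ThreadLocal<Map<String, String>> CONTEXT =
        ThreadLocal.withInitial(HashMap::new);

    private Map<String, String> capturedContext;

    // analogous to capturing MDC values in SolrMetricReporter.init()
    void init() {
        capturedContext = new HashMap<>(CONTEXT.get());
    }

    // later, before logging a report: restore the snapshot
    Map<String, String> restoreForReport() {
        CONTEXT.set(new HashMap<>(capturedContext));
        return CONTEXT.get();
    }
}
```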

> SolrSlf4jReporter doesn't set MDC context
> -
>
> Key: SOLR-12344
> URL: https://issues.apache.org/jira/browse/SOLR-12344
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Varun Thacker
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-12344.patch
>
>
> I setup a slf4j reporter like this on master
> solr.xml
> {code:java}
> 
>class="org.apache.solr.metrics.reporters.SolrSlf4jReporter">
> 1
> UPDATE./update.requestTimes
> update_logger
>   
> {code}
> log4j2.xml
> {code:java}
> 
> 
> 
>   
> 
>   
> 
>   %-4r [%t] %-5p %c %x [%X{collection} %X{shard} %X{replica} 
> %X{core}] %c; %m%n
> 
>   
> 
>  name="RollingFile"
> fileName="${sys:solr.log.dir}/solr.log"
> filePattern="${sys:solr.log.dir}/solr.log.%i" >
>   
> 
>   %-5p - %d{-MM-dd HH:mm:ss.SSS}; [%X{collection} %X{shard} 
> %X{replica} %X{core}] %c; %m%n
> 
>   
>   
> 
> 
>   
>   
> 
>  name="RollingMetricFile"
> fileName="${sys:solr.log.dir}/solr_metric.log"
> filePattern="${sys:solr.log.dir}/solr_metric.log.%i" >
>   
> 
>   %-5p - %d{-MM-dd HH:mm:ss.SSS}; [%X{collection} %X{shard} 
> %X{replica} %X{core}] %c; %m%n
> 
>   
>   
> 
> 
>   
>   
> 
>   
>   
> 
> 
> 
> 
>   
> 
> 
>   
>   
> 
>   
> 
> {code}
> The output I get from the solr_metric.log file is like this
> {code:java}
> INFO  - 2018-05-11 15:31:16.009; [   ] update_logger; type=TIMER, 
> name=UPDATE./update.requestTimes, count=0, min=0.0, max=0.0, mean=0.0, 
> stddev=0.0, median=0.0, p75=0.0, p95=0.0, p98=0.0, p99=0.0, p999=0.0, 
> mean_rate=0.0, m1=0.0, m5=0.0, m15=0.0, rate_unit=events/second, 
> duration_unit=milliseconds
> INFO  - 2018-05-11 15:31:17.010; [   ] update_logger; type=TIMER, 
> name=UPDATE./update.requestTimes, count=0, min=0.0, max=0.0, mean=0.0, 
> stddev=0.0, median=0.0, p75=0.0, p95=0.0, p98=0.0, p99=0.0, p999=0.0, 
> mean_rate=0.0, m1=0.0, m5=0.0, m15=0.0, rate_unit=events/second, 
> duration_unit=milliseconds
> INFO  - 2018-05-11 15:31:18.010; [   ] update_logger; type=TIMER, 
> name=UPDATE./update.requestTimes, count=0, min=0.0, max=0.0, mean=0.0, 
> stddev=0.0, median=0.0, p75=0.0, p95=0.0, p98=0.0, p99=0.0, p999=0.0, 
> mean_rate=0.0, m1=0.0, m5=0.0, m15=0.0, rate_unit=events/second, 
> duration_unit=milliseconds{code}
> On a JVM which hosts multiple cores, it will be impossible to tell where a 
> given metric is coming from if the MDC context is not set



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8312) Leverage impacts for SynonymQuery

2018-08-01 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565875#comment-16565875
 ] 

ASF subversion and git services commented on LUCENE-8312:
-

Commit 64573c142c851741da50f8858c9d630557a151d0 in lucene-solr's branch 
refs/heads/master from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=64573c1 ]

LUCENE-8312: Fixed performance regression with non-scoring term queries.


> Leverage impacts for SynonymQuery
> -
>
> Key: LUCENE-8312
> URL: https://issues.apache.org/jira/browse/LUCENE-8312
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (8.0)
>
> Attachments: LUCENE-8312.patch, LUCENE-8312.patch
>
>
> Now that we expose raw impacts, we could leverage them for synonym queries.
> It would be a matter of summing up term frequencies for each unique norm 
> value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Combine SolrDocumentFetcher and RetrieveFieldsOptimizer?

2018-08-01 Thread David Smiley
It makes sense to me!  RetrieveFieldsOptimizer feels like more of an
internal algorithm class that should not have been exposed outside of
SolrDocumentFetcher.  I said similarly here:
https://issues.apache.org/jira/browse/SOLR-8344?focusedCommentId=16165102=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16165102
Although Dat seemed to disagree in the next comment. Well, it's a bit
apples and oranges... I thought perhaps it could be some static methods, and
Dat disagreed with that... but even if we have the class, it needn't be so
public (i.e. code using SolrDocumentFetcher needn't know it exists).

On Tue, Jul 31, 2018 at 1:08 PM Erick Erickson 
wrote:

> We have SolrDocumentFetcher and RetrieveFieldsOptimizer. The
> relationship between the two is unclear at first glance. Using
> SolrDocumentFetcher by itself is (or can be) inefficient.
>
> WDYT about combining the two? Is there a good reason you would want to
> use SolrDocumentFetcher _instead_ of RetrieveFieldsOptimizer?
>
> Ideally I'd want to be able to write code like:
>
> solrDocumentFetcher.fillDocValuesMostEfficiently
>
> That created an optimizer and "did the right thing".
>
> Is this desirable/possible? Suggestions? Should I raise an improvement
> JIRA?
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
> --
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Updated] (SOLR-12344) SolrSlf4jReporter doesn't set MDC context

2018-08-01 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-12344:
-
Attachment: SOLR-12344.patch

> SolrSlf4jReporter doesn't set MDC context
> -
>
> Key: SOLR-12344
> URL: https://issues.apache.org/jira/browse/SOLR-12344
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Varun Thacker
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-12344.patch
>
>
> I setup a slf4j reporter like this on master
> solr.xml
> {code:java}
> 
>class="org.apache.solr.metrics.reporters.SolrSlf4jReporter">
> 1
> UPDATE./update.requestTimes
> update_logger
>   
> {code}
> log4j2.xml
> {code:java}
> 
> 
> 
>   
> 
>   
> 
>   %-4r [%t] %-5p %c %x [%X{collection} %X{shard} %X{replica} 
> %X{core}] %c; %m%n
> 
>   
> 
>  name="RollingFile"
> fileName="${sys:solr.log.dir}/solr.log"
> filePattern="${sys:solr.log.dir}/solr.log.%i" >
>   
> 
>   %-5p - %d{-MM-dd HH:mm:ss.SSS}; [%X{collection} %X{shard} 
> %X{replica} %X{core}] %c; %m%n
> 
>   
>   
> 
> 
>   
>   
> 
>  name="RollingMetricFile"
> fileName="${sys:solr.log.dir}/solr_metric.log"
> filePattern="${sys:solr.log.dir}/solr_metric.log.%i" >
>   
> 
>   %-5p - %d{-MM-dd HH:mm:ss.SSS}; [%X{collection} %X{shard} 
> %X{replica} %X{core}] %c; %m%n
> 
>   
>   
> 
> 
>   
>   
> 
>   
>   
> 
> 
> 
> 
>   
> 
> 
>   
>   
> 
>   
> 
> {code}
> The output I get from the solr_metric.log file is like this
> {code:java}
> INFO  - 2018-05-11 15:31:16.009; [   ] update_logger; type=TIMER, 
> name=UPDATE./update.requestTimes, count=0, min=0.0, max=0.0, mean=0.0, 
> stddev=0.0, median=0.0, p75=0.0, p95=0.0, p98=0.0, p99=0.0, p999=0.0, 
> mean_rate=0.0, m1=0.0, m5=0.0, m15=0.0, rate_unit=events/second, 
> duration_unit=milliseconds
> INFO  - 2018-05-11 15:31:17.010; [   ] update_logger; type=TIMER, 
> name=UPDATE./update.requestTimes, count=0, min=0.0, max=0.0, mean=0.0, 
> stddev=0.0, median=0.0, p75=0.0, p95=0.0, p98=0.0, p99=0.0, p999=0.0, 
> mean_rate=0.0, m1=0.0, m5=0.0, m15=0.0, rate_unit=events/second, 
> duration_unit=milliseconds
> INFO  - 2018-05-11 15:31:18.010; [   ] update_logger; type=TIMER, 
> name=UPDATE./update.requestTimes, count=0, min=0.0, max=0.0, mean=0.0, 
> stddev=0.0, median=0.0, p75=0.0, p95=0.0, p98=0.0, p99=0.0, p999=0.0, 
> mean_rate=0.0, m1=0.0, m5=0.0, m15=0.0, rate_unit=events/second, 
> duration_unit=milliseconds{code}
> On a JVM hosting multiple Solr cores, it is impossible to tell which core a 
> metric line came from if the MDC context is not set
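What the MDC buys here can be illustrated with a tiny stdlib-only analogue. This is a sketch of the mechanism, not SLF4J's actual org.slf4j.MDC class: per-thread key/value pairs that a log layout interpolates via %X{key}:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a mapped diagnostic context: per-thread key/value pairs
// that a pattern layout like "[%X{core}] %m" can pull into each log line.
public class MiniMdc {
    private static final ThreadLocal<Map<String, String>> CTX =
        ThreadLocal.withInitial(HashMap::new);

    static void put(String key, String val) { CTX.get().put(key, val); }
    static String get(String key) { return CTX.get().getOrDefault(key, ""); }
    static void remove(String key) { CTX.get().remove(key); }

    // Mimics a layout of the form "[%X{core}] %m"
    static String format(String msg) {
        return "[" + get("core") + "] " + msg;
    }

    public static void main(String[] args) {
        put("core", "collection1_shard1_replica_n1");
        System.out.println(format("type=TIMER, name=UPDATE./update.requestTimes"));
        remove("core");
    }
}
```

With the context set, every reporter line identifies its core; without it, the bracketed fields stay empty as in the log excerpt above.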






[jira] [Updated] (SOLR-12105) Support Inplace updates in cdcr

2018-08-01 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-12105:

Description: Inplace updates are not forwarded to target clusters as of 
today. It would be a nice addition.  (was: Inplace updates are not forwarded 
to target clusters as of today. Add the support.)

> Support Inplace updates in cdcr
> ---
>
> Key: SOLR-12105
> URL: https://issues.apache.org/jira/browse/SOLR-12105
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Reporter: Amrit Sarkar
>Priority: Minor
>
> Inplace updates are not forwarded to target clusters as of today. It would 
> be a nice addition.






[jira] [Updated] (SOLR-11718) Deprecate CDCR Buffer APIs and set Buffer to "false"

2018-08-01 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11718:

Summary: Deprecate CDCR Buffer APIs and set Buffer to "false"  (was: 
Deprecate CDCR Buffer APIs)

> Deprecate CDCR Buffer APIs and set Buffer to "false"
> 
>
> Key: SOLR-11718
> URL: https://issues.apache.org/jira/browse/SOLR-11718
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-11718.patch, SOLR-11718.patch
>
>
> Kindly see the discussion on SOLR-11652.
> Today, if we look at the current CDCR documentation page, buffering is 
> "disabled" by default on both source and target. We don't see any purpose 
> served by CDCR buffering, and it is quite an overhead: when enabled it can 
> take a lot of heap space (tlog pointers) and retain tlogs on disk forever. 
> Also, even if we disable the buffer via the API on the source, if it was 
> enabled at startup, tlogs are never purged on the leader node of the 
> source's shards; see SOLR-11652.






[jira] [Commented] (LUCENE-8435) Add new LatLonShapePolygonQuery

2018-08-01 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16565828#comment-16565828
 ] 

ASF subversion and git services commented on LUCENE-8435:
-

Commit 18c2300fd61c369b87ce01b6201b95a53f89e115 in lucene-solr's branch 
refs/heads/master from [~nknize]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=18c2300 ]

LUCENE-8435: Add new LatLonShapePolygonQuery for querying indexed LatLonShape 
fields by arbitrary polygons


> Add new LatLonShapePolygonQuery 
> 
>
> Key: LUCENE-8435
> URL: https://issues.apache.org/jira/browse/LUCENE-8435
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Nicholas Knize
>Assignee: Nicholas Knize
>Priority: Major
> Attachments: LUCENE-8435.patch, LUCENE-8435.patch, LUCENE-8435.patch
>
>
> This feature will provide the ability to query indexed {{LatLonShape}} fields 
> with an arbitrary polygon. Initial implementation will support {{INTERSECT}} 
> queries only and future enhancements will add other relations (e.g., 
> {{CONTAINS}}, {{WITHIN}})






[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10) - Build # 2464 - Still Unstable!

2018-08-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2464/
Java: 64bit/jdk-10 -XX:+UseCompressedOops -XX:+UseParallelGC

4 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.index.TestIndexTooManyDocs

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([484C7B295405B9F3]:0)


FAILED:  junit.framework.TestSuite.org.apache.lucene.index.TestIndexTooManyDocs

Error Message:
Captured an uncaught exception in thread: Thread[id=2606, name=Thread-2416, 
state=RUNNABLE, group=TGRP-TestIndexTooManyDocs]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=2606, name=Thread-2416, state=RUNNABLE, 
group=TGRP-TestIndexTooManyDocs]
Caused by: java.lang.AssertionError: only modifications from the current 
flushing queue are permitted while doing a full flush
at __randomizedtesting.SeedInfo.seed([484C7B295405B9F3]:0)
at 
org.apache.lucene.index.DocumentsWriter.assertTicketQueueModification(DocumentsWriter.java:683)
at 
org.apache.lucene.index.DocumentsWriter.applyAllDeletes(DocumentsWriter.java:187)
at 
org.apache.lucene.index.DocumentsWriter.postUpdate(DocumentsWriter.java:411)
at 
org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:514)
at 
org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1601)
at 
org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1593)
at 
org.apache.lucene.index.TestIndexTooManyDocs.lambda$testIndexTooManyDocs$0(TestIndexTooManyDocs.java:70)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.lucene.index.TestIndexTooManyDocs.testIndexTooManyDocs

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([484C7B295405B9F3]:0)


FAILED:  org.apache.solr.cloud.TestWithCollection.testMoveReplicaWithCollection

Error Message:
Expected moving a replica of 'withCollection': 
testMoveReplicaWithCollection_abc to fail

Stack Trace:
java.lang.AssertionError: Expected moving a replica of 'withCollection': 
testMoveReplicaWithCollection_abc to fail
at 
__randomizedtesting.SeedInfo.seed([4BBF67C0EA44BC07:4A85E6379367571]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.TestWithCollection.testMoveReplicaWithCollection(TestWithCollection.java:389)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 

[jira] [Updated] (SOLR-12524) CdcrBidirectionalTest.testBiDir() regularly fails

2018-08-01 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-12524:

Attachment: SOLR-12524.patch

> CdcrBidirectionalTest.testBiDir() regularly fails
> -
>
> Key: SOLR-12524
> URL: https://issues.apache.org/jira/browse/SOLR-12524
> Project: Solr
>  Issue Type: Test
>  Components: CDCR, Tests
>Reporter: Christine Poerschke
>Priority: Major
> Attachments: SOLR-12524.patch, SOLR-12524.patch, SOLR-12524.patch
>
>
> e.g. from 
> https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4701/consoleText
> {code}
> [junit4] ERROR   20.4s J0 | CdcrBidirectionalTest.testBiDir <<<
> [junit4]> Throwable #1: 
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=28371, 
> name=cdcr-replicator-11775-thread-1, state=RUNNABLE, 
> group=TGRP-CdcrBidirectionalTest]
> [junit4]> at 
> __randomizedtesting.SeedInfo.seed([CA5584AC7009CD50:8F8E744E68278112]:0)
> [junit4]> Caused by: java.lang.AssertionError
> [junit4]> at 
> __randomizedtesting.SeedInfo.seed([CA5584AC7009CD50]:0)
> [junit4]> at 
> org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.forwardSeek(CdcrUpdateLog.java:611)
> [junit4]> at 
> org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:125)
> [junit4]> at 
> org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
> [junit4]> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
> [junit4]> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> [junit4]> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> [junit4]> at java.lang.Thread.run(Thread.java:748)
> {code}






[jira] [Commented] (LUCENE-8435) Add new LatLonShapePolygonQuery

2018-08-01 Thread Nicholas Knize (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16565800#comment-16565800
 ] 

Nicholas Knize commented on LUCENE-8435:


Awesome. Good idea about the geojson testing. We can add it once LUCENE-8440 
lands. Then it will be super convenient to have full geojson support for all 
shapes.

> Add new LatLonShapePolygonQuery 
> 
>
> Key: LUCENE-8435
> URL: https://issues.apache.org/jira/browse/LUCENE-8435
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Nicholas Knize
>Assignee: Nicholas Knize
>Priority: Major
> Attachments: LUCENE-8435.patch, LUCENE-8435.patch, LUCENE-8435.patch
>
>
> This feature will provide the ability to query indexed {{LatLonShape}} fields 
> with an arbitrary polygon. Initial implementation will support {{INTERSECT}} 
> queries only and future enhancements will add other relations (e.g., 
> {{CONTAINS}}, {{WITHIN}})






[jira] [Commented] (LUCENE-8435) Add new LatLonShapePolygonQuery

2018-08-01 Thread Robert Muir (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16565762#comment-16565762
 ] 

Robert Muir commented on LUCENE-8435:
-

Thank you, +1 to commit.

As a followup we may want to test the to-geo-json and from-geo-json better. Now 
that we have to-geo-json, maybe we can test round-tripping and the like, but at 
least it would be good to know we output valid stuff. That can just be a 
followup JIRA issue.

> Add new LatLonShapePolygonQuery 
> 
>
> Key: LUCENE-8435
> URL: https://issues.apache.org/jira/browse/LUCENE-8435
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Nicholas Knize
>Assignee: Nicholas Knize
>Priority: Major
> Attachments: LUCENE-8435.patch, LUCENE-8435.patch, LUCENE-8435.patch
>
>
> This feature will provide the ability to query indexed {{LatLonShape}} fields 
> with an arbitrary polygon. Initial implementation will support {{INTERSECT}} 
> queries only and future enhancements will add other relations (e.g., 
> {{CONTAINS}}, {{WITHIN}})






[jira] [Commented] (LUCENE-8440) Add support for indexing and searching Line and Point shapes using LatLonShape encoding

2018-08-01 Thread Nicholas Knize (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16565722#comment-16565722
 ] 

Nicholas Knize commented on LUCENE-8440:


Thanks for the review [~jpountz]. Great feedback. I went ahead and incorporated 
the changes in the latest patch, updated the javadocs, and cleaned up some of 
the comments.

> Add support for indexing and searching Line and Point shapes using 
> LatLonShape encoding
> ---
>
> Key: LUCENE-8440
> URL: https://issues.apache.org/jira/browse/LUCENE-8440
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Nicholas Knize
>Priority: Major
> Attachments: LUCENE-8440.patch, LUCENE-8440.patch
>
>
> This feature adds support to {{LatLonShape}} for indexing {{Line}} and 
> {{latitude, longitude}} Point types using the 6 dimension Triangle encoding 
> in {{LatLonShape}}. Indexed points and lines will be searchable using 
> {{LatLonShapeBoundingBoxQuery}} and the new {{LatLonShapePolygonQuery}} in 
> LUCENE-8435.






[jira] [Updated] (LUCENE-8440) Add support for indexing and searching Line and Point shapes using LatLonShape encoding

2018-08-01 Thread Nicholas Knize (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize updated LUCENE-8440:
---
Attachment: LUCENE-8440.patch

> Add support for indexing and searching Line and Point shapes using 
> LatLonShape encoding
> ---
>
> Key: LUCENE-8440
> URL: https://issues.apache.org/jira/browse/LUCENE-8440
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Nicholas Knize
>Priority: Major
> Attachments: LUCENE-8440.patch, LUCENE-8440.patch
>
>
> This feature adds support to {{LatLonShape}} for indexing {{Line}} and 
> {{latitude, longitude}} Point types using the 6 dimension Triangle encoding 
> in {{LatLonShape}}. Indexed points and lines will be searchable using 
> {{LatLonShapeBoundingBoxQuery}} and the new {{LatLonShapePolygonQuery}} in 
> LUCENE-8435.






[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module

2018-08-01 Thread Alexandre Rafalovitch (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16565717#comment-16565717
 ] 

Alexandre Rafalovitch commented on LUCENE-2562:
---

I just meant that Lucene does not have a server+HTML+CSS, so any UI addition 
will be a non-trivial discussion. I did not have any deeper insight than that.

> Make Luke a Lucene/Solr Module
> --
>
> Key: LUCENE-2562
> URL: https://issues.apache.org/jira/browse/LUCENE-2562
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>Priority: Major
>  Labels: gsoc2014
> Attachments: LUCENE-2562-Ivy.patch, LUCENE-2562-Ivy.patch, 
> LUCENE-2562-Ivy.patch, LUCENE-2562-ivy.patch, LUCENE-2562.patch, 
> LUCENE-2562.patch, Luke-ALE-1.png, Luke-ALE-2.png, Luke-ALE-3.png, 
> Luke-ALE-4.png, Luke-ALE-5.png, luke-javafx1.png, luke-javafx2.png, 
> luke-javafx3.png, luke1.jpg, luke2.jpg, luke3.jpg, lukeALE-documents.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> see
> "RE: Luke - in need of maintainer": 
> http://markmail.org/message/m4gsto7giltvrpuf
> "Web-based Luke": http://markmail.org/message/4xwps7p7ifltme5q
> I think it would be great if there was a version of Luke that always worked 
> with trunk - and it would also be great if it was easier to match Luke jars 
> with Lucene versions.
> While I'd like to get GWT Luke into the mix as well, I think the easiest 
> starting point is to straight port Luke to another UI toolkit before 
> abstracting out DTO objects that both GWT Luke and Pivot Luke could share.
> I've started slowly converting Luke's use of thinlet to Apache Pivot. I 
> haven't/don't have a lot of time for this at the moment, but I've plugged 
> away here and there over the past week or two. There is still a *lot* to do.






[jira] [Commented] (SOLR-12586) Replace use of Joda Time with Java 8 java.time

2018-08-01 Thread Bar Rotstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16565713#comment-16565713
 ] 

Bar Rotstein commented on SOLR-12586:
-

I just filed a new pull request.
I really hope I managed to pin down all the config files that needed change.

> Replace use of Joda Time with Java 8 java.time
> --
>
> Key: SOLR-12586
> URL: https://issues.apache.org/jira/browse/SOLR-12586
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We're using Joda-Time, an external dependency, in a couple of places.  Now 
> that we are on Java 8, we ought to drop the dependency and use the 
> equivalent java.time package instead.  As I understand it, Joda-Time was 
> more or less incorporated into Java as java.time, with some fairly minor 
> differences.
> Usages:
>  * ConfigSetService
>  * ParseDateFieldUpdateProcessorFactory
> And some related tests.
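For the kind of pattern-based parsing done in places like ParseDateFieldUpdateProcessorFactory, the java.time equivalent of the Joda idiom looks roughly like this. The pattern, input string, and UTC assumption are illustrative, not taken from the Solr code:

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public class JavaTimeParse {
    public static void main(String[] args) {
        // Joda-Time: DateTimeFormat.forPattern("yyyy-MM-dd HH:mm:ss")
        // java.time equivalent:
        DateTimeFormatter fmt =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss", Locale.ROOT);
        LocalDateTime ldt = LocalDateTime.parse("2018-08-01 17:35:11", fmt);
        // Assume the input is UTC when no zone is present in the pattern
        Instant instant = ldt.toInstant(ZoneOffset.UTC);
        System.out.println(instant); // 2018-08-01T17:35:11Z
    }
}
```

The main migration difference is that java.time splits Joda's single DateTime into LocalDateTime/ZonedDateTime/Instant, so the zone has to be applied explicitly.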






[jira] [Resolved] (SOLR-12208) Don't use "INDEX.sizeInBytes" as a tag name in policy calculations

2018-08-01 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  resolved SOLR-12208.
--
Resolution: Fixed

This has been fixed a while ago.

> Don't use "INDEX.sizeInBytes" as a tag name in policy calculations
> --
>
> Key: SOLR-12208
> URL: https://issues.apache.org/jira/browse/SOLR-12208
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12208.patch, SOLR-12208.patch, SOLR-12208.patch
>
>
> CORE_IDX and FREEDISK ConditionType reuse this metric name, but they assume 
> the values are expressed in gigabytes. This alone is confusing considering 
> the name of the metric.
> Additionally, it causes conflicts in the simulation framework that would 
> require substantial changes to resolve (ReplicaInfo-s in 
> SimClusterStateProvider keep metric values in their variables, expressed in 
> original units - but then the Policy assumes it can put the values expressed 
> in GB under the same key... hilarity ensues).
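The unit mismatch described above boils down to a conversion like the following sketch; bytesToGb is a hypothetical helper (not Solr's code), and it assumes 1 GB = 2^30 bytes:

```java
public class UnitMismatch {
    // Raw metric: bytes. Policy tag value: gigabytes. Keeping both under the
    // same key ("INDEX.sizeInBytes") is the conflict described above.
    static double bytesToGb(long sizeInBytes) {
        return sizeInBytes / (double) (1L << 30);
    }

    public static void main(String[] args) {
        long sizeInBytes = 5_368_709_120L; // 5 * 2^30 bytes
        System.out.println(bytesToGb(sizeInBytes)); // 5.0
    }
}
```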






[jira] [Updated] (SOLR-12594) MetricsHistoryHandler.getOverseerLeader fails when hostname contains hyphen

2018-08-01 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-12594:
-
Fix Version/s: 7.5

> MetricsHistoryHandler.getOverseerLeader fails when hostname contains hyphen
> ---
>
> Key: SOLR-12594
> URL: https://issues.apache.org/jira/browse/SOLR-12594
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 7.5
>
>
> as reported on the user list...
> {quote}
> We encounter a lot of log warning entries from the MetricsHistoryHandler 
> saying
> o.a.s.h.a.MetricsHistoryHandler Unknown format of leader id, skipping:
> 244550997187166214-server1-b.myhost:8983_solr-n_94
> I don't even know what this _MetricsHistoryHandler_ does, but at least 
> there's a warning.
> Looking at the code you can see that it has to fail if the hostname of the 
> node contains a hyphen:
> {quote}
> {code}
> String[] ids = oid.split("-");
> if (ids.length != 3) { // unknown format
>   log.warn("Unknown format of leader id, skipping: " + oid);
>   return null;
> }
> {code}
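A hyphen-tolerant parse would anchor on the first and last '-' instead of splitting on every one. The sketch below uses a hypothetical helper (parseNodeName is my name, not Solr's) and assumes the id layout <sessionId>-<nodeName>-n_<sequence> shown in the warning above:

```java
public class LeaderId {
    static String parseNodeName(String oid) {
        int first = oid.indexOf('-');
        int last = oid.lastIndexOf('-');
        if (first < 0 || last <= first) {
            return null; // unknown format
        }
        // Everything between the first and the last '-' is the node name,
        // even when the hostname itself contains hyphens.
        return oid.substring(first + 1, last);
    }

    public static void main(String[] args) {
        System.out.println(
            parseNodeName("244550997187166214-server1-b.myhost:8983_solr-n_94"));
        // server1-b.myhost:8983_solr
    }
}
```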






[jira] [Issue Comment Deleted] (SOLR-12586) Replace use of Joda Time with Java 8 java.time

2018-08-01 Thread mosh (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mosh updated SOLR-12586:

Comment: was deleted

(was: Hey,
just opened a pull request,
fingers crossed that I changed all the needed config files.
Since I'm quite new to Solr I hope I did not make too much of a mess.)

> Replace use of Joda Time with Java 8 java.time
> --
>
> Key: SOLR-12586
> URL: https://issues.apache.org/jira/browse/SOLR-12586
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We're using Joda-Time, an external dependency, in a couple of places.  Now 
> that we are on Java 8, we ought to drop the dependency and use the 
> equivalent java.time package instead.  As I understand it, Joda-Time was 
> more or less incorporated into Java as java.time, with some fairly minor 
> differences.
> Usages:
>  * ConfigSetService
>  * ParseDateFieldUpdateProcessorFactory
> And some related tests.






[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module

2018-08-01 Thread Dmitry Kan (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16565703#comment-16565703
 ] 

Dmitry Kan commented on LUCENE-2562:


[~arafalov] thanks for your input! Can you please elaborate on 'If Luke is 
supposed to be part of Lucene-only distribution, I guess the discussion is a 
bit more complicated' ?

> Make Luke a Lucene/Solr Module
> --
>
> Key: LUCENE-2562
> URL: https://issues.apache.org/jira/browse/LUCENE-2562
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>Priority: Major
>  Labels: gsoc2014
> Attachments: LUCENE-2562-Ivy.patch, LUCENE-2562-Ivy.patch, 
> LUCENE-2562-Ivy.patch, LUCENE-2562-ivy.patch, LUCENE-2562.patch, 
> LUCENE-2562.patch, Luke-ALE-1.png, Luke-ALE-2.png, Luke-ALE-3.png, 
> Luke-ALE-4.png, Luke-ALE-5.png, luke-javafx1.png, luke-javafx2.png, 
> luke-javafx3.png, luke1.jpg, luke2.jpg, luke3.jpg, lukeALE-documents.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> see
> "RE: Luke - in need of maintainer": 
> http://markmail.org/message/m4gsto7giltvrpuf
> "Web-based Luke": http://markmail.org/message/4xwps7p7ifltme5q
> I think it would be great if there was a version of Luke that always worked 
> with trunk - and it would also be great if it was easier to match Luke jars 
> with Lucene versions.
> While I'd like to get GWT Luke into the mix as well, I think the easiest 
> starting point is to straight port Luke to another UI toolkit before 
> abstracting out DTO objects that both GWT Luke and Pivot Luke could share.
> I've started slowly converting Luke's use of thinlet to Apache Pivot. I 
> haven't/don't have a lot of time for this at the moment, but I've plugged 
> away here and there over the past week or two. There is still a *lot* to do.






[jira] [Commented] (SOLR-12586) Replace use of Joda Time with Java 8 java.time

2018-08-01 Thread mosh (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16565705#comment-16565705
 ] 

mosh commented on SOLR-12586:
-

Hey,
just opened a pull request,
fingers crossed that I changed all the needed config files.
Since I'm quite new to Solr I hope I did not make too much of a mess.

> Replace use of Joda Time with Java 8 java.time
> --
>
> Key: SOLR-12586
> URL: https://issues.apache.org/jira/browse/SOLR-12586
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We're using Joda-Time, an external dependency, in a couple of places.  Now 
> that we are on Java 8, we ought to drop the dependency and use the 
> equivalent java.time package instead.  As I understand it, Joda-Time was 
> more or less incorporated into Java as java.time, with some fairly minor 
> differences.
> Usages:
>  * ConfigSetService
>  * ParseDateFieldUpdateProcessorFactory
> And some related tests.






[jira] [Commented] (LUCENE-8433) Add FutureArrays.equals(Object[] a, int aToIndex, Object[] b, int bFromIndex, int bToIndex)

2018-08-01 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16565699#comment-16565699
 ] 

Adrien Grand commented on LUCENE-8433:
--

Maintaining this forked Arrays class has some cost, yet this particular call 
site is not performance-sensitive, nor would FutureArrays be safer (that I know 
of) or much easier to read, so I'm not convinced we should do it.

> Add FutureArrays.equals(Object[] a, int aToIndex, Object[] b, int bFromIndex, 
> int bToIndex)
> ---
>
> Key: LUCENE-8433
> URL: https://issues.apache.org/jira/browse/LUCENE-8433
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael Braun
>Priority: Trivial
>
> Noticed code like the following in TopFieldCollector:
> {code}
> if (fields1.length > fields2.length) {
>   return false;
> }
> return Arrays.asList(fields1).equals(Arrays.asList(fields2).subList(0, 
> fields1.length));
> {code}
> This can be simplified and made more efficient by using 
> Arrays.equals(Object[] a, int aFromIndex, int aToIndex, Object[] b, int 
> bFromIndex, int bToIndex), which is only present in Java 9+. (Though it does 
> not take advantage of any intrinsics the way the primitive-array overloads 
> do, since it compares elements with object equality rather than raw memory 
> comparison.)  This could be added as part of FutureArrays.java and would 
> serve to simplify code.
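Side by side, the simplification looks like this. The prefixEquals method names are mine; the range overload is java.util.Arrays.equals(Object[], int, int, Object[], int, int) from Java 9+:

```java
import java.util.Arrays;

public class RangeEquals {
    // Java 8 style (as in TopFieldCollector): wrap both arrays in lists and
    // compare against a sublist prefix.
    static boolean prefixEqualsJava8(Object[] fields1, Object[] fields2) {
        if (fields1.length > fields2.length) {
            return false;
        }
        return Arrays.asList(fields1)
            .equals(Arrays.asList(fields2).subList(0, fields1.length));
    }

    // Java 9+ range-based equivalent: no list wrappers allocated.
    static boolean prefixEqualsJava9(Object[] fields1, Object[] fields2) {
        return fields1.length <= fields2.length
            && Arrays.equals(fields1, 0, fields1.length, fields2, 0, fields1.length);
    }

    public static void main(String[] args) {
        Object[] a = {"x", "y"};
        Object[] b = {"x", "y", "z"};
        System.out.println(prefixEqualsJava8(a, b)); // true
        System.out.println(prefixEqualsJava9(a, b)); // true
    }
}
```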






[GitHub] lucene-solr pull request #428: SOLR-12586: deprecate joda-time and use java....

2018-08-01 Thread barrotsteindev
GitHub user barrotsteindev opened a pull request:

https://github.com/apache/lucene-solr/pull/428

SOLR-12586: deprecate joda-time and use java.time instead



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/barrotsteindev/lucene-solr 
SOLR-12586-deprecata-joda.time

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/428.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #428


commit 6a76afa38ddf64d3517286b2c33eec3317cff004
Author: Bar Rotstein 
Date:   2018-08-01T17:35:11Z

SOLR-12586: ParsingFieldUpdateProcessor use java.time instead of joda-time

commit 6b9f2c9e033129fed6fa4b5072a15f66992511db
Author: Bar Rotstein 
Date:   2018-08-01T17:39:16Z

SOLR-12586: ConfigSetService use java.time instead of joda-time

commit 7a036f7c3187c1219d0cce288ff629981b6d8479
Author: Bar Rotstein 
Date:   2018-08-01T17:39:51Z

SOLR-12586: remove joda-time from ant dependencies and licenses




---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12509) Improve SplitShardCmd performance and reliability

2018-08-01 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  resolved SOLR-12509.
--
   Resolution: Fixed
Fix Version/s: 7.5

> Improve SplitShardCmd performance and reliability
> -
>
> Key: SOLR-12509
> URL: https://issues.apache.org/jira/browse/SOLR-12509
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 7.5
>
> Attachments: SOLR-12509.patch, SOLR-12509.patch
>
>
> {{SplitShardCmd}} is currently quite complex.
> Shard splitting occurs on active shards, which are still being updated, so 
> the splitting has to involve several carefully orchestrated steps, making 
> sure that new sub-shard placeholders are properly created and visible, and 
> then also applying buffered updates to the split leaders and performing 
> recovery on sub-shard replicas.
> This process could be simplified in cases where collections are not actively 
> being updated or can tolerate a little downtime - we could put the shard 
> "offline", ie. disable writing while the splitting is in progress (in order 
> to avoid users' confusion we should disable writing to the whole collection).
> The actual index splitting could perhaps be improved to use 
> {{HardLinkCopyDirectoryWrapper}} for creating a copy of the index by 
> hard-linking existing index segments, and then applying deletes to the 
> documents that don't belong in a sub-shard. However, the resulting index 
> slices that replicas would have to pull would be the same size as the whole 
> shard.
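
The hard-link idea mentioned above (what HardLinkCopyDirectoryWrapper does for 
Lucene directories) can be pictured with plain java.nio hard links - a minimal 
sketch of the concept, not Solr's actual split implementation:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class HardLinkCopy {
    // "Copies" an index directory by hard-linking each file instead of
    // duplicating its bytes: both directories then share the same on-disk
    // segment data, making the copy nearly instantaneous.
    static void linkCopy(Path srcDir, Path dstDir) throws IOException {
        Files.createDirectories(dstDir);
        try (DirectoryStream<Path> files = Files.newDirectoryStream(srcDir)) {
            for (Path f : files) {
                Files.createLink(dstDir.resolve(f.getFileName()), f);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempDirectory("index-src");
        Files.write(src.resolve("segment_0"), new byte[] {1, 2, 3});
        Path dst = src.resolveSibling(src.getFileName() + "-copy");
        linkCopy(src, dst);
        // The linked file has the same content without a byte copy.
        System.out.println(Files.size(dst.resolve("segment_0")));
    }
}
```

After linking, deletes can be applied to the new directory to drop documents 
that belong to the other sub-shard, as the issue describes.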



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12509) Improve SplitShardCmd performance and reliability

2018-08-01 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565690#comment-16565690
 ] 

ASF subversion and git services commented on SOLR-12509:


Commit 7faa803a7c9699f38b8a6b3ddd3a88c4729c5e5f in lucene-solr's branch 
refs/heads/branch_7x from Andrzej Bialecki
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7faa803 ]

SOLR-12509: Improve SplitShardCmd performance and reliability.


> Improve SplitShardCmd performance and reliability
> -
>
> Key: SOLR-12509
> URL: https://issues.apache.org/jira/browse/SOLR-12509
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-12509.patch, SOLR-12509.patch
>
>
> {{SplitShardCmd}} is currently quite complex.
> Shard splitting occurs on active shards, which are still being updated, so 
> the splitting has to involve several carefully orchestrated steps, making 
> sure that new sub-shard placeholders are properly created and visible, and 
> then also applying buffered updates to the split leaders and performing 
> recovery on sub-shard replicas.
> This process could be simplified in cases where collections are not actively 
> being updated or can tolerate a little downtime - we could put the shard 
> "offline", ie. disable writing while the splitting is in progress (in order 
> to avoid users' confusion we should disable writing to the whole collection).
> The actual index splitting could perhaps be improved to use 
> {{HardLinkCopyDirectoryWrapper}} for creating a copy of the index by 
> hard-linking existing index segments, and then applying deletes to the 
> documents that don't belong in a sub-shard. However, the resulting index 
> slices that replicas would have to pull would be the same size as the whole 
> shard.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.x - Build # 723 - Still Unstable

2018-08-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/723/

1 tests failed.
FAILED:  org.apache.solr.cloud.TestWithCollection.testNodeAdded

Error Message:
Action was not fired till 30 seconds

Stack Trace:
java.lang.AssertionError: Action was not fired till 30 seconds
at 
__randomizedtesting.SeedInfo.seed([6303D7D2D4D55FBB:6C081A57676F7B8]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.TestWithCollection.testNodeAdded(TestWithCollection.java:471)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 13555 lines...]
   [junit4] Suite: org.apache.solr.cloud.TestWithCollection
   [junit4]   2> 1868228 INFO  
(SUITE-TestWithCollection-seed#[6303D7D2D4D55FBB]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: 

[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 118 - Still Unstable

2018-08-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/118/

3 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
last state: 
DocCollection(testSplitIntegration_collection//clusterstate.json/40)={   
"replicationFactor":"1",   "pullReplicas":"0",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"2",   
"autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0",   
"autoCreated":"true",   "shards":{ "shard2":{   "replicas":{ 
"core_node3":{   
"core":"testSplitIntegration_collection_shard2_replica_n3",   
"leader":"true",   "SEARCHER.searcher.maxDoc":11,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":15740, 
  "node_name":"127.0.0.1:10006_solr",   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.4659017324447632E-5,   
"SEARCHER.searcher.numDocs":11}, "core_node4":{   
"core":"testSplitIntegration_collection_shard2_replica_n4",   
"SEARCHER.searcher.maxDoc":11,   "SEARCHER.searcher.deletedDocs":0, 
  "INDEX.sizeInBytes":15740,   "node_name":"127.0.0.1:10007_solr",  
 "state":"active",   "type":"NRT",   
"INDEX.sizeInGB":1.4659017324447632E-5,   
"SEARCHER.searcher.numDocs":11}},   "range":"0-7fff",   
"state":"active"}, "shard1":{   "stateTimestamp":"1533162607690832650", 
  "replicas":{ "core_node1":{   
"core":"testSplitIntegration_collection_shard1_replica_n1",   
"leader":"true",   "SEARCHER.searcher.maxDoc":14,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":17240, 
  "node_name":"127.0.0.1:10006_solr",   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.605600118637085E-5,   
"SEARCHER.searcher.numDocs":14}, "core_node2":{   
"core":"testSplitIntegration_collection_shard1_replica_n2",   
"SEARCHER.searcher.maxDoc":14,   "SEARCHER.searcher.deletedDocs":0, 
  "INDEX.sizeInBytes":17240,   "node_name":"127.0.0.1:10007_solr",  
 "state":"active",   "type":"NRT",   
"INDEX.sizeInGB":1.605600118637085E-5,   
"SEARCHER.searcher.numDocs":14}},   "range":"8000-",   
"state":"inactive"}, "shard1_1":{   "parent":"shard1",   
"stateTimestamp":"1533162607691642750",   "range":"c000-",  
 "state":"active",   "replicas":{ "core_node10":{   
"leader":"true",   
"core":"testSplitIntegration_collection_shard1_1_replica1",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":13740,   "node_name":"127.0.0.1:10007_solr",   
"base_url":"http://127.0.0.1:10007/solr",   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.2796372175216675E-5, 
  "SEARCHER.searcher.numDocs":7}, "core_node9":{   
"core":"testSplitIntegration_collection_shard1_1_replica0",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":13740,   "node_name":"127.0.0.1:10006_solr",   
"base_url":"http://127.0.0.1:10006/solr",   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.2796372175216675E-5, 
  "SEARCHER.searcher.numDocs":7}}}, "shard1_0":{   "parent":"shard1",   
"stateTimestamp":"1533162607691485750",   "range":"8000-bfff",  
 "state":"active",   "replicas":{ "core_node7":{   
"leader":"true",   
"core":"testSplitIntegration_collection_shard1_0_replica0",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":23980,   "node_name":"127.0.0.1:10006_solr",   
"base_url":"http://127.0.0.1:10006/solr",   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":2.2333115339279175E-5, 
  "SEARCHER.searcher.numDocs":7}, "core_node8":{   
"core":"testSplitIntegration_collection_shard1_0_replica1",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":23980,   "node_name":"127.0.0.1:10007_solr",   
"base_url":"http://127.0.0.1:10007/solr",   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":2.2333115339279175E-5, 
  "SEARCHER.searcher.numDocs":7}

Stack Trace:
java.util.concurrent.TimeoutException: last state: 
DocCollection(testSplitIntegration_collection//clusterstate.json/40)={
  "replicationFactor":"1",
  "pullReplicas":"0",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"2",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  

Re: Lucene/Solr 8.0

2018-08-01 Thread David Smiley
Yes, that new BKD/Points based code is definitely something we want in 8 or
7.5 -- it's a big deal.  I think it would also be awesome if we had
highlighter that could use the Weight.matches() API -- again for either 7.5
or 8.  I'm working on this on the UnifiedHighlighter front and Alan from
other aspects.
~ David

On Wed, Aug 1, 2018 at 12:51 PM Adrien Grand  wrote:

> I was hoping that we would release some bits of this new support for geo
> shapes in 7.5 already. We are already very close to being able to index
> points, lines and polygons and query for intersection with an envelope. It
> would be nice to add support for other relations (eg. disjoint) and queries
> (eg. polygon) but the current work looks already useful to me.
>
> On Wed, Aug 1, 2018 at 5:00 PM, Robert Muir  wrote:
>
>> My only other suggestion is we may want to get Nick's shape stuff into
>> the sandbox module at least for 8.0 so that it can be tested out. I
>> think it looks like that wouldn't delay any October target though?
>>
>> On Wed, Aug 1, 2018 at 9:51 AM, Adrien Grand  wrote:
>> > I'd like to revive this thread now that these new optimizations for
>> > collection of top docs are more usable and enabled by default in
>> > IndexSearcher (https://issues.apache.org/jira/browse/LUCENE-8060). Any
>> > feedback about starting to work towards releasing 8.0 and targeting
>> October
>> > 2018?
>> >
>> >
>> > On Thu, Jun 21, 2018 at 9:31 AM, Adrien Grand  wrote
>> :
>> >>
>> >> Hi Robert,
>> >>
>> >> I agree we need to make it more usable before 8.0. I would also like to
>> >> improve ReqOptSumScorer (
>> https://issues.apache.org/jira/browse/LUCENE-8204)
>> >> to leverage impacts so that queries that incorporate queries on feature
>> >> fields (https://issues.apache.org/jira/browse/LUCENE-8197) in an
>> optional
>> >> clause are also fast.
>> >>
>> >> On Thu, Jun 21, 2018 at 3:06 AM, Robert Muir  wrote:
>> >>>
>> >>> How can the end user actually use the biggest new feature: impacts and
>> >>> BMW? As far as I can tell, the issue to actually implement the
>> >>> necessary API changes (IndexSearcher/TopDocs/etc) is still open and
>> >>> unresolved, although there are some interesting ideas on it. This
>> >>> seems like a really big missing piece, without a proper API, the stuff
>> >>> is not really usable. I also can't imagine a situation where the API
>> >>> could be introduced in a followup minor release because it would be
>> >>> too invasive.
>> >>>
>> >>> On Mon, Jun 18, 2018 at 1:19 PM, Adrien Grand 
>> wrote:
>> >>> > Hi all,
>> >>> >
>> >>> > I would like to start discussing releasing Lucene/Solr 8.0. Lucene 8
>> >>> > already
>> >>> > has some good changes around scoring, notably cleanups to
>> >>> > similarities[1][2][3], indexing of impacts[4], and an
>> implementation of
>> >>> > Block-Max WAND[5] which, once combined, allow to run queries faster
>> >>> > when
>> >>> > total hit counts are not requested.
>> >>> >
>> >>> > [1] https://issues.apache.org/jira/browse/LUCENE-8116
>> >>> > [2] https://issues.apache.org/jira/browse/LUCENE-8020
>> >>> > [3] https://issues.apache.org/jira/browse/LUCENE-8007
>> >>> > [4] https://issues.apache.org/jira/browse/LUCENE-4198
>> >>> > [5] https://issues.apache.org/jira/browse/LUCENE-8135
>> >>> >
>> >>> > In terms of bug fixes, there is also a bad relevancy bug[6] which is
>> >>> > only in
>> >>> > 8.0 because it required a breaking change[7] to be implemented.
>> >>> >
>> >>> > [6] https://issues.apache.org/jira/browse/LUCENE-8031
>> >>> > [7] https://issues.apache.org/jira/browse/LUCENE-8134
>> >>> >
>> >>> > As usual, doing a new major release will also help age out old
>> codecs,
>> >>> > which
>> >>> > in-turn make maintenance easier: 8.0 will no longer need to care
>> about
>> >>> > the
>> >>> > fact that some codecs were initially implemented with a
>> random-access
>> >>> > API
>> >>> > for doc values, that pre-7.0 indices encoded norms differently, or
>> that
>> >>> > pre-6.2 indices could not record an index sort.
>> >>> >
>> >>> > I also expect that we will come up with ideas of things to do for
>> 8.0
>> >>> > as we
>> >>> > feel that the next major is getting closer. In terms of planning, I
>> was
>> >>> > thinking that we could target something like october 2018, which
>> would
>> >>> > be
>> >>> > 12-13 months after 7.0 and 3-4 months from now.
>> >>> >
>> >>> > From a Solr perspective, the main change I'm aware of that would be
>> >>> > worth
>> >>> > releasing a new major is the Star Burst effort. Is it something we
>> want
>> >>> > to
>> >>> > get in for 8.0?
>> >>> >
>> >>> > Adrien
>> >>>
>> >>> -
>> >>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> >>> For additional commands, e-mail: dev-h...@lucene.apache.org
>> >>>
>> >
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, 

Re: Lucene/Solr 8.0

2018-08-01 Thread Adrien Grand
I was hoping that we would release some bits of this new support for geo
shapes in 7.5 already. We are already very close to being able to index
points, lines and polygons and query for intersection with an envelope. It
would be nice to add support for other relations (eg. disjoint) and queries
(eg. polygon) but the current work looks already useful to me.

On Wed, Aug 1, 2018 at 5:00 PM, Robert Muir  wrote:

> My only other suggestion is we may want to get Nick's shape stuff into
> the sandbox module at least for 8.0 so that it can be tested out. I
> think it looks like that wouldn't delay any October target though?
>
> On Wed, Aug 1, 2018 at 9:51 AM, Adrien Grand  wrote:
> > I'd like to revive this thread now that these new optimizations for
> > collection of top docs are more usable and enabled by default in
> > IndexSearcher (https://issues.apache.org/jira/browse/LUCENE-8060). Any
> > feedback about starting to work towards releasing 8.0 and targeting
> October
> > 2018?
> >
> >
> > On Thu, Jun 21, 2018 at 9:31 AM, Adrien Grand  wrote:
> >>
> >> Hi Robert,
> >>
> >> I agree we need to make it more usable before 8.0. I would also like to
> >> improve ReqOptSumScorer (
> https://issues.apache.org/jira/browse/LUCENE-8204)
> >> to leverage impacts so that queries that incorporate queries on feature
> >> fields (https://issues.apache.org/jira/browse/LUCENE-8197) in an
> optional
> >> clause are also fast.
> >>
> >> On Thu, Jun 21, 2018 at 3:06 AM, Robert Muir  wrote:
> >>>
> >>> How can the end user actually use the biggest new feature: impacts and
> >>> BMW? As far as I can tell, the issue to actually implement the
> >>> necessary API changes (IndexSearcher/TopDocs/etc) is still open and
> >>> unresolved, although there are some interesting ideas on it. This
> >>> seems like a really big missing piece, without a proper API, the stuff
> >>> is not really usable. I also can't imagine a situation where the API
> >>> could be introduced in a followup minor release because it would be
> >>> too invasive.
> >>>
> >>> On Mon, Jun 18, 2018 at 1:19 PM, Adrien Grand 
> wrote:
> >>> > Hi all,
> >>> >
> >>> > I would like to start discussing releasing Lucene/Solr 8.0. Lucene 8
> >>> > already
> >>> > has some good changes around scoring, notably cleanups to
> >>> > similarities[1][2][3], indexing of impacts[4], and an implementation
> of
> >>> > Block-Max WAND[5] which, once combined, allow to run queries faster
> >>> > when
> >>> > total hit counts are not requested.
> >>> >
> >>> > [1] https://issues.apache.org/jira/browse/LUCENE-8116
> >>> > [2] https://issues.apache.org/jira/browse/LUCENE-8020
> >>> > [3] https://issues.apache.org/jira/browse/LUCENE-8007
> >>> > [4] https://issues.apache.org/jira/browse/LUCENE-4198
> >>> > [5] https://issues.apache.org/jira/browse/LUCENE-8135
> >>> >
> >>> > In terms of bug fixes, there is also a bad relevancy bug[6] which is
> >>> > only in
> >>> > 8.0 because it required a breaking change[7] to be implemented.
> >>> >
> >>> > [6] https://issues.apache.org/jira/browse/LUCENE-8031
> >>> > [7] https://issues.apache.org/jira/browse/LUCENE-8134
> >>> >
> >>> > As usual, doing a new major release will also help age out old
> codecs,
> >>> > which
> >>> > in-turn make maintenance easier: 8.0 will no longer need to care
> about
> >>> > the
> >>> > fact that some codecs were initially implemented with a random-access
> >>> > API
> >>> > for doc values, that pre-7.0 indices encoded norms differently, or
> that
> >>> > pre-6.2 indices could not record an index sort.
> >>> >
> >>> > I also expect that we will come up with ideas of things to do for 8.0
> >>> > as we
> >>> > feel that the next major is getting closer. In terms of planning, I
> was
> >>> > thinking that we could target something like october 2018, which
> would
> >>> > be
> >>> > 12-13 months after 7.0 and 3-4 months from now.
> >>> >
> >>> > From a Solr perspective, the main change I'm aware of that would be
> >>> > worth
> >>> > releasing a new major is the Star Burst effort. Is it something we
> want
> >>> > to
> >>> > get in for 8.0?
> >>> >
> >>> > Adrien
> >>>
> >>> -
> >>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> >>> For additional commands, e-mail: dev-h...@lucene.apache.org
> >>>
> >
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Commented] (LUCENE-8441) Wrong index sort field type throws unexpected NullPointerException

2018-08-01 Thread Michael McCandless (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565612#comment-16565612
 ] 

Michael McCandless commented on LUCENE-8441:


{quote}The check is done only once per DV field (when the doc value type is not 
set) so it should be ok in terms of performance ?
{quote}
Aha!  You are right!  Perfect.  Thanks for fixing so quickly [~jim.ferenczi].

> Wrong index sort field type throws unexpected NullPointerException
> --
>
> Key: LUCENE-8441
> URL: https://issues.apache.org/jira/browse/LUCENE-8441
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Minor
> Fix For: master (8.0), 7.5
>
> Attachments: LUCENE-8441.patch, LUCENE-8441.patch
>
>
> I came across this scary exception if you pass the wrong {{SortField.Type}} 
> for a field; I'll attach patch w/ small test case:
> {noformat}
> 1) testWrongSortFieldType(org.apache.lucene.index.TestIndexSorting)
> java.lang.NullPointerException
> at 
> __randomizedtesting.SeedInfo.seed([995FF58C7B184E8F:B0CC507647B2ED95]:0)
> at 
> org.apache.lucene.index.SortingTermVectorsConsumer.abort(SortingTermVectorsConsumer.java:87)
> at org.apache.lucene.index.TermsHash.abort(TermsHash.java:68)
> at 
> org.apache.lucene.index.DefaultIndexingChain.abort(DefaultIndexingChain.java:332)
> at 
> org.apache.lucene.index.DocumentsWriterPerThread.abort(DocumentsWriterPerThread.java:138)
> at 
> org.apache.lucene.index.DocumentsWriterPerThread.maybeAbort(DocumentsWriterPerThread.java:532)
> at 
> org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:524)
> at 
> org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:554)
> at 
> org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:719)
> at 
> org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:3201)
> at 
> org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3446)
> at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3411)
> at 
> org.apache.lucene.index.TestIndexSorting.testWrongSortFieldType(TestIndexSorting.java:2489)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8441) Wrong index sort field type throws unexpected NullPointerException

2018-08-01 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565594#comment-16565594
 ] 

ASF subversion and git services commented on LUCENE-8441:
-

Commit 276a851b0549af6fdd8a80e86592df7a812338de in lucene-solr's branch 
refs/heads/branch_7x from [~jim.ferenczi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=276a851 ]

LUCENE-8441: IndexWriter now checks doc value type of index sort fields and 
fails the document if they are not compatible.


> Wrong index sort field type throws unexpected NullPointerException
> --
>
> Key: LUCENE-8441
> URL: https://issues.apache.org/jira/browse/LUCENE-8441
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Minor
> Fix For: master (8.0), 7.5
>
> Attachments: LUCENE-8441.patch, LUCENE-8441.patch
>
>
> I came across this scary exception if you pass the wrong {{SortField.Type}} 
> for a field; I'll attach patch w/ small test case:
> {noformat}
> 1) testWrongSortFieldType(org.apache.lucene.index.TestIndexSorting)
> java.lang.NullPointerException
> at 
> __randomizedtesting.SeedInfo.seed([995FF58C7B184E8F:B0CC507647B2ED95]:0)
> at 
> org.apache.lucene.index.SortingTermVectorsConsumer.abort(SortingTermVectorsConsumer.java:87)
> at org.apache.lucene.index.TermsHash.abort(TermsHash.java:68)
> at 
> org.apache.lucene.index.DefaultIndexingChain.abort(DefaultIndexingChain.java:332)
> at 
> org.apache.lucene.index.DocumentsWriterPerThread.abort(DocumentsWriterPerThread.java:138)
> at 
> org.apache.lucene.index.DocumentsWriterPerThread.maybeAbort(DocumentsWriterPerThread.java:532)
> at 
> org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:524)
> at 
> org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:554)
> at 
> org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:719)
> at 
> org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:3201)
> at 
> org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3446)
> at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3411)
> at 
> org.apache.lucene.index.TestIndexSorting.testWrongSortFieldType(TestIndexSorting.java:2489)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-8441) Wrong index sort field type throws unexpected NullPointerException

2018-08-01 Thread Jim Ferenczi (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Ferenczi resolved LUCENE-8441.
--
   Resolution: Fixed
Fix Version/s: 7.5
   master (8.0)

> Wrong index sort field type throws unexpected NullPointerException
> --
>
> Key: LUCENE-8441
> URL: https://issues.apache.org/jira/browse/LUCENE-8441
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Minor
> Fix For: master (8.0), 7.5
>
> Attachments: LUCENE-8441.patch, LUCENE-8441.patch
>
>
> I came across this scary exception if you pass the wrong {{SortField.Type}} 
> for a field; I'll attach patch w/ small test case:
> {noformat}
> 1) testWrongSortFieldType(org.apache.lucene.index.TestIndexSorting)
> java.lang.NullPointerException
> at 
> __randomizedtesting.SeedInfo.seed([995FF58C7B184E8F:B0CC507647B2ED95]:0)
> at 
> org.apache.lucene.index.SortingTermVectorsConsumer.abort(SortingTermVectorsConsumer.java:87)
> at org.apache.lucene.index.TermsHash.abort(TermsHash.java:68)
> at 
> org.apache.lucene.index.DefaultIndexingChain.abort(DefaultIndexingChain.java:332)
> at 
> org.apache.lucene.index.DocumentsWriterPerThread.abort(DocumentsWriterPerThread.java:138)
> at 
> org.apache.lucene.index.DocumentsWriterPerThread.maybeAbort(DocumentsWriterPerThread.java:532)
> at 
> org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:524)
> at 
> org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:554)
> at 
> org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:719)
> at 
> org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:3201)
> at 
> org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3446)
> at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3411)
> at 
> org.apache.lucene.index.TestIndexSorting.testWrongSortFieldType(TestIndexSorting.java:2489)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12607) Investigate ShardSplitTest failures

2018-08-01 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565591#comment-16565591
 ] 

Erick Erickson commented on SOLR-12607:
---

OK, started some up. I'll report what I find if anything.


> Investigate ShardSplitTest failures
> ---
>
> Key: SOLR-12607
> URL: https://issues.apache.org/jira/browse/SOLR-12607
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0), 7.5
>
>
> There have been many recent ShardSplitTest failures. 
> According to http://fucit.org/solr-jenkins-reports/failure-report.html
> {code}
> Class: org.apache.solr.cloud.api.collections.ShardSplitTest
> Method: testSplitWithChaosMonkey
> Failures: 72.32% (81 / 112)
> Class: org.apache.solr.cloud.api.collections.ShardSplitTest
> Method: test
> Failures: 26.79% (30 / 112)
> {code} 
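The percentages in the report quoted above are simply fails divided by runs; a minimal sketch of the computation (plain Python; `failure_rate` is a hypothetical helper name, not part of the Jenkins report tooling):

```python
def failure_rate(fails, runs):
    """Failure rate as a percentage, rounded to two decimals."""
    return round(100.0 * fails / runs, 2)

# Figures quoted in the report above:
print(failure_rate(81, 112))  # testSplitWithChaosMonkey -> 72.32
print(failure_rate(30, 112))  # test -> 26.79
```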






[jira] [Commented] (LUCENE-8441) Wrong index sort field type throws unexpected NullPointerException

2018-08-01 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565584#comment-16565584
 ] 

ASF subversion and git services commented on LUCENE-8441:
-

Commit 679b4aa71d205ac58621f6b2bad64637f6bd7d67 in lucene-solr's branch 
refs/heads/master from [~jim.ferenczi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=679b4aa ]

LUCENE-8441: IndexWriter now checks doc value type of index sort fields and 
fails the document if they are not compatible.


> Wrong index sort field type throws unexpected NullPointerException
> --
>
> Key: LUCENE-8441
> URL: https://issues.apache.org/jira/browse/LUCENE-8441
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Minor
> Attachments: LUCENE-8441.patch, LUCENE-8441.patch
>
>
> I came across this scary exception if you pass the wrong {{SortField.Type}} 
> for a field; I'll attach patch w/ small test case:
> {noformat}
> 1) testWrongSortFieldType(org.apache.lucene.index.TestIndexSorting)
> java.lang.NullPointerException
> at 
> __randomizedtesting.SeedInfo.seed([995FF58C7B184E8F:B0CC507647B2ED95]:0)
> at 
> org.apache.lucene.index.SortingTermVectorsConsumer.abort(SortingTermVectorsConsumer.java:87)
> at org.apache.lucene.index.TermsHash.abort(TermsHash.java:68)
> at 
> org.apache.lucene.index.DefaultIndexingChain.abort(DefaultIndexingChain.java:332)
> at 
> org.apache.lucene.index.DocumentsWriterPerThread.abort(DocumentsWriterPerThread.java:138)
> at 
> org.apache.lucene.index.DocumentsWriterPerThread.maybeAbort(DocumentsWriterPerThread.java:532)
> at 
> org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:524)
> at 
> org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:554)
> at 
> org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:719)
> at 
> org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:3201)
> at 
> org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3446)
> at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3411)
> at 
> org.apache.lucene.index.TestIndexSorting.testWrongSortFieldType(TestIndexSorting.java:2489)
> {noformat}






[jira] [Commented] (LUCENE-8441) Wrong index sort field type throws unexpected NullPointerException

2018-08-01 Thread Jim Ferenczi (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565574#comment-16565574
 ] 

Jim Ferenczi commented on LUCENE-8441:
--

The check is done only once per DV field (when the doc value type is not
set), so it should be OK in terms of performance?
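The once-per-field pattern Jim describes (validate on first use, then remember the result so later documents pay no cost) can be sketched generically; this is an illustration of the caching idea only, not Lucene's actual implementation, and all names below are hypothetical:

```python
class FieldTypeChecker:
    """Validate a field's doc-value type against the index sort once,
    the first time the field is seen; later documents skip the check."""

    def __init__(self, expected_types):
        self.expected = expected_types   # field name -> required type
        self.checked = set()             # fields already validated
        self.validations = 0             # instrumentation for the sketch

    def check(self, field, dv_type):
        if field in self.checked:        # already validated once
            return
        self.validations += 1
        required = self.expected.get(field)
        if required is not None and dv_type != required:
            raise ValueError(
                f"invalid doc value type {dv_type} for sort field "
                f"{field!r} (expected {required})")
        self.checked.add(field)

checker = FieldTypeChecker({"timestamp": "NUMERIC"})
for _ in range(1000):                    # many documents, one validation
    checker.check("timestamp", "NUMERIC")
print(checker.validations)               # -> 1
```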


> Wrong index sort field type throws unexpected NullPointerException
> --
>
> Key: LUCENE-8441
> URL: https://issues.apache.org/jira/browse/LUCENE-8441
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Minor
> Attachments: LUCENE-8441.patch, LUCENE-8441.patch
>
>
> I came across this scary exception if you pass the wrong {{SortField.Type}} 
> for a field; I'll attach patch w/ small test case:
> {noformat}
> 1) testWrongSortFieldType(org.apache.lucene.index.TestIndexSorting)
> java.lang.NullPointerException
> at 
> __randomizedtesting.SeedInfo.seed([995FF58C7B184E8F:B0CC507647B2ED95]:0)
> at 
> org.apache.lucene.index.SortingTermVectorsConsumer.abort(SortingTermVectorsConsumer.java:87)
> at org.apache.lucene.index.TermsHash.abort(TermsHash.java:68)
> at 
> org.apache.lucene.index.DefaultIndexingChain.abort(DefaultIndexingChain.java:332)
> at 
> org.apache.lucene.index.DocumentsWriterPerThread.abort(DocumentsWriterPerThread.java:138)
> at 
> org.apache.lucene.index.DocumentsWriterPerThread.maybeAbort(DocumentsWriterPerThread.java:532)
> at 
> org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:524)
> at 
> org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:554)
> at 
> org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:719)
> at 
> org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:3201)
> at 
> org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3446)
> at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3411)
> at 
> org.apache.lucene.index.TestIndexSorting.testWrongSortFieldType(TestIndexSorting.java:2489)
> {noformat}






Re: BadApple report. Seems like I'm wasting my time.

2018-08-01 Thread Mark Miller
I still think it’s a mistake to try and use all the Jenkins results to
drive ignoring tests. It needs to be an objective measure in a good env.

We also should not be ignoring tests en masse without individual
consideration. Critical test coverage should be treated differently than
any random test, especially when stability is sometimes simple to achieve
for that test.

A decade+ of history says it’s unlikely you get much consistent help
digging out of a huge test ignore hell.

Beasting in a known good environment and a few very interested parties is
the only path out of this if you ask me. We need to get clean in a known
good env and then automate beasting defense, using Jenkins to find issues
in other environments.

Unfortunately, not something I can help out with in the short term anymore.

Mark
On Wed, Aug 1, 2018 at 8:10 AM Erick Erickson 
wrote:

> Alexandre:
>
> Feel free! What I'm struggling with is not that someone checked in
> some code that all of a sudden started breaking things. Rather, it's that a
> test that's been working perfectly will fail once, then won't
> reproducibly fail again, and does _not_ appear to be related to recent
> code changes.
>
> In fact that's the crux of the matter, it's difficult/impossible to
> tell at a glance when a test fails whether it is or is not related to
> a recent code change.
>
> Erick
>
> On Wed, Aug 1, 2018 at 8:05 AM, Alexandre Rafalovitch
>  wrote:
> > Just a completely random thought that I do not have deep knowledge for
> > (still learning my way around Solr tests).
> >
> > Is this something that Machine Learning could help with? The Github
> > repo/history is a fantastic source of learning on who worked on which
> > file, how often, etc. We certainly should be able to get some 'most
> > significant developer' stats out of that.
> >
> > Regards,
> >Alex.
> >
> > On 1 August 2018 at 10:56, Erick Erickson 
> wrote:
> >> Shawn:
> >>
> >> Trouble is there were 945 tests that failed at least once in the last
> >> 4 weeks. And the trend is all over the map on a weekly basis.
> >>
> >> e-mail-2018-06-11.txt: There were 989 unannotated tests that failed
> >> e-mail-2018-06-18.txt: There were 689 unannotated tests that failed
> >> e-mail-2018-06-25.txt: There were 555 unannotated tests that failed
> >> e-mail-2018-07-02.txt: There were 723 unannotated tests that failed
> >> e-mail-2018-07-09.txt: There were 793 unannotated tests that failed
> >> e-mail-2018-07-16.txt: There were 809 unannotated tests that failed
> >> e-mail-2018-07-23.txt: There were 953 unannotated tests that failed
> >> e-mail-2018-07-30.txt: There were 945 unannotated tests that failed
> >>
> >> I'm BadApple'ing tests that fail every week for the last 4 weeks on
> >> the theory that those are not temporary issues (hey, we all commit
> >> code that breaks something then have to figure out why and fix).
> >>
> >> I also have the feeling that somewhere, somehow, our test framework is
> >> making some assumptions that are invalid. Or too strict. Or too fast.
> >> Or there's some fundamental issue with some of our classes. Or... The
> >> number of sporadic issues where the Object Tracker spits stuff out for
> >> instance screams that some assumption we're making, either in the code
> >> or in the test framework is flawed.
> >>
> >> What I don't know is how to make visible progress. It's discouraging
> >> to fix something and then next week have more tests fail for unrelated
> >> reasons.
> >>
> >> Visibility is the issue to me. We have no good way of saying "these
> >> tests _just started failing for a reason. As a quick experiment, I
> >> extended the triage to 10 weeks (no attempt to ascertain if these
> >> tests even existed 10 weeks ago). Here are the tests that have _only_
> >> failed in the last week, not the previous 9. BadApple'ing anything
> >> that's only failed once seems overkill
> >>
> >> Although the test that failed 77 times does just stand out
> >>
>> week   pct   runs  fails  test
>>    0   0.2    460      1  CloudSolrClientTest.testVersionsAreReturned
>>    0   0.2    466      1  ComputePlanActionTest.testSelectedCollections
>>    0   0.2    464      1  ConfusionMatrixGeneratorTest.testGetConfusionMatrixWithBM25NB
>>    0   8.1     37      3  IndexSizeTriggerTest(suite)
>>    0   0.2    454      1  MBeansHandlerTest.testAddedMBeanDiff
>>    0   0.2    454      1  MBeansHandlerTest.testDiff
>>    0   0.2    455      1  MetricTriggerTest.test
>>    0   0.2    455      1  MetricsHandlerTest.test
>>    0   0.2    455      1  MetricsHandlerTest.testKeyMetrics
>>    0   0.2    453      1  RequestHandlersTest.testInitCount
>>    0   0.2    453      1  RequestHandlersTest.testStatistics
>>    0   0.2    453      1  ScheduledTriggerIntegrationTest(suite)
>>    0   0.2    451      1
> 

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-11-ea+24) - Build # 22577 - Still Unstable!

2018-08-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22577/
Java: 64bit/jdk-11-ea+24 -XX:-UseCompressedOops -XX:+UseParallelGC

45 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.ltr.TestLTROnSolrCloud

Error Message:
43 threads leaked from SUITE scope at org.apache.solr.ltr.TestLTROnSolrCloud:   
  1) Thread[id=57, name=ScheduledTrigger-7-thread-2, state=WAITING, 
group=TGRP-TestLTROnSolrCloud] at 
java.base@11-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@11-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
 at 
java.base@11-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
 at 
java.base@11-ea/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1177)
 at 
java.base@11-ea/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:899)
 at 
java.base@11-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
 at 
java.base@11-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 at 
java.base@11-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 at java.base@11-ea/java.lang.Thread.run(Thread.java:834)2) 
Thread[id=39, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-TestLTROnSolrCloud] at 
java.base@11-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@11-ea/java.lang.Thread.run(Thread.java:834)3) 
Thread[id=48, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-TestLTROnSolrCloud] at 
java.base@11-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@11-ea/java.lang.Thread.run(Thread.java:834)4) 
Thread[id=18, name=SessionTracker, state=TIMED_WAITING, 
group=TGRP-TestLTROnSolrCloud] at 
java.base@11-ea/java.lang.Object.wait(Native Method) at 
app//org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:147)
5) Thread[id=61, name=MetricsHistoryHandler-12-thread-1, 
state=TIMED_WAITING, group=TGRP-TestLTROnSolrCloud] at 
java.base@11-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@11-ea/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@11-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2123)
 at 
java.base@11-ea/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1182)
 at 
java.base@11-ea/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:899)
 at 
java.base@11-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
 at 
java.base@11-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 at 
java.base@11-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 at java.base@11-ea/java.lang.Thread.run(Thread.java:834)6) 
Thread[id=81, name=qtp777350429-81, state=TIMED_WAITING, 
group=TGRP-TestLTROnSolrCloud] at 
java.base@11-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@11-ea/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@11-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2123)
 at 
app//org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:392)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:653)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.access$800(QueuedThreadPool.java:48)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:717)
 at java.base@11-ea/java.lang.Thread.run(Thread.java:834)7) 
Thread[id=30, 
name=qtp777350429-30-acceptor-0@5d98593b-ServerConnector@535fbcf8{SSL,[ssl, 
http/1.1]}{127.0.0.1:40849}, state=RUNNABLE, group=TGRP-TestLTROnSolrCloud] 
at java.base@11-ea/sun.nio.ch.ServerSocketChannelImpl.accept0(Native 
Method) at 
java.base@11-ea/sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:533)
 at 
java.base@11-ea/sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:285)
 at 
app//org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:369)  
   at 
app//org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:639)
 at 

[jira] [Commented] (SOLR-12607) Investigate ShardSplitTest failures

2018-08-01 Thread Shalin Shekhar Mangar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565519#comment-16565519
 ] 

Shalin Shekhar Mangar commented on SOLR-12607:
--

[~erickerickson] - I'm going to merge this tomorrow my time so if you have free 
resources, beast away on ShardSplitTest on branch {{jira/solr-12607}} and let 
me know how it looks. Thanks!

> Investigate ShardSplitTest failures
> ---
>
> Key: SOLR-12607
> URL: https://issues.apache.org/jira/browse/SOLR-12607
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0), 7.5
>
>
> There have been many recent ShardSplitTest failures. 
> According to http://fucit.org/solr-jenkins-reports/failure-report.html
> {code}
> Class: org.apache.solr.cloud.api.collections.ShardSplitTest
> Method: testSplitWithChaosMonkey
> Failures: 72.32% (81 / 112)
> Class: org.apache.solr.cloud.api.collections.ShardSplitTest
> Method: test
> Failures: 26.79% (30 / 112)
> {code} 






[jira] [Commented] (SOLR-12610) Inject failures during synchronous update requests during shard splits

2018-08-01 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565465#comment-16565465
 ] 

Erick Erickson commented on SOLR-12610:
---

[~shalinmangar] Let me know if you'd like some beasting done on this either 
before or after you push it.

> Inject failures during synchronous update requests during shard splits
> --
>
> Key: SOLR-12610
> URL: https://issues.apache.org/jira/browse/SOLR-12610
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12610.patch
>
>
> In SOLR-12607, I found a bug where the StdNode's shard was not set correctly 
> causing exceptions during updates forwarded to sub-shard leaders to not be 
> sent back to the clients. This can cause data loss during split. A fix was 
> committed as part of SOLR-12607 but we need to expand coverage to this 
> situation. I'll add failure injection during the synchronous update step to 
> simulate this condition. This will be randomized for each shard split test 
> method.
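A hedged sketch of the failure-injection idea described above (generic Python with hypothetical names, not Solr's actual TestInjection API): fail the synchronous forward step with a configurable probability, and verify the error is propagated back to the caller rather than swallowed:

```python
import random

class InjectedFailure(Exception):
    """Raised by the injection point to simulate a forwarding failure."""

def forward_to_subshard(doc, failure_ratio=0.0, rng=random):
    """Simulate the synchronous update forward to a sub-shard leader,
    failing with the given probability (1.0 = always fail)."""
    if rng.random() < failure_ratio:
        raise InjectedFailure(f"injected failure forwarding {doc!r}")
    return "ok"

def client_update(doc, failure_ratio):
    """The client must observe the failure, not a silent success."""
    try:
        return forward_to_subshard(doc, failure_ratio)
    except InjectedFailure as e:
        return f"error: {e}"     # propagated back to the client

print(client_update("doc1", failure_ratio=0.0))  # -> ok
print(client_update("doc1", failure_ratio=1.0))  # -> error: injected ...
```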






Re: BadApple report. Seems like I'm wasting my time.

2018-08-01 Thread Erick Erickson
Alexandre:

Feel free! What I'm struggling with is not that someone checked in
some code that all of a sudden started breaking things. Rather, it's that a
test that's been working perfectly will fail once, then won't
reproducibly fail again, and does _not_ appear to be related to recent
code changes.

In fact that's the crux of the matter, it's difficult/impossible to
tell at a glance when a test fails whether it is or is not related to
a recent code change.

Erick

On Wed, Aug 1, 2018 at 8:05 AM, Alexandre Rafalovitch
 wrote:
> Just a completely random thought that I do not have deep knowledge for
> (still learning my way around Solr tests).
>
> Is this something that Machine Learning could help with? The Github
> repo/history is a fantastic source of learning on who worked on which
> file, how often, etc. We certainly should be able to get some 'most
> significant developer' stats out of that.
>
> Regards,
>Alex.
>
> On 1 August 2018 at 10:56, Erick Erickson  wrote:
>> Shawn:
>>
>> Trouble is there were 945 tests that failed at least once in the last
>> 4 weeks. And the trend is all over the map on a weekly basis.
>>
>> e-mail-2018-06-11.txt: There were 989 unannotated tests that failed
>> e-mail-2018-06-18.txt: There were 689 unannotated tests that failed
>> e-mail-2018-06-25.txt: There were 555 unannotated tests that failed
>> e-mail-2018-07-02.txt: There were 723 unannotated tests that failed
>> e-mail-2018-07-09.txt: There were 793 unannotated tests that failed
>> e-mail-2018-07-16.txt: There were 809 unannotated tests that failed
>> e-mail-2018-07-23.txt: There were 953 unannotated tests that failed
>> e-mail-2018-07-30.txt: There were 945 unannotated tests that failed
>>
>> I'm BadApple'ing tests that fail every week for the last 4 weeks on
>> the theory that those are not temporary issues (hey, we all commit
>> code that breaks something then have to figure out why and fix).
>>
>> I also have the feeling that somewhere, somehow, our test framework is
>> making some assumptions that are invalid. Or too strict. Or too fast.
>> Or there's some fundamental issue with some of our classes. Or... The
>> number of sporadic issues where the Object Tracker spits stuff out for
>> instance screams that some assumption we're making, either in the code
>> or in the test framework is flawed.
>>
>> What I don't know is how to make visible progress. It's discouraging
>> to fix something and then next week have more tests fail for unrelated
>> reasons.
>>
>> Visibility is the issue to me. We have no good way of saying "these
>> tests _just started failing for a reason. As a quick experiment, I
>> extended the triage to 10 weeks (no attempt to ascertain if these
>> tests even existed 10 weeks ago). Here are the tests that have _only_
>> failed in the last week, not the previous 9. BadApple'ing anything
>> that's only failed once seems overkill
>>
>> Although the test that failed 77 times does just stand out
>>
>> week   pct   runs  fails  test
>>    0   0.2    460      1  CloudSolrClientTest.testVersionsAreReturned
>>    0   0.2    466      1  ComputePlanActionTest.testSelectedCollections
>>    0   0.2    464      1  ConfusionMatrixGeneratorTest.testGetConfusionMatrixWithBM25NB
>>    0   8.1     37      3  IndexSizeTriggerTest(suite)
>>    0   0.2    454      1  MBeansHandlerTest.testAddedMBeanDiff
>>    0   0.2    454      1  MBeansHandlerTest.testDiff
>>    0   0.2    455      1  MetricTriggerTest.test
>>    0   0.2    455      1  MetricsHandlerTest.test
>>    0   0.2    455      1  MetricsHandlerTest.testKeyMetrics
>>    0   0.2    453      1  RequestHandlersTest.testInitCount
>>    0   0.2    453      1  RequestHandlersTest.testStatistics
>>    0   0.2    453      1  ScheduledTriggerIntegrationTest(suite)
>>    0   0.2    451      1  SearchRateTriggerTest.testWaitForElapsed
>>    0   0.2    425      1  SoftAutoCommitTest.testSoftCommitWithinAndHardCommitMaxTimeRapidAdds
>>    0  14.7    525     77  StreamExpressionTest.testSignificantTermsStream
>>    0   0.2    454      1  TestBadConfig(suite)
>>    0   0.2    465      1  TestBlockJoin.testMultiChildQueriesOfDiffParentLevels
>>    0   0.6    462      3  TestCloudCollectionsListeners.testCollectionDeletion
>>    0   0.2    456      1  TestInfoStreamLogging(suite)
>>    0   0.2    456      1  TestLazyCores.testLazySearch
>>    0   0.2    473      1  TestLucene70DocValuesFormat.testSortedSetAroundBlockSize
>>    0  15.4     26      4  TestMockDirectoryWrapper.testThreadSafetyInListAll
>>    0   0.2    454      1  TestNodeLostTrigger.testTrigger
>>    0   0.2    453      1  TestRecovery.stressLogReplay
>>    0   0.2    505      1
>> 

Re: BadApple report. Seems like I'm wasting my time.

2018-08-01 Thread Alexandre Rafalovitch
Just a completely random thought that I do not have deep knowledge for
(still learning my way around Solr tests).

Is this something that Machine Learning could help with? The Github
repo/history is a fantastic source of learning on who worked on which
file, how often, etc. We certainly should be able to get some 'most
significant developer' stats out of that.

Regards,
   Alex.

On 1 August 2018 at 10:56, Erick Erickson  wrote:
> Shawn:
>
> Trouble is there were 945 tests that failed at least once in the last
> 4 weeks. And the trend is all over the map on a weekly basis.
>
> e-mail-2018-06-11.txt: There were 989 unannotated tests that failed
> e-mail-2018-06-18.txt: There were 689 unannotated tests that failed
> e-mail-2018-06-25.txt: There were 555 unannotated tests that failed
> e-mail-2018-07-02.txt: There were 723 unannotated tests that failed
> e-mail-2018-07-09.txt: There were 793 unannotated tests that failed
> e-mail-2018-07-16.txt: There were 809 unannotated tests that failed
> e-mail-2018-07-23.txt: There were 953 unannotated tests that failed
> e-mail-2018-07-30.txt: There were 945 unannotated tests that failed
>
> I'm BadApple'ing tests that fail every week for the last 4 weeks on
> the theory that those are not temporary issues (hey, we all commit
> code that breaks something then have to figure out why and fix).
>
> I also have the feeling that somewhere, somehow, our test framework is
> making some assumptions that are invalid. Or too strict. Or too fast.
> Or there's some fundamental issue with some of our classes. Or... The
> number of sporadic issues where the Object Tracker spits stuff out for
> instance screams that some assumption we're making, either in the code
> or in the test framework is flawed.
>
> What I don't know is how to make visible progress. It's discouraging
> to fix something and then next week have more tests fail for unrelated
> reasons.
>
> Visibility is the issue to me. We have no good way of saying "these
> tests _just started failing for a reason. As a quick experiment, I
> extended the triage to 10 weeks (no attempt to ascertain if these
> tests even existed 10 weeks ago). Here are the tests that have _only_
> failed in the last week, not the previous 9. BadApple'ing anything
> that's only failed once seems overkill
>
> Although the test that failed 77 times does just stand out
>
> week   pct   runs  fails  test
>    0   0.2    460      1  CloudSolrClientTest.testVersionsAreReturned
>    0   0.2    466      1  ComputePlanActionTest.testSelectedCollections
>    0   0.2    464      1  ConfusionMatrixGeneratorTest.testGetConfusionMatrixWithBM25NB
>    0   8.1     37      3  IndexSizeTriggerTest(suite)
>    0   0.2    454      1  MBeansHandlerTest.testAddedMBeanDiff
>    0   0.2    454      1  MBeansHandlerTest.testDiff
>    0   0.2    455      1  MetricTriggerTest.test
>    0   0.2    455      1  MetricsHandlerTest.test
>    0   0.2    455      1  MetricsHandlerTest.testKeyMetrics
>    0   0.2    453      1  RequestHandlersTest.testInitCount
>    0   0.2    453      1  RequestHandlersTest.testStatistics
>    0   0.2    453      1  ScheduledTriggerIntegrationTest(suite)
>    0   0.2    451      1  SearchRateTriggerTest.testWaitForElapsed
>    0   0.2    425      1  SoftAutoCommitTest.testSoftCommitWithinAndHardCommitMaxTimeRapidAdds
>    0  14.7    525     77  StreamExpressionTest.testSignificantTermsStream
>    0   0.2    454      1  TestBadConfig(suite)
>    0   0.2    465      1  TestBlockJoin.testMultiChildQueriesOfDiffParentLevels
>    0   0.6    462      3  TestCloudCollectionsListeners.testCollectionDeletion
>    0   0.2    456      1  TestInfoStreamLogging(suite)
>    0   0.2    456      1  TestLazyCores.testLazySearch
>    0   0.2    473      1  TestLucene70DocValuesFormat.testSortedSetAroundBlockSize
>    0  15.4     26      4  TestMockDirectoryWrapper.testThreadSafetyInListAll
>    0   0.2    454      1  TestNodeLostTrigger.testTrigger
>    0   0.2    453      1  TestRecovery.stressLogReplay
>    0   0.2    505      1  TestReplicationHandler.testRateLimitedReplication
>    0   0.2    425      1  TestSolrCloudWithSecureImpersonation.testForwarding
>    0   0.9    461      4  TestSolrDeletionPolicy1.testNumCommitsConfigured
>    0   0.2    454      1  TestSystemIdResolver(suite)
>    0   0.2    451      1  TestV2Request.testCloudSolrClient
>    0   0.2    451      1  TestV2Request.testHttpSolrClient
>    0   9.1     77      7  TestWithCollection.testDeleteWithCollection
>    0   3.9     77      3  TestWithCollection.testMoveReplicaWithCollection
>
> So I don't know what I'm going to do here, we'll 

Re: Lucene/Solr 8.0

2018-08-01 Thread Robert Muir
My only other suggestion is we may want to get Nick's shape stuff into
the sandbox module at least for 8.0 so that it can be tested out. I
think it looks like that wouldn't delay any October target though?

On Wed, Aug 1, 2018 at 9:51 AM, Adrien Grand  wrote:
> I'd like to revive this thread now that these new optimizations for
> collection of top docs are more usable and enabled by default in
> IndexSearcher (https://issues.apache.org/jira/browse/LUCENE-8060). Any
> feedback about starting to work towards releasing 8.0 and targeting October
> 2018?
>
>
> Le jeu. 21 juin 2018 à 09:31, Adrien Grand  a écrit :
>>
>> Hi Robert,
>>
>> I agree we need to make it more usable before 8.0. I would also like to
>> improve ReqOptSumScorer (https://issues.apache.org/jira/browse/LUCENE-8204)
>> to leverage impacts so that queries that incorporate queries on feature
>> fields (https://issues.apache.org/jira/browse/LUCENE-8197) in an optional
>> clause are also fast.
>>
>> Le jeu. 21 juin 2018 à 03:06, Robert Muir  a écrit :
>>>
>>> How can the end user actually use the biggest new feature: impacts and
>>> BMW? As far as I can tell, the issue to actually implement the
>>> necessary API changes (IndexSearcher/TopDocs/etc) is still open and
>>> unresolved, although there are some interesting ideas on it. This
>>> seems like a really big missing piece, without a proper API, the stuff
>>> is not really usable. I also can't imagine a situation where the API
>>> could be introduced in a followup minor release because it would be
>>> too invasive.
>>>
>>> On Mon, Jun 18, 2018 at 1:19 PM, Adrien Grand  wrote:
>>> > Hi all,
>>> >
>>> > I would like to start discussing releasing Lucene/Solr 8.0. Lucene 8
>>> > already
>>> > has some good changes around scoring, notably cleanups to
>>> > similarities[1][2][3], indexing of impacts[4], and an implementation of
>>> > Block-Max WAND[5] which, once combined, allow to run queries faster
>>> > when
>>> > total hit counts are not requested.
>>> >
>>> > [1] https://issues.apache.org/jira/browse/LUCENE-8116
>>> > [2] https://issues.apache.org/jira/browse/LUCENE-8020
>>> > [3] https://issues.apache.org/jira/browse/LUCENE-8007
>>> > [4] https://issues.apache.org/jira/browse/LUCENE-4198
>>> > [5] https://issues.apache.org/jira/browse/LUCENE-8135
>>> >
>>> > In terms of bug fixes, there is also a bad relevancy bug[6] which is
>>> > only in
>>> > 8.0 because it required a breaking change[7] to be implemented.
>>> >
>>> > [6] https://issues.apache.org/jira/browse/LUCENE-8031
>>> > [7] https://issues.apache.org/jira/browse/LUCENE-8134
>>> >
>>> > As usual, doing a new major release will also help age out old codecs,
>>> > which
>>> > in-turn make maintenance easier: 8.0 will no longer need to care about
>>> > the
>>> > fact that some codecs were initially implemented with a random-access
>>> > API
>>> > for doc values, that pre-7.0 indices encoded norms differently, or that
>>> > pre-6.2 indices could not record an index sort.
>>> >
>>> > I also expect that we will come up with ideas of things to do for 8.0
>>> > as we
>>> > feel that the next major is getting closer. In terms of planning, I was
>>> > thinking that we could target something like october 2018, which would
>>> > be
>>> > 12-13 months after 7.0 and 3-4 months from now.
>>> >
>>> > From a Solr perspective, the main change I'm aware of that would be
>>> > worth
>>> > releasing a new major is the Star Burst effort. Is it something we want
>>> > to
>>> > get in for 8.0?
>>> >
>>> > Adrien
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>


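The Block-Max WAND work referenced in the Lucene/Solr 8.0 thread above (LUCENE-8135, LUCENE-8060) rests on a simple idea: when only the top-k hits are needed and total hit counts are not, whole blocks of postings can be skipped if their precomputed maximum score cannot beat the current k-th best score. A deliberately simplified, single-clause illustration in plain Python (real WAND coordinates several query clauses; all names here are invented for the sketch):

```python
import heapq

def top_k_with_block_max(blocks, k):
    """blocks: list of (block_max_score, [(doc_id, score), ...]).
    Return (top-k scores descending, number of docs actually scored),
    skipping blocks whose max score cannot enter the current top-k."""
    heap = []          # min-heap holding the k best scores seen so far
    scored = 0         # documents actually scored (instrumentation)
    for block_max, postings in blocks:
        if len(heap) == k and block_max <= heap[0]:
            continue   # block's upper bound can't beat the k-th score
        for _doc, score in postings:
            scored += 1
            if len(heap) < k:
                heapq.heappush(heap, score)
            elif score > heap[0]:
                heapq.heapreplace(heap, score)
    return sorted(heap, reverse=True), scored

blocks = [
    (5.0, [(1, 4.0), (2, 5.0)]),
    (2.0, [(3, 1.5), (4, 2.0)]),   # skipped: 2.0 can't beat the top-2
    (6.0, [(5, 6.0), (6, 3.0)]),
]
top, scored = top_k_with_block_max(blocks, k=2)
print(top)     # -> [6.0, 5.0]
print(scored)  # -> 4 (the middle block was never scored)
```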


Re: BadApple report. Seems like I'm wasting my time.

2018-08-01 Thread Erick Erickson
Shawn:

Trouble is there were 945 tests that failed at least once in the last
4 weeks. And the trend is all over the map on a weekly basis.

e-mail-2018-06-11.txt: There were 989 unannotated tests that failed
e-mail-2018-06-18.txt: There were 689 unannotated tests that failed
e-mail-2018-06-25.txt: There were 555 unannotated tests that failed
e-mail-2018-07-02.txt: There were 723 unannotated tests that failed
e-mail-2018-07-09.txt: There were 793 unannotated tests that failed
e-mail-2018-07-16.txt: There were 809 unannotated tests that failed
e-mail-2018-07-23.txt: There were 953 unannotated tests that failed
e-mail-2018-07-30.txt: There were 945 unannotated tests that failed

I'm BadApple'ing tests that fail every week for the last 4 weeks on
the theory that those are not temporary issues (hey, we all commit
code that breaks something then have to figure out why and fix).
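The triage rule above — BadApple only the tests that failed in every one of the last four weekly reports — amounts to intersecting the weekly failure sets. A minimal sketch in Java, with made-up test names (not actual report data):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class BadAppleTriage {
    /**
     * Returns the tests that appear in every weekly failure report,
     * i.e. the candidates for a BadApple annotation under the
     * "failed every week for the last N weeks" rule.
     */
    public static Set<String> persistentFailures(List<Set<String>> weeklyFailures) {
        // Start from week 1's failures and keep only tests that
        // also failed in every subsequent week.
        Set<String> result = new HashSet<>(weeklyFailures.get(0));
        for (Set<String> week : weeklyFailures.subList(1, weeklyFailures.size())) {
            result.retainAll(week);
        }
        return result;
    }
}
```

Tests that fail only once in the window drop out of the intersection, which is why one-off failures never get annotated under this rule.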

I also have the feeling that somewhere, somehow, our test framework is
making some assumptions that are invalid. Or too strict. Or too fast.
Or there's some fundamental issue with some of our classes. Or... The
number of sporadic issues where the Object Tracker spits stuff out, for
instance, screams that some assumption we're making, either in the code
or in the test framework, is flawed.

What I don't know is how to make visible progress. It's discouraging
to fix something and then next week have more tests fail for unrelated
reasons.

Visibility is the issue to me. We have no good way of saying "these
tests _just_ started failing" for a reason. As a quick experiment, I
extended the triage to 10 weeks (no attempt to ascertain whether these
tests even existed 10 weeks ago). Here are the tests that have _only_
failed in the last week, not the previous 9. BadApple'ing anything
that's only failed once seems like overkill.

Although the test that failed 77 times does rather stand out...

week  pct   runs  fails  test
0     0.2   460   1      CloudSolrClientTest.testVersionsAreReturned
0     0.2   466   1      ComputePlanActionTest.testSelectedCollections
0     0.2   464   1      ConfusionMatrixGeneratorTest.testGetConfusionMatrixWithBM25NB
0     8.1   37    3      IndexSizeTriggerTest(suite)
0     0.2   454   1      MBeansHandlerTest.testAddedMBeanDiff
0     0.2   454   1      MBeansHandlerTest.testDiff
0     0.2   455   1      MetricTriggerTest.test
0     0.2   455   1      MetricsHandlerTest.test
0     0.2   455   1      MetricsHandlerTest.testKeyMetrics
0     0.2   453   1      RequestHandlersTest.testInitCount
0     0.2   453   1      RequestHandlersTest.testStatistics
0     0.2   453   1      ScheduledTriggerIntegrationTest(suite)
0     0.2   451   1      SearchRateTriggerTest.testWaitForElapsed
0     0.2   425   1      SoftAutoCommitTest.testSoftCommitWithinAndHardCommitMaxTimeRapidAdds
0     14.7  525   77     StreamExpressionTest.testSignificantTermsStream
0     0.2   454   1      TestBadConfig(suite)
0     0.2   465   1      TestBlockJoin.testMultiChildQueriesOfDiffParentLevels
0     0.6   462   3      TestCloudCollectionsListeners.testCollectionDeletion
0     0.2   456   1      TestInfoStreamLogging(suite)
0     0.2   456   1      TestLazyCores.testLazySearch
0     0.2   473   1      TestLucene70DocValuesFormat.testSortedSetAroundBlockSize
0     15.4  26    4      TestMockDirectoryWrapper.testThreadSafetyInListAll
0     0.2   454   1      TestNodeLostTrigger.testTrigger
0     0.2   453   1      TestRecovery.stressLogReplay
0     0.2   505   1      TestReplicationHandler.testRateLimitedReplication
0     0.2   425   1      TestSolrCloudWithSecureImpersonation.testForwarding
0     0.9   461   4      TestSolrDeletionPolicy1.testNumCommitsConfigured
0     0.2   454   1      TestSystemIdResolver(suite)
0     0.2   451   1      TestV2Request.testCloudSolrClient
0     0.2   451   1      TestV2Request.testHttpSolrClient
0     9.1   77    7      TestWithCollection.testDeleteWithCollection
0     3.9   77    3      TestWithCollection.testMoveReplicaWithCollection

So I don't know what I'm going to do here, we'll see if I get more
optimistic when the fog lifts.

Erick

On Wed, Aug 1, 2018 at 7:15 AM, Shawn Heisey  wrote:
> On 7/30/2018 11:52 AM, Erick Erickson wrote:
>>
>> Is anybody paying the least attention to this or should I just stop
>> bothering?
>
>
> The job you're doing is thankless.  That's the nature of the work.  I'd love
> to have the time to really help you out. If only my employer didn't expect
> me to spend so much time *working*!
>
>> I'd hoped to get to a point where we could get at least semi-stable
>> and start whittling away at the backlog. But with an additional 63
>> tests to 

[jira] [Commented] (LUCENE-8439) DisjunctionMaxScorer should leverage sub scorers' per-block max scores

2018-08-01 Thread Jim Ferenczi (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565438#comment-16565438
 ] 

Jim Ferenczi commented on LUCENE-8439:
--

Thanks for looking Adrien. I pushed a new patch that factors out a BlockMaxDISI 
implementation and uses it only if the score mode is set to TOP_SCORES. 

{quote}
 I guess it works well because scores on title dominate the overall score?
{quote}

Yes, this optimization works best when the small field (here, title) dominates 
the overall score.
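As a rough illustration of the idea (a simplified sketch, not the actual Lucene patch): a disjunction-max scorer can skip an entire block when an upper bound derived from the sub scorers' per-block max scores cannot beat the current minimum competitive score. The bound below assumes a tie-break multiplier as in DisjunctionMaxQuery:

```java
public class DisMaxBlockBound {
    /**
     * Upper bound of a disjunction-max score over one block:
     * the largest sub scorer's block max, plus tieBreak times the
     * sum of the remaining sub scorers' block maxes.
     */
    public static float blockUpperBound(float[] subBlockMaxes, float tieBreak) {
        float max = 0f, sum = 0f;
        for (float m : subBlockMaxes) {
            max = Math.max(max, m);
            sum += m;
        }
        return max + tieBreak * (sum - max);
    }

    /** A block may be skipped when even its best possible score is not competitive. */
    public static boolean canSkipBlock(float[] subBlockMaxes, float tieBreak,
                                       float minCompetitiveScore) {
        return blockUpperBound(subBlockMaxes, tieBreak) < minCompetitiveScore;
    }
}
```

This is why the optimization pays off when one clause (e.g. title) dominates: its block max alone drives the bound, so many blocks become skippable once the top-k threshold rises.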

> DisjunctionMaxScorer should leverage sub scorers' per-block max scores
> --
>
> Key: LUCENE-8439
> URL: https://issues.apache.org/jira/browse/LUCENE-8439
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Jim Ferenczi
>Priority: Minor
> Attachments: LUCENE-8439.patch, LUCENE-8439.patch
>
>
> This issue is similar to https://issues.apache.org/jira/browse/LUCENE-8204 
> but for the DisjunctionMaxScorer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8439) DisjunctionMaxScorer should leverage sub scorers' per-block max scores

2018-08-01 Thread Jim Ferenczi (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Ferenczi updated LUCENE-8439:
-
Attachment: LUCENE-8439.patch

> DisjunctionMaxScorer should leverage sub scorers' per-block max scores
> --
>
> Key: LUCENE-8439
> URL: https://issues.apache.org/jira/browse/LUCENE-8439
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Jim Ferenczi
>Priority: Minor
> Attachments: LUCENE-8439.patch, LUCENE-8439.patch
>
>
> This issue is similar to https://issues.apache.org/jira/browse/LUCENE-8204 
> but for the DisjunctionMaxScorer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8441) Wrong index sort field type throws unexpected NullPointerException

2018-08-01 Thread Michael McCandless (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565418#comment-16565418
 ] 

Michael McCandless commented on LUCENE-8441:


+1, thanks for fixing so quickly [~jim.ferenczi]!

We could maybe improve the new per-DV-field-per-document check so that instead 
of doing a for loop over all index sort fields, we add a new member to the 
{{PerField}} in {{DefaultIndexingChain}} e.g. 
{{requiredDocValuesSortFieldType}} or so?  So we would do that for loop through 
all index sort fields only when creating a new {{PerField}} (first time this 
in-memory segment sees this field being indexed).

If that is non-null (meaning that field was included in the index sort), we 
check that it's the same type as what the user is now trying to index?

But this can come later ... it's just a small performance improvement over the 
functionally correct patch you created.  Thanks!
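A minimal toy sketch of that suggestion (hypothetical names and types, not the actual DefaultIndexingChain code): resolve the index-sort requirement once, when the per-field state is created, and reuse the cached value for every later document instead of looping over the index sort fields each time:

```java
import java.util.HashMap;
import java.util.Map;

public class PerFieldSortCheck {
    enum DocValuesType { NONE, NUMERIC, SORTED }

    /** Per-field state created the first time a segment sees a field. */
    static class PerField {
        final String name;
        // Null if the field is not part of the index sort;
        // resolved once at PerField creation, not per document.
        final DocValuesType requiredDocValuesSortType;

        PerField(String name, Map<String, DocValuesType> indexSortFields) {
            this.name = name;
            this.requiredDocValuesSortType = indexSortFields.get(name);
        }

        /** Cheap per-document check against the cached requirement. */
        boolean isConsistent(DocValuesType indexedType) {
            return requiredDocValuesSortType == null
                || requiredDocValuesSortType == indexedType;
        }
    }
}
```

The per-document cost drops to a null check plus an enum comparison, which is the small performance win described above.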

> Wrong index sort field type throws unexpected NullPointerException
> --
>
> Key: LUCENE-8441
> URL: https://issues.apache.org/jira/browse/LUCENE-8441
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Minor
> Attachments: LUCENE-8441.patch, LUCENE-8441.patch
>
>
> I came across this scary exception if you pass the wrong {{SortField.Type}} 
> for a field; I'll attach patch w/ small test case:
> {noformat}
> 1) testWrongSortFieldType(org.apache.lucene.index.TestIndexSorting)
> java.lang.NullPointerException
> at 
> __randomizedtesting.SeedInfo.seed([995FF58C7B184E8F:B0CC507647B2ED95]:0)
> at 
> org.apache.lucene.index.SortingTermVectorsConsumer.abort(SortingTermVectorsConsumer.java:87)
> at org.apache.lucene.index.TermsHash.abort(TermsHash.java:68)
> at 
> org.apache.lucene.index.DefaultIndexingChain.abort(DefaultIndexingChain.java:332)
> at 
> org.apache.lucene.index.DocumentsWriterPerThread.abort(DocumentsWriterPerThread.java:138)
> at 
> org.apache.lucene.index.DocumentsWriterPerThread.maybeAbort(DocumentsWriterPerThread.java:532)
> at 
> org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:524)
> at 
> org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:554)
> at 
> org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:719)
> at 
> org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:3201)
> at 
> org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3446)
> at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3411)
> at 
> org.apache.lucene.index.TestIndexSorting.testWrongSortFieldType(TestIndexSorting.java:2489)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12509) Improve SplitShardCmd performance and reliability

2018-08-01 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565410#comment-16565410
 ] 

ASF subversion and git services commented on SOLR-12509:


Commit 1133bf98a5fd075173efecfb75a51493fceb62b3 in lucene-solr's branch 
refs/heads/master from Andrzej Bialecki
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1133bf9 ]

SOLR-12509: Improve SplitShardCmd performance and reliability.


> Improve SplitShardCmd performance and reliability
> -
>
> Key: SOLR-12509
> URL: https://issues.apache.org/jira/browse/SOLR-12509
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-12509.patch, SOLR-12509.patch
>
>
> {{SplitShardCmd}} is currently quite complex.
> Shard splitting occurs on active shards, which are still being updated, so 
> the splitting has to involve several carefully orchestrated steps, making 
> sure that new sub-shard placeholders are properly created and visible, and 
> then also applying buffered updates to the split leaders and performing 
> recovery on sub-shard replicas.
> This process could be simplified in cases where collections are not actively 
> being updated or can tolerate a little downtime - we could put the shard 
> "offline", ie. disable writing while the splitting is in progress (in order 
> to avoid users' confusion we should disable writing to the whole collection).
> The actual index splitting could perhaps be improved to use 
> {{HardLinkCopyDirectoryWrapper}} for creating a copy of the index by 
> hard-linking existing index segments, and then applying deletes to the 
> documents that don't belong in a sub-shard. However, the resulting index 
> slices that replicas would have to pull would be the same size as the whole 
> shard.
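The hard-link-then-delete approach described above boils down to: copy the parent index cheaply, then in each sub-shard delete every document whose id hashes into the other half of the parent's hash range. A minimal sketch of that routing decision (simplified integer ranges, not Solr's actual MurmurHash-based router):

```java
public class SubShardRouter {
    /**
     * Splits the parent hash range [min, max] into two halves and
     * returns which sub-shard (0 or 1) a document's hash falls into.
     */
    public static int subShardFor(int hash, int min, int max) {
        int mid = min + (max - min) / 2;
        return hash <= mid ? 0 : 1;
    }

    /** In sub-shard `sub`, a hard-linked copy of a document is deleted
     *  if the document routes to the other half of the range. */
    public static boolean shouldDelete(int docHash, int min, int max, int sub) {
        return subShardFor(docHash, min, max) != sub;
    }
}
```

Note the trade-off mentioned in the description: the deletes shrink the live document count but not the segment files, so replicas still pull slices the size of the whole shard.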



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-12344) SolrSlf4jReporter doesn't set MDC context

2018-08-01 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  reassigned SOLR-12344:


Assignee: Andrzej Bialecki 

> SolrSlf4jReporter doesn't set MDC context
> -
>
> Key: SOLR-12344
> URL: https://issues.apache.org/jira/browse/SOLR-12344
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Varun Thacker
>Assignee: Andrzej Bialecki 
>Priority: Major
>
> I setup a slf4j reporter like this on master
> solr.xml
> {code:java}
> <metrics>
>   <reporter name="update_logger"
>     class="org.apache.solr.metrics.reporters.SolrSlf4jReporter">
>     <int name="period">1</int>
>     <str name="filter">UPDATE./update.requestTimes</str>
>     <str name="logger">update_logger</str>
>   </reporter>
> </metrics>
> {code}
> log4j2.xml
> {code:java}
> <?xml version="1.0" encoding="UTF-8"?>
> <Configuration>
>   <Appenders>
>     <Console name="STDERR" target="SYSTEM_ERR">
>       <PatternLayout>
>         <Pattern>
>           %-4r [%t] %-5p %c %x [%X{collection} %X{shard} %X{replica}
> %X{core}] %c; %m%n
>         </Pattern>
>       </PatternLayout>
>     </Console>
>     <RollingFile
>         name="RollingFile"
>         fileName="${sys:solr.log.dir}/solr.log"
>         filePattern="${sys:solr.log.dir}/solr.log.%i" >
>       <PatternLayout>
>         <Pattern>
>           %-5p - %d{yyyy-MM-dd HH:mm:ss.SSS}; [%X{collection} %X{shard}
> %X{replica} %X{core}] %c; %m%n
>         </Pattern>
>       </PatternLayout>
>       <Policies>
>         <OnStartupTriggeringPolicy />
>       </Policies>
>     </RollingFile>
>     <RollingFile
>         name="RollingMetricFile"
>         fileName="${sys:solr.log.dir}/solr_metric.log"
>         filePattern="${sys:solr.log.dir}/solr_metric.log.%i" >
>       <PatternLayout>
>         <Pattern>
>           %-5p - %d{yyyy-MM-dd HH:mm:ss.SSS}; [%X{collection} %X{shard}
> %X{replica} %X{core}] %c; %m%n
>         </Pattern>
>       </PatternLayout>
>       <Policies>
>         <OnStartupTriggeringPolicy />
>       </Policies>
>     </RollingFile>
>   </Appenders>
>   <Loggers>
>     <Logger name="update_logger" level="info" additivity="false">
>       <AppenderRef ref="RollingMetricFile"/>
>     </Logger>
>     <Root level="info">
>       <AppenderRef ref="RollingFile"/>
>       <AppenderRef ref="STDERR"/>
>     </Root>
>   </Loggers>
> </Configuration>
> {code}
> The output I get from the solr_metric.log file is like this
> {code:java}
> INFO  - 2018-05-11 15:31:16.009; [   ] update_logger; type=TIMER, 
> name=UPDATE./update.requestTimes, count=0, min=0.0, max=0.0, mean=0.0, 
> stddev=0.0, median=0.0, p75=0.0, p95=0.0, p98=0.0, p99=0.0, p999=0.0, 
> mean_rate=0.0, m1=0.0, m5=0.0, m15=0.0, rate_unit=events/second, 
> duration_unit=milliseconds
> INFO  - 2018-05-11 15:31:17.010; [   ] update_logger; type=TIMER, 
> name=UPDATE./update.requestTimes, count=0, min=0.0, max=0.0, mean=0.0, 
> stddev=0.0, median=0.0, p75=0.0, p95=0.0, p98=0.0, p99=0.0, p999=0.0, 
> mean_rate=0.0, m1=0.0, m5=0.0, m15=0.0, rate_unit=events/second, 
> duration_unit=milliseconds
> INFO  - 2018-05-11 15:31:18.010; [   ] update_logger; type=TIMER, 
> name=UPDATE./update.requestTimes, count=0, min=0.0, max=0.0, mean=0.0, 
> stddev=0.0, median=0.0, p75=0.0, p95=0.0, p98=0.0, p99=0.0, p999=0.0, 
> mean_rate=0.0, m1=0.0, m5=0.0, m15=0.0, rate_unit=events/second, 
> duration_unit=milliseconds{code}
> On a JVM which has multiple cores, this will become impossible to tell where 
> it's coming from if MDC context is not set



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #416: WIP: SOLR-12519

2018-08-01 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/416#discussion_r206901995
  
--- Diff: 
solr/core/src/test/org/apache/solr/response/transform/TestDeeplyNestedChildDocTransformer.java
 ---
@@ -168,35 +172,57 @@ private static String id() {
 return "" + counter.incrementAndGet();
   }
 
+  private static void cleanSolrDocumentFields(SolrDocument input) {
+for(Map.Entry<String, Object> field: input) {
+  Object val = field.getValue();
+  if(val instanceof Collection) {
+Object newVals = ((Collection) val).stream().map((item) -> 
(cleanIndexableField(item)))
+.collect(Collectors.toList());
+input.setField(field.getKey(), newVals);
+continue;
+  } else {
+input.setField(field.getKey(), 
cleanIndexableField(field.getValue()));
+  }
+}
+  }
+
+  private static Object cleanIndexableField(Object field) {
+if(field instanceof IndexableField) {
+  return ((IndexableField) field).stringValue();
+} else if(field instanceof SolrDocument) {
+  cleanSolrDocumentFields((SolrDocument) field);
+}
+return field;
+  }
+
   private static String grandChildDocTemplate(int id) {
 int docNum = id / 8; // the index of docs sent to solr in the 
AddUpdateCommand. e.g. first doc is 0
-return 
"SolrDocument{id=stored,indexed,tokenized,omitNorms,indexOptions=DOCS, type_s=[stored,indexed,tokenized,omitNorms,indexOptions=DOCS], 
name_s=[stored,indexed,tokenized,omitNorms,indexOptions=DOCS], " +
-
"_root_=stored,indexed,tokenized,omitNorms,indexOptions=DOCS<_root_:" + id + 
">, " +
-
"toppings=[SolrDocument{id=stored,indexed,tokenized,omitNorms,indexOptions=DOCS, 
type_s=[stored,indexed,tokenized,omitNorms,indexOptions=DOCS], 
_nest_parent_=stored,indexed,tokenized,omitNorms,indexOptions=DOCS<_nest_parent_:"
 + id + ">, " +
-
"_root_=stored,indexed,tokenized,omitNorms,indexOptions=DOCS<_root_:" + id + 
">, " +
-
"ingredients=[SolrDocument{id=stored,indexed,tokenized,omitNorms,indexOptions=DOCS, 
name_s=[stored,indexed,tokenized,omitNorms,indexOptions=DOCS], " +
-
"_nest_parent_=stored,indexed,tokenized,omitNorms,indexOptions=DOCS<_nest_parent_:"
 + (id + 3) + ">, 
_root_=stored,indexed,tokenized,omitNorms,indexOptions=DOCS<_root_:" + id + 
">}]}, " +
-
"SolrDocument{id=stored,indexed,tokenized,omitNorms,indexOptions=DOCS, 
type_s=[stored,indexed,tokenized,omitNorms,indexOptions=DOCS],
 
_nest_parent_=stored,indexed,tokenized,omitNorms,indexOptions=DOCS<_nest_parent_:"
 + id + ">, " +
-
"_root_=stored,indexed,tokenized,omitNorms,indexOptions=DOCS<_root_:" + id + 
">, " +
-
"ingredients=[SolrDocument{id=stored,indexed,tokenized,omitNorms,indexOptions=DOCS, 
name_s=[stored,indexed,tokenized,omitNorms,indexOptions=DOCS], 
_nest_parent_=stored,indexed,tokenized,omitNorms,indexOptions=DOCS<_nest_parent_:"
 + (id + 5)+ ">, " +
-
"_root_=stored,indexed,tokenized,omitNorms,indexOptions=DOCS<_root_:" + id + 
">}, " +
-
"SolrDocument{id=stored,indexed,tokenized,omitNorms,indexOptions=DOCS, 
name_s=[stored,indexed,tokenized,omitNorms,indexOptions=DOCS], 
_nest_parent_=stored,indexed,tokenized,omitNorms,indexOptions=DOCS<_nest_parent_:"
 + (id + 5) + ">, " +
-
"_root_=stored,indexed,tokenized,omitNorms,indexOptions=DOCS<_root_:" + id + 
">}]}]}";
+return "SolrDocument{id="+ id + ", type_s=[" + types[docNum % 
types.length] + "], name_s=[" + names[docNum % names.length] + "], " +
--- End diff --

Keeping one ID is fine; we certainly don't need additional ones.  Maybe 
consider using letters or names for IDs instead of incrementing counters.  
Anything to help make reading a doc/child structure more readily apparent.  
Anything to reduce string interpolation here is also a win IMO.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: query other solr collection from within a solr plugin

2018-08-01 Thread Nicolas Franck
Alright, as I expected.

Thanks for all your help!

> On 1 Aug 2018, at 16:22, Shawn Heisey  wrote:
> 
> On 8/1/2018 12:59 AM, Nicolas Franck wrote:
>> @Mikhail Khludnev: thanks for your response
>> 
>> You mean something like this (source collection is "collection1", and I want 
>> to query "collection2"):
>> 
>>   SolrClient solrClient = new EmbeddedSolrServer(req.getCore());
>>   ModifiableSolrParams newParams = new ModifiableSolrParams();
>>   newParams.add("collection","collection2");
>>   SolrDocumentList docs = solrClient.getById(ids, newParams);
>> 
>> which is basically the same as:
>> 
>> http://localhost:8983/collection1_shard1_replica_n1/select?collection=collection2&qt=/get&id=myid
>> 
> 
> An instance of EmbeddedSolrServer has no http access. Also, it can't do 
> SolrCloud -- because SolrCloud requires http. EmbeddedSolrServer is a 
> complete Solr server running in standalone (not cloud) mode, without http 
> access.
> 
> If you're doing this code within a Solr plugin, then you can't start an 
> EmbeddedSolrServer on one of the cores from the Solr install.  Any cores you 
> try to use will already be open, so the embedded server will not be able to 
> open them.  Since you're already running inside a Solr server, there's no 
> reason to start *another* Solr server.
> 
> I don't have any other ideas for you.  I've written a couple of update 
> processors for Solr, but nothing that does queries.
> 
>> but in both cases, I get this error:
>> 
>> org.apache.solr.common.SolrException: Can't find shard 'collection1_shard1' 
>> at 
>> org.apache.solr.handler.component.RealTimeGetComponent.sliceToShards(RealTimeGetComponent.java:897)
>>  at
>> 
>> 
>> apparently it forgets its own cloud information? What am I missing here?
> 
> As already mentioned, EmbeddedSolrServer can't do SolrCloud.
> 
> Thanks,
> Shawn
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: query other solr collection from within a solr plugin

2018-08-01 Thread Shawn Heisey

On 8/1/2018 12:59 AM, Nicolas Franck wrote:

@Mikhail Khludnev: thanks for your response

You mean something like this (source collection is "collection1", and 
I want to query "collection2"):


  SolrClient solrClient = new EmbeddedSolrServer(req.getCore());
  ModifiableSolrParams newParams = new ModifiableSolrParams();
  newParams.add("collection","collection2");
  SolrDocumentList docs = solrClient.getById(ids, newParams);

which is basically the same as:

http://localhost:8983/collection1_shard1_replica_n1/select?collection=collection2&qt=/get&id=myid



An instance of EmbeddedSolrServer has no http access. Also, it can't do 
SolrCloud -- because SolrCloud requires http. EmbeddedSolrServer is a 
complete Solr server running in standalone (not cloud) mode, without 
http access.


If you're doing this code within a Solr plugin, then you can't start an 
EmbeddedSolrServer on one of the cores from the Solr install.  Any cores 
you try to use will already be open, so the embedded server will not be 
able to open them.  Since you're already running inside a Solr server, 
there's no reason to start *another* Solr server.


I don't have any other ideas for you.  I've written a couple of update 
processors for Solr, but nothing that does queries.



but in both cases, I get this error:

org.apache.solr.common.SolrException: Can't find shard 
'collection1_shard1' at 
org.apache.solr.handler.component.RealTimeGetComponent.sliceToShards(RealTimeGetComponent.java:897) 
at



apparently it forgets its own cloud information? What am I missing here?


As already mentioned, EmbeddedSolrServer can't do SolrCloud.

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 1113 - Unstable

2018-08-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1113/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/275/consoleText

[repro] Revision: a9f129190f9065c8775a628df181fb53248db488

[repro] Ant options: -DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9
[repro] Repro line:  ant test  -Dtestcase=CdcrBidirectionalTest 
-Dtests.method=testBiDir -Dtests.seed=BB4DA94708A8BE16 -Dtests.multiplier=2 
-Dtests.locale=ar-LY -Dtests.timezone=Africa/Nouakchott -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=MoveReplicaHDFSTest 
-Dtests.method=testFailedMove -Dtests.seed=BB4DA94708A8BE16 
-Dtests.multiplier=2 -Dtests.locale=ro-RO -Dtests.timezone=Asia/Phnom_Penh 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
e56c8722ce99338e980b32e100b96f2c19af9ddf
[repro] git fetch
[repro] git checkout a9f129190f9065c8775a628df181fb53248db488

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   CdcrBidirectionalTest
[repro]   MoveReplicaHDFSTest
[repro] ant compile-test

[...truncated  lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.CdcrBidirectionalTest|*.MoveReplicaHDFSTest" 
-Dtests.showOutput=onerror 
-DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9 
-Dtests.seed=BB4DA94708A8BE16 -Dtests.multiplier=2 -Dtests.locale=ar-LY 
-Dtests.timezone=Africa/Nouakchott -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 4432 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.cdcr.CdcrBidirectionalTest
[repro]   1/5 failed: org.apache.solr.cloud.MoveReplicaHDFSTest
[repro] git checkout e56c8722ce99338e980b32e100b96f2c19af9ddf

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Re: BadApple report. Seems like I'm wasting my time.

2018-08-01 Thread Shawn Heisey

On 7/30/2018 11:52 AM, Erick Erickson wrote:

Is anybody paying the least attention to this or should I just stop bothering?


The job you're doing is thankless.  That's the nature of the work.  I'd 
love to have the time to really help you out. If only my employer didn't 
expect me to spend so much time *working*!



I'd hoped to get to a point where we could get at least semi-stable
and start whittling away at the backlog. But with an additional 63
tests to BadApple (a little fudging here because of some issues with
counting suite-level tests .vs. individual test) it doesn't seem like
we're going in the right direction at all.

Unless there's some value here, defined by people stepping up and at
least looking (and once a week is not asking too much) at the names of
the tests I'm going to BadApple to see if they ring any bells, I'll
stop wasting my time.


Here's a crazy thought, which might be something you already 
considered:  Try to figure out which tests pass consistently and 
BadApple *all the rest* of the Solr tests.  If there are any Lucene 
tests that fail with some regularity, BadApple those too.


There are probably disadvantages to this approach, but here are the 
advantages I can think of:  1) The noise stops quickly. 2) Future heroic 
efforts will result in measurable progress -- to quote you, "whittling 
away at the backlog."


Thank you a million times over for all the care and effort you've put 
into this.


Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_172) - Build # 2463 - Still Unstable!

2018-08-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2463/
Java: 32bit/jdk1.8.0_172 -server -XX:+UseConcMarkSweepGC

19 tests failed.
FAILED:  org.apache.lucene.geo.TestGeoUtils.testBoundingBoxOpto

Error Message:
5

Stack Trace:
java.lang.ArrayIndexOutOfBoundsException: 5
at 
__randomizedtesting.SeedInfo.seed([5B728A0FD81F3A23:24CBD2C0F832552C]:0)
at org.apache.lucene.geo.GeoTestUtil.nextPointNear(GeoTestUtil.java:249)
at org.apache.lucene.geo.GeoTestUtil.nextPointNear(GeoTestUtil.java:223)
at 
org.apache.lucene.geo.TestGeoUtils.testBoundingBoxOpto(TestGeoUtils.java:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.lucene.geo.TestGeoUtils.testHaversinOpto

Error Message:
5

Stack Trace:
java.lang.ArrayIndexOutOfBoundsException: 5
at 
__randomizedtesting.SeedInfo.seed([5B728A0FD81F3A23:EDA971FF1A7180C9]:0)
at org.apache.lucene.geo.GeoTestUtil.nextPointNear(GeoTestUtil.java:257)
at org.apache.lucene.geo.GeoTestUtil.nextPointNear(GeoTestUtil.java:223)
at 
org.apache.lucene.geo.TestGeoUtils.testHaversinOpto(TestGeoUtils.java:135)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 

[GitHub] lucene-solr pull request #416: WIP: SOLR-12519

2018-08-01 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/416#discussion_r206892630
  
--- Diff: 
solr/core/src/java/org/apache/solr/response/transform/DeeplyNestedChildDocTransformer.java
 ---
@@ -0,0 +1,224 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.response.transform;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import com.google.common.collect.ArrayListMultimap;
+import com.google.common.collect.Multimap;
+import org.apache.lucene.index.DocValues;
+import org.apache.lucene.index.IndexableField;
+import org.apache.lucene.index.LeafReaderContext;
+import org.apache.lucene.index.SortedDocValues;
+import org.apache.lucene.search.Query;
+import org.apache.lucene.search.Sort;
+import org.apache.lucene.search.SortField;
+import org.apache.lucene.search.join.BitSetProducer;
+import org.apache.lucene.search.join.ToChildBlockJoinQuery;
+import org.apache.lucene.util.BytesRef;
+import org.apache.solr.common.SolrDocument;
+import org.apache.solr.request.SolrQueryRequest;
+import org.apache.solr.response.DocsStreamer;
+import org.apache.solr.schema.FieldType;
+import org.apache.solr.schema.IndexSchema;
+import org.apache.solr.schema.SchemaField;
+import org.apache.solr.search.DocIterator;
+import org.apache.solr.search.DocList;
+import org.apache.solr.search.SolrDocumentFetcher;
+import org.apache.solr.search.SolrIndexSearcher;
+import org.apache.solr.search.SolrReturnFields;
+
+import static 
org.apache.solr.response.transform.ChildDocTransformerFactory.NUM_SEP_CHAR;
+import static 
org.apache.solr.response.transform.ChildDocTransformerFactory.PATH_SEP_CHAR;
+import static org.apache.solr.schema.IndexSchema.NEST_PATH_FIELD_NAME;
+
+class DeeplyNestedChildDocTransformer extends DocTransformer {
--- End diff --

Yes; it can do both easily enough I think?  A separate method could take 
over for the existing/legacy case.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12519) Support Deeply Nested Docs In Child Documents Transformer

2018-08-01 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565364#comment-16565364
 ] 

David Smiley commented on SOLR-12519:
-

That sounds useful – definitely a separate issue.  Feel free to file it; even 
if you're not sure it'll go anywhere.  Whether it's "too high" or not is 
application/scenario dependent.  Most apps need to update their documents, and 
supporting block-join configurations would be a nice convenience over making a 
client have to resend the block.

> Support Deeply Nested Docs In Child Documents Transformer
> -
>
> Key: SOLR-12519
> URL: https://issues.apache.org/jira/browse/SOLR-12519
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Major
> Attachments: SOLR-12519-no-commit.patch
>
>  Time Spent: 13h 10m
>  Remaining Estimate: 0h
>
> As discussed in SOLR-12298, to make use of the meta-data fields in 
> SOLR-12441, there needs to be a smarter child document transformer, which 
> provides the ability to rebuild the original nested documents' structure.
> In addition, I propose that the transformer also have the ability to 
> bring only some of the original hierarchy, to prevent unnecessary block join 
> queries. e.g.
> {code}  {"a": "b", "c": [ {"e": "f"}, {"e": "g"} , {"h": "i"} ]} {code}
>  In case my query is for all the children of "a:b" that contain the key "e" 
> in them, the query will be broken into two parts:
>  1. The parent query "a:b"
>  2. The child query "e:*".
> If the only children flag is on, the transformer will return the following 
> documents:
>  {code}[ {"e": "f"}, {"e": "g"} ]{code}
> In case the flag was not turned on (perhaps the default state), the whole 
> document hierarchy will be returned, containing only the matching children:
> {code}{"a": "b", "c": [ {"e": "f"}, {"e": "g"} ]}{code}
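
The pruning behavior described above can be sketched outside Solr as a small filter over a generic nested map. This is a hypothetical illustration of the "only children" semantics (class and method names are invented for the sketch), not the actual transformer code:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ChildPruneSketch {

  /**
   * Returns a copy of {@code doc} in which every list of child maps keeps
   * only the children that contain {@code childKey}; scalar fields are kept
   * as-is, and lists whose children were all pruned are dropped.
   */
  @SuppressWarnings("unchecked")
  static Map<String, Object> prune(Map<String, Object> doc, String childKey) {
    Map<String, Object> out = new LinkedHashMap<>();
    for (Map.Entry<String, Object> e : doc.entrySet()) {
      if (e.getValue() instanceof List) {
        List<Object> kept = new ArrayList<>();
        for (Object child : (List<Object>) e.getValue()) {
          if (child instanceof Map && ((Map<String, Object>) child).containsKey(childKey)) {
            kept.add(child);
          }
        }
        if (!kept.isEmpty()) {
          out.put(e.getKey(), kept);
        }
      } else {
        out.put(e.getKey(), e.getValue());
      }
    }
    return out;
  }

  public static void main(String[] args) {
    // The document from the description: {"a": "b", "c": [{"e": "f"}, {"e": "g"}, {"h": "i"}]}
    Map<String, Object> doc = new LinkedHashMap<>();
    doc.put("a", "b");
    List<Object> children = new ArrayList<>();
    children.add(Map.of("e", "f"));
    children.add(Map.of("e", "g"));
    children.add(Map.of("h", "i"));
    doc.put("c", children);

    Map<String, Object> pruned = prune(doc, "e");
    // The {"h": "i"} child is dropped; the rest of the hierarchy is kept.
    if (((List<?>) pruned.get("c")).size() != 2) throw new AssertionError();
    System.out.println(pruned);
  }
}
```

In the real transformer the child query ("e:*") would be a block-join query against the index rather than an in-memory key check, but the shape of the result is the same.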



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 8.0

2018-08-01 Thread Adrien Grand
I'd like to revive this thread now that these new optimizations for
collection of top docs are more usable and enabled by default in
IndexSearcher (https://issues.apache.org/jira/browse/LUCENE-8060). Any
feedback about starting to work towards releasing 8.0 and targeting October
2018?

On Thu, Jun 21, 2018 at 09:31, Adrien Grand wrote:

> Hi Robert,
>
> I agree we need to make it more usable before 8.0. I would also like to
> improve ReqOptSumScorer (https://issues.apache.org/jira/browse/LUCENE-8204)
> to leverage impacts so that queries that incorporate queries on feature
> fields (https://issues.apache.org/jira/browse/LUCENE-8197) in an optional
> clause are also fast.
>
>> On Thu, Jun 21, 2018 at 03:06, Robert Muir wrote:
>
>> How can the end user actually use the biggest new feature: impacts and
>> BMW? As far as I can tell, the issue to actually implement the
>> necessary API changes (IndexSearcher/TopDocs/etc) is still open and
>> unresolved, although there are some interesting ideas on it. This
>> seems like a really big missing piece, without a proper API, the stuff
>> is not really usable. I also can't imagine a situation where the API
>> could be introduced in a followup minor release because it would be
>> too invasive.
>>
>> On Mon, Jun 18, 2018 at 1:19 PM, Adrien Grand  wrote:
>> > Hi all,
>> >
>> > I would like to start discussing releasing Lucene/Solr 8.0. Lucene 8
>> already
>> > has some good changes around scoring, notably cleanups to
>> > similarities[1][2][3], indexing of impacts[4], and an implementation of
>> > Block-Max WAND[5] which, once combined, allow queries to run faster when
>> > total hit counts are not requested.
>> >
>> > [1] https://issues.apache.org/jira/browse/LUCENE-8116
>> > [2] https://issues.apache.org/jira/browse/LUCENE-8020
>> > [3] https://issues.apache.org/jira/browse/LUCENE-8007
>> > [4] https://issues.apache.org/jira/browse/LUCENE-4198
>> > [5] https://issues.apache.org/jira/browse/LUCENE-8135
>> >
>> > In terms of bug fixes, there is also a bad relevancy bug[6] which is
>> only in
>> > 8.0 because it required a breaking change[7] to be implemented.
>> >
>> > [6] https://issues.apache.org/jira/browse/LUCENE-8031
>> > [7] https://issues.apache.org/jira/browse/LUCENE-8134
>> >
>> > As usual, doing a new major release will also help age out old codecs,
>> which
>> > in turn makes maintenance easier: 8.0 will no longer need to care about
>> the
>> > fact that some codecs were initially implemented with a random-access
>> API
>> > for doc values, that pre-7.0 indices encoded norms differently, or that
>> > pre-6.2 indices could not record an index sort.
>> >
>> > I also expect that we will come up with ideas of things to do for 8.0
>> as we
>> > feel that the next major is getting closer. In terms of planning, I was
>> > thinking that we could target something like October 2018, which would
>> be
>> > 12-13 months after 7.0 and 3-4 months from now.
>> >
>> > From a Solr perspective, the main change I'm aware of that would be
>> worth
>> > releasing a new major is the Star Burst effort. Is it something we want
>> to
>> > get in for 8.0?
>> >
>> > Adrien
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>


Re: [1/2] lucene-solr:master: Make the nightly test smaller so that it does not fail with GC overhead exceeded (OOM). Clean up random number fetching to make it shorter.

2018-08-01 Thread Adrien Grand
Sorry Dawid, this commit triggered lots of off-by-one errors: randomInt(X)
returns numbers up to X inclusive, while random.nextInt(X) returns numbers up
to X-1. So I reverted it to stop the flood of test failures on our internal CI
server. I'll re-apply the part that decreases the test size.

On Wed, Aug 1, 2018 at 14:05,  wrote:

> Repository: lucene-solr
> Updated Branches:
>   refs/heads/branch_7x fd2cc195f -> 7396da542
>   refs/heads/master 5d7df -> 3203e99d8
>
>
> Make the nightly test smaller so that it does not fail with GC overhead
> exceeded (OOM). Clean up random number fetching to make it shorter.
>
>
> Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
> Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/3203e99d
> Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/3203e99d
> Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/3203e99d
>
> Branch: refs/heads/master
> Commit: 3203e99d8fbcaac3458fcf882d4ec229f97dfa43
> Parents: 5d7
> Author: Dawid Weiss 
> Authored: Wed Aug 1 13:49:39 2018 +0200
> Committer: Dawid Weiss 
> Committed: Wed Aug 1 14:05:02 2018 +0200
>
> --
>  .../lucene/document/TestLatLonShapeQueries.java | 15 +++--
>  .../java/org/apache/lucene/geo/GeoTestUtil.java | 70 ++--
>  2 files changed, 43 insertions(+), 42 deletions(-)
> --
>
>
>
> http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/3203e99d/lucene/sandbox/src/test/org/apache/lucene/document/TestLatLonShapeQueries.java
> --
> diff --git
> a/lucene/sandbox/src/test/org/apache/lucene/document/TestLatLonShapeQueries.java
> b/lucene/sandbox/src/test/org/apache/lucene/document/TestLatLonShapeQueries.java
> index 03941b9..21d4e83 100644
> ---
> a/lucene/sandbox/src/test/org/apache/lucene/document/TestLatLonShapeQueries.java
> +++
> b/lucene/sandbox/src/test/org/apache/lucene/document/TestLatLonShapeQueries.java
> @@ -45,6 +45,8 @@ import org.apache.lucene.util.FixedBitSet;
>  import org.apache.lucene.util.IOUtils;
>  import org.apache.lucene.util.LuceneTestCase;
>
> +import static
> com.carrotsearch.randomizedtesting.RandomizedTest.randomBoolean;
> +import static com.carrotsearch.randomizedtesting.RandomizedTest.randomInt;
>  import static org.apache.lucene.geo.GeoEncodingUtils.decodeLatitude;
>  import static org.apache.lucene.geo.GeoEncodingUtils.decodeLongitude;
>  import static org.apache.lucene.geo.GeoEncodingUtils.encodeLatitude;
> @@ -104,7 +106,7 @@ public class TestLatLonShapeQueries extends
> LuceneTestCase {
>
>@Nightly
>public void testRandomBig() throws Exception {
> -doTestRandom(20);
> +doTestRandom(5);
>}
>
>private void doTestRandom(int count) throws Exception {
> @@ -116,7 +118,7 @@ public class TestLatLonShapeQueries extends
> LuceneTestCase {
>
>  Polygon[] polygons = new Polygon[numPolygons];
>  for (int id = 0; id < numPolygons; ++id) {
> -  int x = random().nextInt(20);
> +  int x = randomInt(20);
>if (x == 17) {
>  polygons[id] = null;
>  if (VERBOSE) {
> @@ -127,6 +129,7 @@ public class TestLatLonShapeQueries extends
> LuceneTestCase {
>  polygons[id] = GeoTestUtil.nextPolygon();
>}
>  }
> +
>  verify(polygons);
>}
>
> @@ -173,8 +176,8 @@ public class TestLatLonShapeQueries extends
> LuceneTestCase {
>  poly2D[id] = Polygon2D.create(quantizePolygon(polygons[id]));
>}
>w.addDocument(doc);
> -  if (id > 0 && random().nextInt(100) == 42) {
> -int idToDelete = random().nextInt(id);
> +  if (id > 0 && randomInt(100) == 42) {
> +int idToDelete = randomInt(id);
>  w.deleteDocuments(new Term("id", ""+idToDelete));
>  deleted.add(idToDelete);
>  if (VERBOSE) {
> @@ -183,7 +186,7 @@ public class TestLatLonShapeQueries extends
> LuceneTestCase {
>}
>  }
>
> -if (random().nextBoolean()) {
> +if (randomBoolean()) {
>w.forceMerge(1);
>  }
>  final IndexReader r = DirectoryReader.open(w);
> @@ -198,7 +201,7 @@ public class TestLatLonShapeQueries extends
> LuceneTestCase {
>
>  for (int iter = 0; iter < iters; ++iter) {
>if (VERBOSE) {
> -System.out.println("\nTEST: iter=" + (iter+1) + " of " + iters +
> " s=" + s);
> +System.out.println("\nTEST: iter=" + (iter + 1) + " of " + iters
> + " s=" + s);
>}
>
>// BBox
>
>
> http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/3203e99d/lucene/test-framework/src/java/org/apache/lucene/geo/GeoTestUtil.java
> --
> diff --git
> a/lucene/test-framework/src/java/org/apache/lucene/geo/GeoTestUtil.java
> b/lucene/test-framework/src/java/org/apache/lucene/geo/GeoTestUtil.java
> index 

[jira] [Commented] (SOLR-8207) Modernise cloud tab on Admin UI

2018-08-01 Thread Cassandra Targett (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565350#comment-16565350
 ] 

Cassandra Targett commented on SOLR-8207:
-

+1 Jan, it's looking great IMO. I can't think of any substantive feedback at 
the moment.

If no one else has any major issues with the current state, I think it would be 
great if we could get it into the next release (7.5). If more ideas come up 
later, we can iterate on those in later versions.

> Modernise cloud tab on Admin UI
> ---
>
> Key: SOLR-8207
> URL: https://issues.apache.org/jira/browse/SOLR-8207
> Project: Solr
>  Issue Type: Improvement
>  Components: Admin UI
>Affects Versions: 5.3
>Reporter: Upayavira
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: node-compact.png, node-details.png, node-hostcolumn.png, 
> node-toggle-row-numdocs.png, nodes-tab-real.png, nodes-tab.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The various sub-tabs of the "Cloud tab" were designed before anyone was 
> making real use of SolrCloud, and when we didn't really know the use-cases we 
> would need to support. I would argue that, whilst they are pretty (and 
> clever) they aren't really fit for purpose (with the exception of tree view).
> Issues:
> * Radial view doesn't scale beyond a small number of nodes/collections
> * Paging on the graph view is based on collections - so a collection with 
> many replicas won't be subject to pagination
> * The Dump feature is kinda redundant and should be removed
> * There is now a major overlap in functionality with the new Collections tab
> What I'd propose is that we:
>  * promote the tree tab to top level
>  * remove the graph views and the dump tab
>  * add a new Nodes tab
> This nodes tab would complement the collections tab - showing nodes, and 
> their associated replicas/collections. From this view, it would be possible 
> to add/remove replicas and to see the status of nodes. It would also be 
> possible to filter nodes by status: "show me only up nodes", "show me nodes 
> that are in trouble", "show me nodes that have leaders on them", etc.
> Presumably, if we have APIs to support it, we might have a "decommission 
> node" option, that would ensure that no replicas on this node are leaders, 
> and then remove all replicas from the node, ready for it to be removed from 
> the cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8207) Modernise cloud tab on Admin UI

2018-08-01 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565336#comment-16565336
 ] 

Jan Høydahl commented on SOLR-8207:
---

Added {{ng-model-options='\{ debounce: 500 }'}} to all input boxes to avoid 
reloading all metrics for every keystroke in filter input boxes.

> Modernise cloud tab on Admin UI
> ---
>
> Key: SOLR-8207
> URL: https://issues.apache.org/jira/browse/SOLR-8207
> Project: Solr
>  Issue Type: Improvement
>  Components: Admin UI
>Affects Versions: 5.3
>Reporter: Upayavira
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: node-compact.png, node-details.png, node-hostcolumn.png, 
> node-toggle-row-numdocs.png, nodes-tab-real.png, nodes-tab.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The various sub-tabs of the "Cloud tab" were designed before anyone was 
> making real use of SolrCloud, and when we didn't really know the use-cases we 
> would need to support. I would argue that, whilst they are pretty (and 
> clever) they aren't really fit for purpose (with the exception of tree view).
> Issues:
> * Radial view doesn't scale beyond a small number of nodes/collections
> * Paging on the graph view is based on collections - so a collection with 
> many replicas won't be subject to pagination
> * The Dump feature is kinda redundant and should be removed
> * There is now a major overlap in functionality with the new Collections tab
> What I'd propose is that we:
>  * promote the tree tab to top level
>  * remove the graph views and the dump tab
>  * add a new Nodes tab
> This nodes tab would complement the collections tab - showing nodes, and 
> their associated replicas/collections. From this view, it would be possible 
> to add/remove replicas and to see the status of nodes. It would also be 
> possible to filter nodes by status: "show me only up nodes", "show me nodes 
> that are in trouble", "show me nodes that have leaders on them", etc.
> Presumably, if we have APIs to support it, we might have a "decommission 
> node" option, that would ensure that no replicas on this node are leaders, 
> and then remove all replicas from the node, ready for it to be removed from 
> the cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 1111 - Unstable

2018-08-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro//

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/112/consoleText

[repro] Revision: 96e985a3483f10537ea835a339f89dd10839dae3

[repro] Repro line:  ant test  -Dtestcase=TestTriggerIntegration 
-Dtests.method=testCooldown -Dtests.seed=3874D4689786D9E0 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=es-AR 
-Dtests.timezone=Europe/Monaco -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestTriggerIntegration 
-Dtests.method=testSearchRate -Dtests.seed=3874D4689786D9E0 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=es-AR -Dtests.timezone=Europe/Monaco -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testTrigger -Dtests.seed=3874D4689786D9E0 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=it 
-Dtests.timezone=Asia/Dubai -Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testSplitIntegration -Dtests.seed=3874D4689786D9E0 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=it 
-Dtests.timezone=Asia/Dubai -Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=SearchRateTriggerIntegrationTest 
-Dtests.method=testDeleteNode -Dtests.seed=3874D4689786D9E0 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=pt 
-Dtests.timezone=CNT -Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=ScheduledMaintenanceTriggerTest 
-Dtests.method=testInactiveShardCleanup -Dtests.seed=3874D4689786D9E0 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=cs 
-Dtests.timezone=Africa/Blantyre -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=ShardSplitTest -Dtests.method=test 
-Dtests.seed=3874D4689786D9E0 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=de-LU -Dtests.timezone=America/Costa_Rica 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=ShardSplitTest 
-Dtests.method=testSplitWithChaosMonkey -Dtests.seed=3874D4689786D9E0 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=de-LU -Dtests.timezone=America/Costa_Rica -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestLargeCluster 
-Dtests.method=testNodeLost -Dtests.seed=3874D4689786D9E0 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=es-PY 
-Dtests.timezone=Atlantic/Cape_Verde -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=MaxSizeAutoCommitTest 
-Dtests.method=deleteTest -Dtests.seed=3874D4689786D9E0 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=vi-VN 
-Dtests.timezone=Asia/Jakarta -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=StreamDecoratorTest 
-Dtests.method=testParallelExecutorStream -Dtests.seed=EF6F3C58E3F3D0F8 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=tr 
-Dtests.timezone=Africa/Kinshasa -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=StreamDecoratorTest 
-Dtests.seed=EF6F3C58E3F3D0F8 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=tr -Dtests.timezone=Africa/Kinshasa 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
5d7df7b0d1b122976a10cf05ace13a9ad6e1
[repro] git fetch
[repro] git checkout 96e985a3483f10537ea835a339f89dd10839dae3

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestTriggerIntegration
[repro]   IndexSizeTriggerTest
[repro]   ShardSplitTest
[repro]   MaxSizeAutoCommitTest
[repro]   SearchRateTriggerIntegrationTest
[repro]   TestLargeCluster
[repro]   ScheduledMaintenanceTriggerTest
[repro]solr/solrj
[repro]   StreamDecoratorTest
[repro] ant compile-test

[...truncated 3316 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=35 
-Dtests.class="*.TestTriggerIntegration|*.IndexSizeTriggerTest|*.ShardSplitTest|*.MaxSizeAutoCommitTest|*.SearchRateTriggerIntegrationTest|*.TestLargeCluster|*.ScheduledMaintenanceTriggerTest"
 -Dtests.showOutput=onerror  -Dtests.seed=3874D4689786D9E0 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=es-AR 
-Dtests.timezone=Europe/Monaco 

[jira] [Commented] (SOLR-8207) Modernise cloud tab on Admin UI

2018-08-01 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565314#comment-16565314
 ] 

Jan Høydahl commented on SOLR-8207:
---

Fixed paging.
 * Next/prev buttons now show up at right places, even when filtered by node 
name.
 * Reset to first page if filters change
 * Removed 'health' filter. You can only filter by host/node or collection
 * Clarified that paging is per host, not per node
 * Always display filtering input boxes

Pushed changes to AWS, please test again at 
[http://34.253.124.99:9000/solr/#/~cloud] 

Hint: Try typing "filter" in the collections filter to narrow down to the one 
node that has that collection. Try filtering node names by IP address or port 
number.

> Modernise cloud tab on Admin UI
> ---
>
> Key: SOLR-8207
> URL: https://issues.apache.org/jira/browse/SOLR-8207
> Project: Solr
>  Issue Type: Improvement
>  Components: Admin UI
>Affects Versions: 5.3
>Reporter: Upayavira
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: node-compact.png, node-details.png, node-hostcolumn.png, 
> node-toggle-row-numdocs.png, nodes-tab-real.png, nodes-tab.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The various sub-tabs of the "Cloud tab" were designed before anyone was 
> making real use of SolrCloud, and when we didn't really know the use-cases we 
> would need to support. I would argue that, whilst they are pretty (and 
> clever) they aren't really fit for purpose (with the exception of tree view).
> Issues:
> * Radial view doesn't scale beyond a small number of nodes/collections
> * Paging on the graph view is based on collections - so a collection with 
> many replicas won't be subject to pagination
> * The Dump feature is kinda redundant and should be removed
> * There is now a major overlap in functionality with the new Collections tab
> What I'd propose is that we:
>  * promote the tree tab to top level
>  * remove the graph views and the dump tab
>  * add a new Nodes tab
> This nodes tab would complement the collections tab - showing nodes, and 
> their associated replicas/collections. From this view, it would be possible 
> to add/remove replicas and to see the status of nodes. It would also be 
> possible to filter nodes by status: "show me only up nodes", "show me nodes 
> that are in trouble", "show me nodes that have leaders on them", etc.
> Presumably, if we have APIs to support it, we might have a "decommission 
> node" option, that would ensure that no replicas on this node are leaders, 
> and then remove all replicas from the node, ready for it to be removed from 
> the cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-Tests-master - Build # 2646 - Still Unstable

2018-08-01 Thread Adrien Grand
I just pushed a fix for this failure.

On Wed, Aug 1, 2018 at 14:52, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2646/
>
> 1 tests failed.
> FAILED:
> org.apache.lucene.search.highlight.HighlighterTest.testToChildBlockJoinQuery
>
> Error Message:
> Cannot get impacts until the iterator is positioned or advanceShallow has
> been called
>
> Stack Trace:
> java.lang.AssertionError: Cannot get impacts until the iterator is
> positioned or advanceShallow has been called
> at
> __randomizedtesting.SeedInfo.seed([ACC8B0D6AB636F1E:8A12765808A8DAF2]:0)
> at
> org.apache.lucene.index.AssertingLeafReader$AssertingImpactsEnum.getImpacts(AssertingLeafReader.java:475)
> at
> org.apache.lucene.search.MaxScoreCache.getLevel(MaxScoreCache.java:74)
> at
> org.apache.lucene.search.ImpactsDISI.getMaxScore(ImpactsDISI.java:87)
> at
> org.apache.lucene.search.TermScorer.getMaxScore(TermScorer.java:85)
> at
> org.apache.lucene.search.join.ToChildBlockJoinQuery$ToChildBlockJoinScorer.getMaxScore(ToChildBlockJoinQuery.java:285)
> at
> org.apache.lucene.search.AssertingScorer.getMaxScore(AssertingScorer.java:94)
> at
> org.apache.lucene.search.BlockMaxConjunctionScorer.lambda$new$0(BlockMaxConjunctionScorer.java:51)
> at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> at java.util.TimSort.sort(TimSort.java:220)
> at java.util.Arrays.sort(Arrays.java:1438)
> at
> org.apache.lucene.search.BlockMaxConjunctionScorer.<init>(BlockMaxConjunctionScorer.java:60)
> at
> org.apache.lucene.search.Boolean2ScorerSupplier.req(Boolean2ScorerSupplier.java:157)
> at
> org.apache.lucene.search.Boolean2ScorerSupplier.get(Boolean2ScorerSupplier.java:93)
> at
> org.apache.lucene.search.BooleanWeight.scorer(BooleanWeight.java:344)
> at org.apache.lucene.search.Weight.bulkScorer(Weight.java:177)
> at
> org.apache.lucene.search.BooleanWeight.bulkScorer(BooleanWeight.java:326)
> at
> org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:88)
> at
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:649)
> at
> org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:72)
> at
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:443)
> at
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:567)
> at
> org.apache.lucene.search.IndexSearcher.searchAfter(IndexSearcher.java:419)
> at
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:430)
> at
> org.apache.lucene.search.highlight.HighlighterTest.testToChildBlockJoinQuery(HighlighterTest.java:667)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
> at
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
> at
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> at
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
> at
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> at
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
> at
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
> at
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
>  

[JENKINS] Lucene-Solr-Tests-master - Build # 2646 - Still Unstable

2018-08-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2646/

1 tests failed.
FAILED:  
org.apache.lucene.search.highlight.HighlighterTest.testToChildBlockJoinQuery

Error Message:
Cannot get impacts until the iterator is positioned or advanceShallow has been 
called

Stack Trace:
java.lang.AssertionError: Cannot get impacts until the iterator is positioned 
or advanceShallow has been called
at 
__randomizedtesting.SeedInfo.seed([ACC8B0D6AB636F1E:8A12765808A8DAF2]:0)
at 
org.apache.lucene.index.AssertingLeafReader$AssertingImpactsEnum.getImpacts(AssertingLeafReader.java:475)
at 
org.apache.lucene.search.MaxScoreCache.getLevel(MaxScoreCache.java:74)
at org.apache.lucene.search.ImpactsDISI.getMaxScore(ImpactsDISI.java:87)
at org.apache.lucene.search.TermScorer.getMaxScore(TermScorer.java:85)
at 
org.apache.lucene.search.join.ToChildBlockJoinQuery$ToChildBlockJoinScorer.getMaxScore(ToChildBlockJoinQuery.java:285)
at 
org.apache.lucene.search.AssertingScorer.getMaxScore(AssertingScorer.java:94)
at 
org.apache.lucene.search.BlockMaxConjunctionScorer.lambda$new$0(BlockMaxConjunctionScorer.java:51)
at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
at java.util.TimSort.sort(TimSort.java:220)
at java.util.Arrays.sort(Arrays.java:1438)
at 
org.apache.lucene.search.BlockMaxConjunctionScorer.<init>(BlockMaxConjunctionScorer.java:60)
at 
org.apache.lucene.search.Boolean2ScorerSupplier.req(Boolean2ScorerSupplier.java:157)
at 
org.apache.lucene.search.Boolean2ScorerSupplier.get(Boolean2ScorerSupplier.java:93)
at org.apache.lucene.search.BooleanWeight.scorer(BooleanWeight.java:344)
at org.apache.lucene.search.Weight.bulkScorer(Weight.java:177)
at 
org.apache.lucene.search.BooleanWeight.bulkScorer(BooleanWeight.java:326)
at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:88)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:649)
at 
org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:72)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:443)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:567)
at 
org.apache.lucene.search.IndexSearcher.searchAfter(IndexSearcher.java:419)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:430)
at 
org.apache.lucene.search.highlight.HighlighterTest.testToChildBlockJoinQuery(HighlighterTest.java:667)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_172) - Build # 22576 - Unstable!

2018-08-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22576/
Java: 64bit/jdk1.8.0_172 -XX:+UseCompressedOops -XX:+UseG1GC

5 tests failed.
FAILED:  
org.apache.lucene.search.highlight.HighlighterTest.testToChildBlockJoinQuery

Error Message:
Cannot get impacts until the iterator is positioned or advanceShallow has been called

Stack Trace:
java.lang.AssertionError: Cannot get impacts until the iterator is positioned or advanceShallow has been called
	at __randomizedtesting.SeedInfo.seed([9EBB038DF4A9BF3F:B861C50357620AD3]:0)
	at org.apache.lucene.index.AssertingLeafReader$AssertingImpactsEnum.getImpacts(AssertingLeafReader.java:475)
	at org.apache.lucene.search.MaxScoreCache.getLevel(MaxScoreCache.java:74)
	at org.apache.lucene.search.ImpactsDISI.getMaxScore(ImpactsDISI.java:87)
	at org.apache.lucene.search.TermScorer.getMaxScore(TermScorer.java:85)
	at org.apache.lucene.search.join.ToChildBlockJoinQuery$ToChildBlockJoinScorer.getMaxScore(ToChildBlockJoinQuery.java:285)
	at org.apache.lucene.search.AssertingScorer.getMaxScore(AssertingScorer.java:94)
	at org.apache.lucene.search.BlockMaxConjunctionScorer.lambda$new$0(BlockMaxConjunctionScorer.java:51)
	at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
	at java.util.TimSort.sort(TimSort.java:220)
	at java.util.Arrays.sort(Arrays.java:1438)
	at org.apache.lucene.search.BlockMaxConjunctionScorer.<init>(BlockMaxConjunctionScorer.java:60)
	at org.apache.lucene.search.Boolean2ScorerSupplier.req(Boolean2ScorerSupplier.java:157)
	at org.apache.lucene.search.Boolean2ScorerSupplier.get(Boolean2ScorerSupplier.java:93)
	at org.apache.lucene.search.BooleanWeight.scorer(BooleanWeight.java:344)
	at org.apache.lucene.search.Weight.bulkScorer(Weight.java:177)
	at org.apache.lucene.search.BooleanWeight.bulkScorer(BooleanWeight.java:326)
	at org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:88)
	at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:649)
	at org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:72)
	at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:443)
	at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:567)
	at org.apache.lucene.search.IndexSearcher.searchAfter(IndexSearcher.java:419)
	at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:430)
	at org.apache.lucene.search.highlight.HighlighterTest.testToChildBlockJoinQuery(HighlighterTest.java:667)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at 
