[JENKINS] Lucene-Solr-BadApples-7.x-Linux (64bit/jdk1.8.0_162) - Build # 6 - Still Unstable!

2018-03-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-7.x-Linux/6/
Java: 64bit/jdk1.8.0_162 -XX:-UseCompressedOops -XX:+UseParallelGC

35 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.AutoAddReplicasIntegrationTest.testSimple

Error Message:
Waiting for collection testSimple1 null Live Nodes: [127.0.0.1:37383_solr, 
127.0.0.1:46825_solr] Last available state: 
DocCollection(testSimple1//collections/testSimple1/state.json/7)={ 
"pullReplicas":"0", "replicationFactor":"2", "shards":{ "shard1":{ 
"range":"8000-", "state":"active", "replicas":{ "core_node3":{ 
"core":"testSimple1_shard1_replica_n1", 
"base_url":"https://127.0.0.1:38695/solr", 
"node_name":"127.0.0.1:38695_solr", "state":"down", "type":"NRT"}, 
"core_node5":{ "core":"testSimple1_shard1_replica_n2", 
"base_url":"https://127.0.0.1:37383/solr", 
"node_name":"127.0.0.1:37383_solr", "state":"active", "type":"NRT", 
"leader":"true"}}}, "shard2":{ "range":"0-7fff", "state":"active", 
"replicas":{ "core_node7":{ "core":"testSimple1_shard2_replica_n4", 
"base_url":"https://127.0.0.1:38695/solr", 
"node_name":"127.0.0.1:38695_solr", "state":"down", "type":"NRT"}, 
"core_node8":{ "core":"testSimple1_shard2_replica_n6", 
"base_url":"https://127.0.0.1:37383/solr", 
"node_name":"127.0.0.1:37383_solr", "state":"active", "type":"NRT", 
"leader":"true"}}}}, "router":{"name":"compositeId"}, 
"maxShardsPerNode":"2", "autoAddReplicas":"true", "nrtReplicas":"2", 
"tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Waiting for collection testSimple1
null
Live Nodes: [127.0.0.1:37383_solr, 127.0.0.1:46825_solr]
Last available state: 
DocCollection(testSimple1//collections/testSimple1/state.json/7)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{
    "shard1":{
      "range":"8000-",
      "state":"active",
      "replicas":{
        "core_node3":{
          "core":"testSimple1_shard1_replica_n1",
          "base_url":"https://127.0.0.1:38695/solr",
          "node_name":"127.0.0.1:38695_solr",
          "state":"down",
          "type":"NRT"},
        "core_node5":{
          "core":"testSimple1_shard1_replica_n2",
          "base_url":"https://127.0.0.1:37383/solr",
          "node_name":"127.0.0.1:37383_solr",
          "state":"active",
          "type":"NRT",
          "leader":"true"}}},
    "shard2":{
      "range":"0-7fff",
      "state":"active",
      "replicas":{
        "core_node7":{
          "core":"testSimple1_shard2_replica_n4",
          "base_url":"https://127.0.0.1:38695/solr",
          "node_name":"127.0.0.1:38695_solr",
          "state":"down",
          "type":"NRT"},
        "core_node8":{
          "core":"testSimple1_shard2_replica_n6",
          "base_url":"https://127.0.0.1:37383/solr",
          "node_name":"127.0.0.1:37383_solr",
          "state":"active",
          "type":"NRT",
          "leader":"true"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"2",
  "autoAddReplicas":"true",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([901286DE2958861F:A8A1A2200EAB52CE]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:269)
at 
org.apache.solr.cloud.autoscaling.AutoAddReplicasIntegrationTest.testSimple(AutoAddReplicasIntegrationTest.java:94)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1722 - Still unstable!

2018-03-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1722/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

4 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest.testInactiveShardCleanup

Error Message:
should be at least one inactive event

Stack Trace:
java.lang.AssertionError: should be at least one inactive event
at 
__randomizedtesting.SeedInfo.seed([C98EE163CFC77B06:D4A22111AE845C0D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest.testInactiveShardCleanup(ScheduledMaintenanceTriggerTest.java:218)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest.testInactiveShardCleanup

Error Message:
missing cleanup event

Stack Trace:
java.lang.AssertionError: missing cleanup event
  

[JENKINS] Lucene-Solr-BadApples-master-Linux (64bit/jdk-9.0.4) - Build # 6 - Still Unstable!

2018-03-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-master-Linux/6/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

34 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.AutoAddReplicasIntegrationTest.testSimple

Error Message:
Waiting for collection testSimple1 null Live Nodes: [127.0.0.1:37587_solr, 
127.0.0.1:41115_solr] Last available state: 
DocCollection(testSimple1//collections/testSimple1/state.json/8)={ 
"pullReplicas":"0", "replicationFactor":"2", "shards":{ "shard1":{ 
"range":"8000-", "state":"active", "replicas":{ "core_node3":{ 
"core":"testSimple1_shard1_replica_n1", 
"base_url":"https://127.0.0.1:37587/solr", 
"node_name":"127.0.0.1:37587_solr", "state":"active", "type":"NRT", 
"leader":"true"}, "core_node5":{ 
"core":"testSimple1_shard1_replica_n2", 
"base_url":"https://127.0.0.1:38603/solr", 
"node_name":"127.0.0.1:38603_solr", "state":"down", "type":"NRT"}}}, 
"shard2":{ "range":"0-7fff", "state":"active", "replicas":{ 
"core_node7":{ "core":"testSimple1_shard2_replica_n4", 
"base_url":"https://127.0.0.1:37587/solr", 
"node_name":"127.0.0.1:37587_solr", "state":"active", "type":"NRT", 
"leader":"true"}, "core_node8":{ "core":"testSimple1_shard2_replica_n6", 
"base_url":"https://127.0.0.1:38603/solr", 
"node_name":"127.0.0.1:38603_solr", "state":"down", "type":"NRT"}}}}, 
"router":{"name":"compositeId"}, "maxShardsPerNode":"2", 
"autoAddReplicas":"true", "nrtReplicas":"2", "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Waiting for collection testSimple1
null
Live Nodes: [127.0.0.1:37587_solr, 127.0.0.1:41115_solr]
Last available state: 
DocCollection(testSimple1//collections/testSimple1/state.json/8)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{
    "shard1":{
      "range":"8000-",
      "state":"active",
      "replicas":{
        "core_node3":{
          "core":"testSimple1_shard1_replica_n1",
          "base_url":"https://127.0.0.1:37587/solr",
          "node_name":"127.0.0.1:37587_solr",
          "state":"active",
          "type":"NRT",
          "leader":"true"},
        "core_node5":{
          "core":"testSimple1_shard1_replica_n2",
          "base_url":"https://127.0.0.1:38603/solr",
          "node_name":"127.0.0.1:38603_solr",
          "state":"down",
          "type":"NRT"}}},
    "shard2":{
      "range":"0-7fff",
      "state":"active",
      "replicas":{
        "core_node7":{
          "core":"testSimple1_shard2_replica_n4",
          "base_url":"https://127.0.0.1:37587/solr",
          "node_name":"127.0.0.1:37587_solr",
          "state":"active",
          "type":"NRT",
          "leader":"true"},
        "core_node8":{
          "core":"testSimple1_shard2_replica_n6",
          "base_url":"https://127.0.0.1:38603/solr",
          "node_name":"127.0.0.1:38603_solr",
          "state":"down",
          "type":"NRT"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"2",
  "autoAddReplicas":"true",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([8E468A9B0C6E4AF4:B6F5AE652B9D9E25]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:269)
at 
org.apache.solr.cloud.autoscaling.AutoAddReplicasIntegrationTest.testSimple(AutoAddReplicasIntegrationTest.java:94)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 

[JENKINS] Lucene-Solr-repro - Build # 229 - Unstable

2018-03-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/229/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/9/consoleText

[repro] Revision: 78097d2098ef3a4dc6107feb5cbd66d61920a43d

[repro] Repro line:  ant test  -Dtestcase=TriggerIntegrationTest 
-Dtests.method=testEventQueue -Dtests.seed=A86D2A7D5E1CA120 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=en-MT -Dtests.timezone=America/Virgin -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=AtomicUpdateProcessorFactoryTest 
-Dtests.method=testMultipleThreads -Dtests.seed=A86D2A7D5E1CA120 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=sr-CS -Dtests.timezone=Antarctica/Palmer -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestReplicationHandler 
-Dtests.method=doTestIndexFetchOnMasterRestart -Dtests.seed=A86D2A7D5E1CA120 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=ko-KR -Dtests.timezone=Africa/Luanda -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestJmxIntegration 
-Dtests.method=testJmxOnCoreReload -Dtests.seed=A86D2A7D5E1CA120 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=it-CH -Dtests.timezone=Africa/Algiers -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=ScheduledMaintenanceTriggerTest 
-Dtests.method=testInactiveShardCleanup -Dtests.seed=A86D2A7D5E1CA120 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=fi-FI -Dtests.timezone=Antarctica/Palmer -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestLargeCluster 
-Dtests.seed=A86D2A7D5E1CA120 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=ar-EG -Dtests.timezone=America/Matamoros 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestLTRReRankingPipeline 
-Dtests.method=testDifferentTopN -Dtests.seed=726283AAA62ED3B7 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=es-MX -Dtests.timezone=America/Mexico_City -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
2eeed51cdf006bdee7dec87b6adf144e7cc0d56e
[repro] git fetch
[repro] git checkout 78097d2098ef3a4dc6107feb5cbd66d61920a43d

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]    solr/core
[repro]   ScheduledMaintenanceTriggerTest
[repro]   AtomicUpdateProcessorFactoryTest
[repro]   TestJmxIntegration
[repro]   TestReplicationHandler
[repro]   TestLargeCluster
[repro]   TriggerIntegrationTest
[repro]    solr/contrib/ltr
[repro]   TestLTRReRankingPipeline
[repro] ant compile-test

[...truncated 3292 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=30 
-Dtests.class="*.ScheduledMaintenanceTriggerTest|*.AtomicUpdateProcessorFactoryTest|*.TestJmxIntegration|*.TestReplicationHandler|*.TestLargeCluster|*.TriggerIntegrationTest"
 -Dtests.showOutput=onerror  -Dtests.seed=A86D2A7D5E1CA120 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=fi-FI 
-Dtests.timezone=Antarctica/Palmer -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 61051 lines...]
[repro] Setting last failure code to 256

[repro] ant compile-test

[...truncated 566 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestLTRReRankingPipeline" -Dtests.showOutput=onerror  
-Dtests.seed=726283AAA62ED3B7 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=es-MX 
-Dtests.timezone=America/Mexico_City -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 135 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   1/5 failed: org.apache.solr.core.TestJmxIntegration
[repro]   1/5 failed: 
org.apache.solr.update.processor.AtomicUpdateProcessorFactoryTest
[repro]   2/5 failed: 
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest
[repro]   2/5 failed: org.apache.solr.cloud.autoscaling.TriggerIntegrationTest
[repro]   3/5 failed: org.apache.solr.cloud.autoscaling.sim.TestLargeCluster
[repro]   3/5 failed: org.apache.solr.handler.TestReplicationHandler
[repro]   5/5 failed: org.apache.solr.ltr.TestLTRReRankingPipeline

[repro] Re-testing 100% failures at the tip of master
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]    solr/contrib/ltr
[repro]   TestLTRReRankingPipeline
[repro] ant compile-test

[...truncated 2563 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_162) - Build # 21607 - Unstable!

2018-03-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21607/
Java: 64bit/jdk1.8.0_162 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:43463/solr/MoveReplicaHDFSTest_failed_coll_true, 
http://127.0.0.1:39567/solr/MoveReplicaHDFSTest_failed_coll_true]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[http://127.0.0.1:43463/solr/MoveReplicaHDFSTest_failed_coll_true, 
http://127.0.0.1:39567/solr/MoveReplicaHDFSTest_failed_coll_true]
at 
__randomizedtesting.SeedInfo.seed([B73FC48CFD2B6CAF:1DF2177E4AF8B97F]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1104)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:991)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:309)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-12051) Election timeout when no replicas are qualified to become leader

2018-03-09 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393941#comment-16393941
 ] 

Cao Manh Dat commented on SOLR-12051:
-

Thanks [~shalinmangar], [~varunthacker]

> Election timeout when no replicas are qualified to become leader
> 
>
> Key: SOLR-12051
> URL: https://issues.apache.org/jira/browse/SOLR-12051
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: 7.3, master (8.0)
>
> Attachments: SOLR-12051.patch
>
>
> After SOLR-12011 was committed, we can run into a case where no active 
> replicas are qualified to become the leader. The only two solutions for users 
> in this case are
>  * using the FORCE_LEADER API
>  * bringing back the old leader
> This ticket will introduce a leader election timeout so that the current 
> active replicas can ignore the lost updates and go ahead and become the 
> leader. I think it is better, and less confusing for users, to reuse 
> {{leaderVoteWait}} for this, since its current use (waiting for replicas to 
> come up before leader election) is no longer needed.
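
Purely as illustration, a standalone Java sketch of the timeout rule described
above follows. This is not Solr's actual ElectionContext code; the class name,
method names, and polling loop are assumptions made for this example, and the
error log only mirrors the "error log in case of data loss" commits seen
elsewhere in this thread.

    import java.util.concurrent.TimeUnit;

    /** Standalone sketch of the election-timeout idea; not Solr's ElectionContext. */
    public class ElectionTimeoutSketch {

      /**
       * Decide whether this replica may take leadership now.
       * myTerm/highestTerm are per-shard replica terms (SOLR-11702);
       * timeoutMs plays the role proposed for leaderVoteWait above.
       */
      static boolean mayBecomeLeader(long myTerm, long highestTerm,
                                     long waitedMs, long timeoutMs) {
        if (myTerm >= highestTerm) {
          return true;                  // fully in sync: lead immediately
        }
        if (waitedMs >= timeoutMs) {
          // No better-qualified replica appeared in time: accept possible data
          // loss, log it loudly, and let this replica take over anyway.
          System.err.println("Potential data loss: electing leader with term "
              + myTerm + " < highest term " + highestTerm);
          return true;
        }
        return false;                   // keep waiting for an in-sync replica
      }

      public static void main(String[] args) throws InterruptedException {
        long timeoutMs = TimeUnit.SECONDS.toMillis(3);  // stand-in for leaderVoteWait
        long start = System.currentTimeMillis();
        long myTerm = 0, highestTerm = 1;               // this replica missed an update
        while (!mayBecomeLeader(myTerm, highestTerm,
            System.currentTimeMillis() - start, timeoutMs)) {
          Thread.sleep(200);            // real code would watch ZooKeeper instead of polling
        }
        System.out.println("Became leader after "
            + (System.currentTimeMillis() - start) + " ms");
      }
    }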



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk1.8.0_144) - Build # 494 - Failure!

2018-03-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/494/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.index.TestIndexFileDeleter

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestIndexFileDeleter_12B49D999A749E8F-001\tempDir-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestIndexFileDeleter_12B49D999A749E8F-001\tempDir-001

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestIndexFileDeleter_12B49D999A749E8F-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestIndexFileDeleter_12B49D999A749E8F-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestIndexFileDeleter_12B49D999A749E8F-001\tempDir-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestIndexFileDeleter_12B49D999A749E8F-001\tempDir-001
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestIndexFileDeleter_12B49D999A749E8F-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestIndexFileDeleter_12B49D999A749E8F-001

at __randomizedtesting.SeedInfo.seed([12B49D999A749E8F]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.TestDistributedSearch.test

Error Message:
Error from server at http://127.0.0.1:63933//collection1: 
java.lang.NullPointerException  at 
org.apache.solr.search.grouping.distributed.responseprocessor.StoredFieldsShardResponseProcessor.process(StoredFieldsShardResponseProcessor.java:38)
  at 
org.apache.solr.handler.component.QueryComponent.handleGroupedResponses(QueryComponent.java:579)
  at 
org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:562)
  at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:423)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:195)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)  at 
org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)  at 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:517)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:384)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:330)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533) 
 at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
  at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)  
at 

[jira] [Commented] (SOLR-12051) Election timeout when no replicas are qualified to become leader

2018-03-09 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393928#comment-16393928
 ] 

Shalin Shekhar Mangar commented on SOLR-12051:
--

Thanks Varun. I pushed fixes to both branches.

> Election timeout when no replicas are qualified to become leader
> 
>
> Key: SOLR-12051
> URL: https://issues.apache.org/jira/browse/SOLR-12051
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: 7.3, master (8.0)
>
> Attachments: SOLR-12051.patch
>
>
> After SOLR-12011 was committed, we can run into a case where no active 
> replicas are qualified to become the leader. The only two solutions for users 
> in this case are
>  * using the FORCE_LEADER API
>  * bringing back the old leader
> This ticket will introduce a leader election timeout so that the current 
> active replicas can ignore the lost updates and go ahead and become the 
> leader. I think it is better, and less confusing for users, to reuse 
> {{leaderVoteWait}} for this, since its current use (waiting for replicas to 
> come up before leader election) is no longer needed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12011) Consistence problem when in-sync replicas are DOWN

2018-03-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393925#comment-16393925
 ] 

ASF subversion and git services commented on SOLR-12011:


Commit 40660ade9d296bacae4b7a2e23364da8aeae7b35 in lucene-solr's branch 
refs/heads/branch_7x from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=40660ad ]

SOLR-12011: Remove unused imports

(cherry picked from commit e47bf8b)


> Consistence problem when in-sync replicas are DOWN
> --
>
> Key: SOLR-12011
> URL: https://issues.apache.org/jira/browse/SOLR-12011
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: 7.3, master (8.0)
>
> Attachments: SOLR-12011.patch, SOLR-12011.patch, SOLR-12011.patch, 
> SOLR-12011.patch, SOLR-12011.patch
>
>
> Currently, we can hit a consistency problem when in-sync replicas are DOWN. 
> For example:
>  1. A collection with 1 shard has 1 leader and 2 replicas
>  2. The nodes containing the 2 replicas go down
>  3. The leader receives an update A successfully
>  4. The node containing the leader goes down
>  5. The 2 replicas come back
>  6. One of them becomes leader --> but neither should become leader, since 
> they missed update A
> A solution to this issue:
>  * The idea here is that the term value of each replica (SOLR-11702) is 
> enough to tell whether a replica has received the latest updates or not. 
> Therefore only replicas with the highest term can become the leader.
>  * There are a couple of things that need to be done on this issue
>  ** When the leader receives its first update, its term should be changed 
> from 0 -> 1, so further replicas added to the same shard won't be able to 
> become leader (their term = 0) until they finish recovery
>  ** For DOWN replicas, the leader also needs to check (in DUP.finish()) that 
> those replicas have a term lower than the leader's before returning results 
> to users
>  ** Looking at the term value of a replica alone is not enough to tell us 
> whether that replica is in sync with the leader, because the replica might 
> not have finished the recovery process. We need to introduce another flag 
> (stored on the shard terms node in ZK) to tell us whether the replica has 
> finished recovery. It will look like this:
>  *** {"core_node1" : 1, "core_node2" : 0} — (when core_node2 starts recovery) 
> --->
>  *** {"core_node1" : 1, "core_node2" : 1, "core_node2_recovering" : 1} — 
> (when core_node2 finishes recovery) --->
>  *** {"core_node1" : 1, "core_node2" : 1}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12077) Admin UI -- support autoAddReplicas during collection creation

2018-03-09 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-12077.
--
Resolution: Fixed

> Admin UI -- support autoAddReplicas during collection creation
> --
>
> Key: SOLR-12077
> URL: https://issues.apache.org/jira/browse/SOLR-12077
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI, AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: 7.3, master (8.0)
>
> Attachments: SOLR-12077.patch
>
>
> We should add the autoAddReplicas parameter to the advanced options of the 
> collection creation dialogue.
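
For context, the proposed checkbox would ultimately drive the existing
autoAddReplicas parameter of the Collections API CREATE command. A minimal
Java sketch of that call follows; the host, port, and collection name are
placeholders only.

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    /** Minimal sketch: create a collection with autoAddReplicas=true via the Collections API. */
    public class CreateWithAutoAddReplicas {
      public static void main(String[] args) throws Exception {
        // Placeholder host/port and collection name; adjust for a real cluster.
        String url = "http://localhost:8983/solr/admin/collections"
            + "?action=CREATE&name=testAutoAdd&numShards=2&replicationFactor=2"
            + "&autoAddReplicas=true&wt=json";
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        try (InputStream in = conn.getInputStream()) {
          byte[] buf = new byte[8192];
          for (int n; (n = in.read(buf)) != -1; ) {
            System.out.write(buf, 0, n);   // echo the JSON response
          }
        }
      }
    }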



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12077) Admin UI -- support autoAddReplicas during collection creation

2018-03-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393926#comment-16393926
 ] 

ASF subversion and git services commented on SOLR-12077:


Commit 9341be83701cf5d3675e9cde85da9ccb97044521 in lucene-solr's branch 
refs/heads/branch_7x from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9341be8 ]

SOLR-12077: Add support for autoAddReplicas in the collection creation dialog 
in Admin UI

(cherry picked from commit 2eeed51)


> Admin UI -- support autoAddReplicas during collection creation
> --
>
> Key: SOLR-12077
> URL: https://issues.apache.org/jira/browse/SOLR-12077
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI, AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: 7.3, master (8.0)
>
> Attachments: SOLR-12077.patch
>
>
> We should add the autoAddReplicas parameter to the advanced options of the 
> collection creation dialogue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12077) Admin UI -- support autoAddReplicas during collection creation

2018-03-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393922#comment-16393922
 ] 

ASF subversion and git services commented on SOLR-12077:


Commit 2eeed51cdf006bdee7dec87b6adf144e7cc0d56e in lucene-solr's branch 
refs/heads/master from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2eeed51 ]

SOLR-12077: Add support for autoAddReplicas in the collection creation dialog 
in Admin UI


> Admin UI -- support autoAddReplicas during collection creation
> --
>
> Key: SOLR-12077
> URL: https://issues.apache.org/jira/browse/SOLR-12077
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI, AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: 7.3, master (8.0)
>
> Attachments: SOLR-12077.patch
>
>
> We should add the autoAddReplicas parameter to the advanced options of the 
> collection creation dialogue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12077) Admin UI -- support autoAddReplicas during collection creation

2018-03-09 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-12077:
-
Attachment: SOLR-12077.patch

> Admin UI -- support autoAddReplicas during collection creation
> --
>
> Key: SOLR-12077
> URL: https://issues.apache.org/jira/browse/SOLR-12077
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI, AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: 7.3, master (8.0)
>
> Attachments: SOLR-12077.patch
>
>
> We should add the autoAddReplicas parameter to the advanced options of the 
> collection creation dialogue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12077) Admin UI -- support autoAddReplicas during collection creation

2018-03-09 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-12077:


 Summary: Admin UI -- support autoAddReplicas during collection 
creation
 Key: SOLR-12077
 URL: https://issues.apache.org/jira/browse/SOLR-12077
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Admin UI, AutoScaling, SolrCloud
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: 7.3, master (8.0)


We should add the autoAddReplicas parameter to the advanced options of the 
collection creation dialogue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_162) - Build # 1500 - Failure!

2018-03-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1500/
Java: 64bit/jdk1.8.0_162 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 60859 lines...]
-ecj-javadoc-lint-tests:
[mkdir] Created dir: /tmp/ecj2001416571
 [ecj-lint] Compiling 874 source files to /tmp/ecj2001416571
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/analysis/TokenizerChainTest.java
 (at line 37)
 [ecj-lint] TokenizerChain tokenizerChain = new TokenizerChain(
 [ecj-lint]^^
 [ecj-lint] Resource leak: 'tokenizerChain' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 2. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/TestCloudConsistency.java
 (at line 23)
 [ecj-lint] import java.net.URL;
 [ecj-lint]
 [ecj-lint] The import java.net.URL is never used
 [ecj-lint] --
 [ecj-lint] 3. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/TestCloudConsistency.java
 (at line 39)
 [ecj-lint] import org.apache.solr.common.cloud.ZkStateReader;
 [ecj-lint]^^
 [ecj-lint] The import org.apache.solr.common.cloud.ZkStateReader is never used
 [ecj-lint] --
 [ecj-lint] 3 problems (2 errors, 1 warning)

BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/build.xml:618: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/build.xml:101: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build.xml:682: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/common-build.xml:2088: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/common-build.xml:2121: 
Compile failed; see the compiler error output for details.

Total time: 80 minutes 50 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-12011) Consistence problem when in-sync replicas are DOWN

2018-03-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393893#comment-16393893
 ] 

ASF subversion and git services commented on SOLR-12011:


Commit e47bf8b63aa732c884091f48ffe5b467c94e590c in lucene-solr's branch 
refs/heads/master from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e47bf8b ]

SOLR-12011: Remove unused imports


> Consistence problem when in-sync replicas are DOWN
> --
>
> Key: SOLR-12011
> URL: https://issues.apache.org/jira/browse/SOLR-12011
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: 7.3, master (8.0)
>
> Attachments: SOLR-12011.patch, SOLR-12011.patch, SOLR-12011.patch, 
> SOLR-12011.patch, SOLR-12011.patch
>
>
> Currently, we can hit a consistency problem when in-sync replicas are DOWN. 
> For example:
>  1. A collection with 1 shard has 1 leader and 2 replicas
>  2. The nodes containing the 2 replicas go down
>  3. The leader receives an update A successfully
>  4. The node containing the leader goes down
>  5. The 2 replicas come back
>  6. One of them becomes leader --> but neither should become leader, since 
> they missed update A
> A solution to this issue:
>  * The idea here is that the term value of each replica (SOLR-11702) is 
> enough to tell whether a replica has received the latest updates or not. 
> Therefore only replicas with the highest term can become the leader.
>  * There are a couple of things that need to be done on this issue
>  ** When the leader receives its first update, its term should be changed 
> from 0 -> 1, so further replicas added to the same shard won't be able to 
> become leader (their term = 0) until they finish recovery
>  ** For DOWN replicas, the leader also needs to check (in DUP.finish()) that 
> those replicas have a term lower than the leader's before returning results 
> to users
>  ** Looking at the term value of a replica alone is not enough to tell us 
> whether that replica is in sync with the leader, because the replica might 
> not have finished the recovery process. We need to introduce another flag 
> (stored on the shard terms node in ZK) to tell us whether the replica has 
> finished recovery. It will look like this:
>  *** {"core_node1" : 1, "core_node2" : 0} — (when core_node2 starts recovery) 
> --->
>  *** {"core_node1" : 1, "core_node2" : 1, "core_node2_recovering" : 1} — 
> (when core_node2 finishes recovery) --->
>  *** {"core_node1" : 1, "core_node2" : 1}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12051) Election timeout when no replicas are qualified to become leader

2018-03-09 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393888#comment-16393888
 ] 

Varun Thacker commented on SOLR-12051:
--

This might have broken precommit because of an unused import

> Election timeout when no replicas are qualified to become leader
> 
>
> Key: SOLR-12051
> URL: https://issues.apache.org/jira/browse/SOLR-12051
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: 7.3, master (8.0)
>
> Attachments: SOLR-12051.patch
>
>
> After SOLR-12011 was committed, we can run into a case where no active 
> replicas are qualified to become the leader. The only two solutions for users 
> in this case are
>  * using the FORCE_LEADER API
>  * bringing back the old leader
> This ticket will introduce a leader election timeout so that the current 
> active replicas can ignore the lost updates and go ahead and become the 
> leader. I think it is better, and less confusing for users, to reuse 
> {{leaderVoteWait}} for this, since its current use (waiting for replicas to 
> come up before leader election) is no longer needed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12067) AutoAddReplicas default 30 second wait time is too low

2018-03-09 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-12067.
--
   Resolution: Fixed
Fix Version/s: master (8.0)
   7.3

Thanks Varun and Mark.

> AutoAddReplicas default 30 second wait time is too low
> --
>
> Key: SOLR-12067
> URL: https://issues.apache.org/jira/browse/SOLR-12067
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 7.3, master (8.0)
>
> Attachments: SOLR-12067.patch
>
>
> If I create a collection with autoAddReplicas=true in Solr 7.x, an 
> AutoAddReplicasPlanAction gets created with waitFor=30 seconds.
> The default should be increased, as a JVM that is down for more than 
> 30 seconds can cause the framework to add a new replica on another node.
> With HDFS this was a cheap operation, as it only involved creating a core 
> and pointing it at the same index directory.
> But for non-shared file systems this is a very expensive operation that can 
> potentially move large indexes around, so maybe we should have a higher default.
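
The change under discussion is only the default; the waitFor of the
autoAddReplicas trigger can also be set explicitly through the autoscaling
write API. The sketch below assumes the Solr 7.x v1 endpoint
/solr/admin/autoscaling, the trigger name .auto_add_replicas, and the
set-trigger payload keys as recalled from the 7.x autoscaling documentation;
verify all of them against the version in use.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    /** Hedged sketch: raise the autoAddReplicas waitFor via the autoscaling write API. */
    public class SetAutoAddReplicasWaitFor {
      public static void main(String[] args) throws Exception {
        // Endpoint, trigger name and payload keys are assumptions to verify.
        String payload = "{ \"set-trigger\": {"
            + " \"name\": \".auto_add_replicas\","
            + " \"event\": \"nodeLost\","
            + " \"waitFor\": \"120s\","      // the higher default proposed in this issue
            + " \"enabled\": true } }";
        HttpURLConnection conn = (HttpURLConnection)
            new URL("http://localhost:8983/solr/admin/autoscaling").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");
        try (OutputStream out = conn.getOutputStream()) {
          out.write(payload.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode());  // 200 expected on success
      }
    }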



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12067) AutoAddReplicas default 30 second wait time is too low

2018-03-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393865#comment-16393865
 ] 

ASF subversion and git services commented on SOLR-12067:


Commit 16c57501a96aa8ecb77e88f81f044df8dc0add60 in lucene-solr's branch 
refs/heads/branch_7x from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=16c5750 ]

SOLR-12067: Increase autoAddReplicas default 30 second wait time to 120 seconds

(cherry picked from commit f0d46ea)


> AutoAddReplicas default 30 second wait time is too low
> --
>
> Key: SOLR-12067
> URL: https://issues.apache.org/jira/browse/SOLR-12067
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 7.3, master (8.0)
>
> Attachments: SOLR-12067.patch
>
>
> If I create a collection with autoAddReplicas=true in Solr 7.x, an 
> AutoAddReplicasPlanAction gets created with waitFor=30 seconds.
> The default should be increased, as a JVM that is down for more than 
> 30 seconds can cause the framework to add a new replica on another node.
> With HDFS this was a cheap operation, as it only involved creating a core 
> and pointing it at the same index directory.
> But for non-shared file systems this is a very expensive operation that can 
> potentially move large indexes around, so maybe we should have a higher default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12067) AutoAddReplicas default 30 second wait time is too low

2018-03-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393862#comment-16393862
 ] 

ASF subversion and git services commented on SOLR-12067:


Commit f0d46ead45dbdd40540db958a621b0a583f6f9e8 in lucene-solr's branch 
refs/heads/master from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f0d46ea ]

SOLR-12067: Increase autoAddReplicas default 30 second wait time to 120 seconds


> AutoAddReplicas default 30 second wait time is too low
> --
>
> Key: SOLR-12067
> URL: https://issues.apache.org/jira/browse/SOLR-12067
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Attachments: SOLR-12067.patch
>
>
> If I create a collection with autoAddReplicas=true in Solr 7.x, an 
> AutoAddReplicasPlanAction gets created with waitFor=30 seconds.
> The default should be increased, as a JVM that is down for more than 
> 30 seconds can cause the framework to add a new replica on another node.
> With HDFS this was a cheap operation, as it only involved creating a core 
> and pointing it at the same index directory.
> But for non-shared file systems this is a very expensive operation that can 
> potentially move large indexes around, so maybe we should have a higher default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 9 - Failure

2018-03-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/9/

8 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest.testInactiveShardCleanup

Error Message:
cleanup action didn't run

Stack Trace:
java.lang.AssertionError: cleanup action didn't run
at 
__randomizedtesting.SeedInfo.seed([A86D2A7D5E1CA120:B541EA0F3F5F862B]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest.testInactiveShardCleanup(ScheduledMaintenanceTriggerTest.java:197)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testEventQueue

Error Message:
action wasn't interrupted

Stack Trace:
java.lang.AssertionError: action wasn't interrupted
at 
__randomizedtesting.SeedInfo.seed([A86D2A7D5E1CA120:61D868D3577B67D5]:0)
at 

[jira] [Commented] (SOLR-12051) Election timeout when no replicas are qualified to become leader

2018-03-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393858#comment-16393858
 ] 

ASF subversion and git services commented on SOLR-12051:


Commit 4abdb24667f28777be512047bb012a7346d8039b in lucene-solr's branch 
refs/heads/branch_7x from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4abdb24 ]

SOLR-12051: Adding error log in case of data loss


> Election timeout when no replicas are qualified to become leader
> 
>
> Key: SOLR-12051
> URL: https://issues.apache.org/jira/browse/SOLR-12051
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: 7.3, master (8.0)
>
> Attachments: SOLR-12051.patch
>
>
> After SOLR-12011 gets committed, we can run into a case where no active 
> replicas are qualified to become the leader. The only two solutions for users 
> in this case are
>  * Using the FORCE_LEADER API
>  * Bringing back the old leader
> This ticket will introduce a leader election timeout so that the current active 
> replicas can ignore the lost updates and go ahead to become the leader. I 
> think it is better not to confuse users by reusing {{leaderVoteWait}} 
> (the current use of {{leaderVoteWait}}, waiting for replicas to come up before 
> leader election, is no longer needed).
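
For readers following along, the FORCE_LEADER workaround mentioned above maps to the
Collections API FORCELEADER action; a minimal sketch, with the host, collection and
shard names as placeholders:

{noformat}
# placeholders: adjust host, collection and shard to your cluster
curl "http://localhost:8983/solr/admin/collections?action=FORCELEADER&collection=myCollection&shard=shard1"
{noformat}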



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12051) Election timeout when no replicas are qualified to become leader

2018-03-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393857#comment-16393857
 ] 

ASF subversion and git services commented on SOLR-12051:


Commit 05d4a9320cfe95d93655feb39a6c6a2945e98c76 in lucene-solr's branch 
refs/heads/master from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=05d4a93 ]

SOLR-12051: Adding error log in case of data loss


> Election timeout when no replicas are qualified to become leader
> 
>
> Key: SOLR-12051
> URL: https://issues.apache.org/jira/browse/SOLR-12051
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: 7.3, master (8.0)
>
> Attachments: SOLR-12051.patch
>
>
> After SOLR-12011 gets committed, we can run into a case where no active 
> replicas are qualified to become the leader. The only two solutions for users 
> in this case are
>  * Using the FORCE_LEADER API
>  * Bringing back the old leader
> This ticket will introduce a leader election timeout so that the current active 
> replicas can ignore the lost updates and go ahead to become the leader. I 
> think it is better not to confuse users by reusing {{leaderVoteWait}} 
> (the current use of {{leaderVoteWait}}, waiting for replicas to come up before 
> leader election, is no longer needed).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12066) Autoscaling move replica can cause core initialization failure on the original JVM

2018-03-09 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393840#comment-16393840
 ] 

Varun Thacker edited comment on SOLR-12066 at 3/10/18 12:13 AM:


Here is another scenario where this happens which doesn't need autoscaling:
 * Start a 2-node cluster
 * Create a 1 shard x 2 replica collection
 * Stop node2
 * Call delete replica for the replica on node2. At this point state.json 
will remove the entry for replica2, but the local index will still exist
 * Start node2. You'll get a core initialization failure.


was (Author: varunthacker):
Here is another scenario where this happens:
 * Start a 2-node cluster
 * Create a 1 shard x 2 replica collection
 * Stop node2
 * Call delete replica for the replica on node2. At this point state.json 
will remove the entry for replica2, but the local index will still exist
 * Start node2. You'll get a core initialization failure.
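
A minimal shell sketch of the scenario above, assuming a fresh local install; ports,
the second node's home directory, the collection name and the replica name are all
placeholders:

{noformat}
bin/solr start -c -p 8983                                    # node1, embedded ZK on 9983
bin/solr start -c -p 7574 -z localhost:9983 -s <node2-home>  # node2 needs its own solr home
curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=test&numShards=1&replicationFactor=2"
bin/solr stop -p 7574                                        # stop node2
# delete the replica that lived on node2 (replica name is a placeholder)
curl "http://localhost:8983/solr/admin/collections?action=DELETEREPLICA&collection=test&shard=shard1&replica=core_node4"
bin/solr start -c -p 7574 -z localhost:9983 -s <node2-home>  # restart node2; core init failure expected
{noformat}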

> Autoscaling move replica can cause core initialization failure on the 
> original JVM
> --
>
> Key: SOLR-12066
> URL: https://issues.apache.org/jira/browse/SOLR-12066
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
>
> Initially when SOLR-12047 was created it looked like waiting for a state in 
> ZK for only 3 seconds was the culprit for cores not loading up
>  
> But it turns out to be something else. Here are the steps to reproduce this 
> problem
>  
>  - create a 3 node cluster
>  - create a 1 shard X 2 replica collection to use node1 and node2 ( 
> [http://localhost:8983/solr/admin/collections?action=create=test_node_lost=1=2=true]
>  )
>  - stop node 2 : ./bin/solr stop -p 7574
>  - Solr will create a new replica on node3 after 30 seconds because of the 
> ".auto_add_replicas" trigger
>  - At this point state.json has info about replicas being on node1 and node3
>  - Start node2. Bam!
> {code:java}
> java.util.concurrent.ExecutionException: 
> org.apache.solr.common.SolrException: Unable to create core 
> [test_node_lost_shard1_replica_n2]
> ...
> Caused by: org.apache.solr.common.SolrException: Unable to create core 
> [test_node_lost_shard1_replica_n2]
> at 
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1053)
> ...
> Caused by: org.apache.solr.common.SolrException: 
> at org.apache.solr.cloud.ZkController.preRegister(ZkController.java:1619)
> at 
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1030)
> ...
> Caused by: org.apache.solr.common.SolrException: coreNodeName core_node4 does 
> not exist in shard shard1: 
> DocCollection(test_node_lost//collections/test_node_lost/state.json/12)={
> ...{code}
>  
> The practical effect of this is not big, since the move replica has already 
> put the replica on another JVM. But to the user it's super confusing as to 
> what's happening. They can never get rid of this error unless they manually 
> clean up the data directory on node2 and restart.
>  
> Please note: I chose autoAddReplicas=true to reproduce this, but a user could 
> be using a node-lost trigger and run into the same issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12067) AutoAddReplicas default 30 second wait time is too low

2018-03-09 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393848#comment-16393848
 ] 

Mark Miller commented on SOLR-12067:


I have no problem with a higher default.

> AutoAddReplicas default 30 second wait time is too low
> --
>
> Key: SOLR-12067
> URL: https://issues.apache.org/jira/browse/SOLR-12067
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Attachments: SOLR-12067.patch
>
>
> If I create a collection with autoAddReplicas=true in Solr 7.x, an 
> AutoAddReplicasPlanAction gets created with waitFor=30 seconds.
> The default should be increased, as a JVM that is down for more than 
> 30 seconds can cause the framework to add a new replica on another node.
> With HDFS this was a cheap operation, as it only involved creating a core and 
> pointing it to the same index directory. 
> But for non-shared file systems this is a very expensive operation that can 
> potentially move large indexes around, so maybe we should have a higher default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12067) AutoAddReplicas default 30 second wait time is too low

2018-03-09 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393846#comment-16393846
 ] 

Shalin Shekhar Mangar commented on SOLR-12067:
--

[~markrmil...@gmail.com] -- any objections to raising default timeout to 2 
minutes?

> AutoAddReplicas default 30 second wait time is too low
> --
>
> Key: SOLR-12067
> URL: https://issues.apache.org/jira/browse/SOLR-12067
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Attachments: SOLR-12067.patch
>
>
> If I create a collection with autoAddReplicas=true in Solr 7.x, an 
> AutoAddReplicasPlanAction gets created with waitFor=30 seconds.
> The default should be increased, as a JVM that is down for more than 
> 30 seconds can cause the framework to add a new replica on another node.
> With HDFS this was a cheap operation, as it only involved creating a core and 
> pointing it to the same index directory. 
> But for non-shared file systems this is a very expensive operation that can 
> potentially move large indexes around, so maybe we should have a higher default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12066) Autoscaling move replica can cause core initialization failure on the original JVM

2018-03-09 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393840#comment-16393840
 ] 

Varun Thacker commented on SOLR-12066:
--

Here is another scenario where this happens:
 * Start a 2-node cluster
 * Create a 1 shard x 2 replica collection
 * Stop node2
 * Call delete replica for the replica on node2. At this point state.json 
will remove the entry for replica2, but the local index will still exist
 * Start node2. You'll get a core initialization failure.

> Autoscaling move replica can cause core initialization failure on the 
> original JVM
> --
>
> Key: SOLR-12066
> URL: https://issues.apache.org/jira/browse/SOLR-12066
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
>
> Initially when SOLR-12047 was created it looked like waiting for a state in 
> ZK for only 3 seconds was the culprit for cores not loading up
>  
> But it turns out to be something else. Here are the steps to reproduce this 
> problem
>  
>  - create a 3 node cluster
>  - create a 1 shard X 2 replica collection to use node1 and node2 ( 
> [http://localhost:8983/solr/admin/collections?action=create=test_node_lost=1=2=true]
>  )
>  - stop node 2 : ./bin/solr stop -p 7574
>  - Solr will create a new replica on node3 after 30 seconds because of the 
> ".auto_add_replicas" trigger
>  - At this point state.json has info about replicas being on node1 and node3
>  - Start node2. Bam!
> {code:java}
> java.util.concurrent.ExecutionException: 
> org.apache.solr.common.SolrException: Unable to create core 
> [test_node_lost_shard1_replica_n2]
> ...
> Caused by: org.apache.solr.common.SolrException: Unable to create core 
> [test_node_lost_shard1_replica_n2]
> at 
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1053)
> ...
> Caused by: org.apache.solr.common.SolrException: 
> at org.apache.solr.cloud.ZkController.preRegister(ZkController.java:1619)
> at 
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1030)
> ...
> Caused by: org.apache.solr.common.SolrException: coreNodeName core_node4 does 
> not exist in shard shard1: 
> DocCollection(test_node_lost//collections/test_node_lost/state.json/12)={
> ...{code}
>  
> The practical effect of this is not big, since the move replica has already 
> put the replica on another JVM. But to the user it's super confusing as to 
> what's happening. They can never get rid of this error unless they manually 
> clean up the data directory on node2 and restart.
>  
> Please note: I chose autoAddReplicas=true to reproduce this, but a user could 
> be using a node-lost trigger and run into the same issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-9.0.1) - Build # 7212 - Still unstable!

2018-03-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7212/
Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseG1GC

10 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.store.TestSimpleFSDirectory

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_1E4EDE2630BC3C07-001\testCopyBytesWithThreads-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_1E4EDE2630BC3C07-001\testCopyBytesWithThreads-001

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_1E4EDE2630BC3C07-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_1E4EDE2630BC3C07-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_1E4EDE2630BC3C07-001\testCopyBytesWithThreads-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_1E4EDE2630BC3C07-001\testCopyBytesWithThreads-001
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_1E4EDE2630BC3C07-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_1E4EDE2630BC3C07-001

at __randomizedtesting.SeedInfo.seed([1E4EDE2630BC3C07]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
org.apache.lucene.replicator.IndexAndTaxonomyReplicationClientTest.testConsistencyOnExceptions

Error Message:
Captured an uncaught exception in thread: Thread[id=23, 
name=ReplicationThread-indexAndTaxo, state=RUNNABLE, 
group=TGRP-IndexAndTaxonomyReplicationClientTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=23, name=ReplicationThread-indexAndTaxo, 
state=RUNNABLE, group=TGRP-IndexAndTaxonomyReplicationClientTest]
at 
__randomizedtesting.SeedInfo.seed([EE5D813DDE6885D6:61D3669DCC047629]:0)
Caused by: java.lang.AssertionError: handler failed too many times: -1
at __randomizedtesting.SeedInfo.seed([EE5D813DDE6885D6]:0)
at 
org.apache.lucene.replicator.IndexAndTaxonomyReplicationClientTest$4.handleUpdateException(IndexAndTaxonomyReplicationClientTest.java:422)
at 
org.apache.lucene.replicator.ReplicationClient$ReplicationThread.run(ReplicationClient.java:77)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.mockfile.TestHandleTrackingFS

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\test-framework\test\J1\temp\lucene.mockfile.TestHandleTrackingFS_FA43152B697D3172-001\tempDir-005:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\test-framework\test\J1\temp\lucene.mockfile.TestHandleTrackingFS_FA43152B697D3172-001\tempDir-005

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\test-framework\test\J1\temp\lucene.mockfile.TestHandleTrackingFS_FA43152B697D3172-001:
 java.nio.file.DirectoryNotEmptyException: 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_162) - Build # 21606 - Failure!

2018-03-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21606/
Java: 64bit/jdk1.8.0_162 -XX:-UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 60698 lines...]
-ecj-javadoc-lint-tests:
[mkdir] Created dir: /tmp/ecj399487794
 [ecj-lint] Compiling 877 source files to /tmp/ecj399487794
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/analysis/TokenizerChainTest.java
 (at line 37)
 [ecj-lint] TokenizerChain tokenizerChain = new TokenizerChain(
 [ecj-lint]^^
 [ecj-lint] Resource leak: 'tokenizerChain' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 2. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/cloud/TestCloudConsistency.java
 (at line 23)
 [ecj-lint] import java.net.URL;
 [ecj-lint]
 [ecj-lint] The import java.net.URL is never used
 [ecj-lint] --
 [ecj-lint] 3. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/cloud/TestCloudConsistency.java
 (at line 39)
 [ecj-lint] import org.apache.solr.common.cloud.ZkStateReader;
 [ecj-lint]^^
 [ecj-lint] The import org.apache.solr.common.cloud.ZkStateReader is never used
 [ecj-lint] --
 [ecj-lint] 3 problems (2 errors, 1 warning)

BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:618: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:101: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build.xml:682: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/common-build.xml:2088: 
The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/common-build.xml:2121: 
Compile failed; see the compiler error output for details.

Total time: 68 minutes 44 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-10512) Innerjoin streaming expressions - Invalid JoinStream error

2018-03-09 Thread Markus Kalkbrenner (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393766#comment-16393766
 ] 

Markus Kalkbrenner commented on SOLR-10512:
---

In fact we had rather sophisticated stuff in stream A and stream B.

Meanwhile I found out what gives us a reliable result: fieldA has to be on the 
left and fieldB on the right, but you need to ensure that both streams are 
properly sorted!

So for whatever is in "search()", this works:

{{innerJoin(}}
 {{  sort(search(A), by="fieldA"),}}
 {{  sort(search(B), by="fieldB"),}}
 {{  on="fieldA=fieldB"}}
 {{)}}
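
To make that concrete, a hedged curl sketch of the same shape against the /stream
handler; the collection, field and host names are placeholders:

{noformat}
curl --data-urlencode 'expr=innerJoin(
  sort(search(collectionA, q="*:*", fl="fieldA", sort="fieldA asc"), by="fieldA asc"),
  sort(search(collectionB, q="*:*", fl="fieldB", sort="fieldB asc"), by="fieldB asc"),
  on="fieldA=fieldB"
)' http://localhost:8983/solr/collectionA/stream
{noformat}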

 

> Innerjoin streaming expressions - Invalid JoinStream error
> --
>
> Key: SOLR-10512
> URL: https://issues.apache.org/jira/browse/SOLR-10512
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 6.4.2, 6.5
> Environment: Debian Jessie
>Reporter: Dominique Béjean
>Priority: Major
>
> It looks like the innerJoin streaming expression does not work as explained in 
> the documentation. An invalid JoinStream error occurs.
> {noformat}
> curl --data-urlencode 'expr=innerJoin(
> search(books, 
>q="*:*", 
>fl="id", 
>sort="id asc"),
> search(reviews, 
>q="*:*", 
>fl="id_book_s", 
>sort="id_book_s asc"), 
> on="id=id_books_s"
> )' http://localhost:8983/solr/books/stream
>   
> {"result-set":{"docs":[{"EXCEPTION":"Invalid JoinStream - all incoming stream 
> comparators (sort) must be a superset of this stream's 
> equalitor.","EOF":true}]}}   
> {noformat}
> It is totally similar to the documentation example
> 
> {noformat}
> innerJoin(
>   search(people, q=*:*, fl="personId,name", sort="personId asc"),
>   search(pets, q=type:cat, fl="ownerId,petName", sort="ownerId asc"),
>   on="personId=ownerId"
> )
> {noformat}
> Queries on each collection give :
> {noformat}
> $ curl --data-urlencode 'expr=search(books, 
>q="*:*", 
>fl="id, title_s, pubyear_i", 
>sort="pubyear_i asc", 
>qt="/export")' 
> http://localhost:8983/solr/books/stream
> {
>   "result-set": {
> "docs": [
>   {
> "title_s": "Friends",
> "pubyear_i": 1994,
> "id": "book2"
>   },
>   {
> "title_s": "The Way of Kings",
> "pubyear_i": 2010,
> "id": "book1"
>   },
>   {
> "EOF": true,
> "RESPONSE_TIME": 16
>   }
> ]
>   }
> }
> $ curl --data-urlencode 'expr=search(reviews, 
>q="author_s:d*", 
>fl="id, id_book_s, stars_i, review_dt", 
>sort="id_book_s asc", 
>qt="/export")' 
> http://localhost:8983/solr/reviews/stream
>  
> {
>   "result-set": {
> "docs": [
>   {
> "stars_i": 3,
> "id": "book1_c2",
> "id_book_s": "book1",
> "review_dt": "2014-03-15T12:00:00Z"
>   },
>   {
> "stars_i": 4,
> "id": "book1_c3",
> "id_book_s": "book1",
> "review_dt": "2014-12-15T12:00:00Z"
>   },
>   {
> "stars_i": 3,
> "id": "book2_c2",
> "id_book_s": "book2",
> "review_dt": "1994-03-15T12:00:00Z"
>   },
>   {
> "stars_i": 4,
> "id": "book2_c3",
> "id_book_s": "book2",
> "review_dt": "1994-12-15T12:00:00Z"
>   },
>   {
> "EOF": true,
> "RESPONSE_TIME": 47
>   }
> ]
>   }
> }
> {noformat}
> After more tests, I just had to invert the "on" clause to make it work
> {noformat}
> curl --data-urlencode 'expr=innerJoin(
> search(books, 
>q="*:*", 
>fl="id", 
>sort="id asc"),
> search(reviews, 
>q="*:*", 
>fl="id_book_s", 
>sort="id_book_s asc"), 
> on="id_books_s=id"
> )' http://localhost:8983/solr/books/stream
> 
> {
>   

[jira] [Commented] (SOLR-11049) Solr in cloud mode silently fails uploading a big LTR model

2018-03-09 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393741#comment-16393741
 ] 

Shawn Heisey commented on SOLR-11049:
-

Just became aware of this issue due to the mailing list.

It's awesome that there's a workaround, and it does look like the reference 
guide was updated.

But anytime somebody performs an action and it doesn't work, Solr should not 
return a success status (0), and there should be at least one log entry 
explaining what went wrong.

Separately: Do we need to be worried about the fact that the failed upload took 
24 seconds?  I'm guessing that there was at least one timeout involved with 
this.  I would have expected ZK to reject the upload quite quickly, and to do 
it in a way that Solr *can* detect as an error.  It would be good to figure out 
whether it was Solr or ZK that misbehaved here.
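
For anyone hitting this from the list: the workaround in the report is the
jute.maxbuffer system property, which has to be raised on both the ZooKeeper servers
and every Solr JVM; a hedged sketch, with the file locations and the buffer value as
placeholders rather than the values from the report:

{noformat}
# solr.in.sh on every Solr node (value is a placeholder; must match ZooKeeper's)
SOLR_OPTS="$SOLR_OPTS -Djute.maxbuffer=10485760"

# zookeeper-env.sh (or equivalent) on every ZooKeeper server
SERVER_JVMFLAGS="$SERVER_JVMFLAGS -Djute.maxbuffer=10485760"
{noformat}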

> Solr in cloud mode silently fails uploading a big LTR model
> ---
>
> Key: SOLR-11049
> URL: https://issues.apache.org/jira/browse/SOLR-11049
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LTR
> Environment: tested with Solr 6.6 and an integrated ZooKeeper 
>Reporter: Stefan Langenmaier
>Priority: Major
> Fix For: 7.2, master (8.0)
>
> Attachments: SOLR-11049.patch
>
>
> Hi,
> I'm using Solr in cloud mode and have a MultipleAdditiveTreesModel about 
> 3 MB in size. When I upload the model with
> {noformat}
> curl -v -XPUT 'http://localhost:8983/solr/tmdb/schema/model-store' 
> --data-binary @/big-tree.model -H 'Content-type:application/json'
> {noformat}
> I get the following response
> {code:html}
> {
>   "responseHeader":{
> "status":0,
> "QTime":24318}
> }
> {code}
> This looks kind of slow, but there is no error. When I check the config, the 
> model is not visible, and when I try to run a query that uses the model I get 
> the following error:
> {code:html}
> "error":{
> "metadata":[
>   "error-class","org.apache.solr.common.SolrException",
>   "root-error-class","org.apache.solr.common.SolrException"],
> "msg":"cannot find model bigTreeModel",
> "code":400}
> {code}
> When I upload the model to a Solr instance where I increased the ZooKeeper 
> znode size limit with
> {noformat}
> -Djute.maxbuffer=0x1ff
> {noformat}
> the same model upload succeeds much faster
> {code:html}
> {
>   "responseHeader":{
> "status":0,
> "QTime":689}
> }
> {code}
> The model is visible in the configuration and queries that use it run without 
> error.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12067) AutoAddReplicas default 30 second wait time is too low

2018-03-09 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393740#comment-16393740
 ] 

Shalin Shekhar Mangar commented on SOLR-12067:
--

Ah, sorry, I was confused about autoReplicaFailoverBadNodeExpiration. That was 
only used to expire entries from the bad nodes cache and had nothing to do with 
how soon replicas are moved. Still, I think being conservative here is not a 
bad idea.

> AutoAddReplicas default 30 second wait time is too low
> --
>
> Key: SOLR-12067
> URL: https://issues.apache.org/jira/browse/SOLR-12067
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Attachments: SOLR-12067.patch
>
>
> If I create a collection with autoAddReplicas=true in Solr 7.x, an 
> AutoAddReplicasPlanAction gets created with waitFor=30 seconds.
> The default should be increased, as a JVM that is down for more than 
> 30 seconds can cause the framework to add a new replica on another node.
> With HDFS this was a cheap operation, as it only involved creating a core and 
> pointing it to the same index directory. 
> But for non-shared file systems this is a very expensive operation that can 
> potentially move large indexes around, so maybe we should have a higher default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12067) AutoAddReplicas default 30 second wait time is too low

2018-03-09 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-12067:
-
Attachment: SOLR-12067.patch

> AutoAddReplicas default 30 second wait time is too low
> --
>
> Key: SOLR-12067
> URL: https://issues.apache.org/jira/browse/SOLR-12067
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Attachments: SOLR-12067.patch
>
>
> If I create a collection with autoAddReplicas=true in Solr 7.x, an 
> AutoAddReplicasPlanAction gets created with waitFor=30 seconds.
> The default should be increased, as a JVM that is down for more than 
> 30 seconds can cause the framework to add a new replica on another node.
> With HDFS this was a cheap operation, as it only involved creating a core and 
> pointing it to the same index directory. 
> But for non-shared file systems this is a very expensive operation that can 
> potentially move large indexes around, so maybe we should have a higher default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12067) AutoAddReplicas default 30 second wait time is too low

2018-03-09 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393730#comment-16393730
 ] 

Shalin Shekhar Mangar commented on SOLR-12067:
--

Thanks Varun. I agree 30 seconds is too low. Actually, I found that with HDFS the 
timeout was autoReplicaFailoverBadNodeExpiration (default 60s) + 
autoReplicaFailoverWaitAfterExpiration (default 30s). We deprecated the 
autoReplicaFailoverBadNodeExpiration value but did not add it to the default 
autoReplicaFailoverWaitAfterExpiration. So the timeout should be at least 90 
seconds. I think we should be conservative here and set this to a higher value, 
say 120s.
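
Until the default changes, users can already override waitFor on the built-in trigger
through the autoscaling write API; a minimal sketch (the 120s value mirrors the
suggestion above, and the payload is trimmed to the fields being changed, so the
trigger's actions may also need to be restated in practice):

{noformat}
curl -X POST -H 'Content-type:application/json' http://localhost:8983/solr/admin/autoscaling -d '{
  "set-trigger": {
    "name": ".auto_add_replicas",
    "event": "nodeLost",
    "waitFor": "120s"
  }
}'
{noformat}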

> AutoAddReplicas default 30 second wait time is too low
> --
>
> Key: SOLR-12067
> URL: https://issues.apache.org/jira/browse/SOLR-12067
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Shalin Shekhar Mangar
>Priority: Major
>
> If I create a collection with autoAddReplicas=true in Solr 7.x, an 
> AutoAddReplicasPlanAction gets created with waitFor=30 seconds.
> The default should be increased, as a JVM that is down for more than 
> 30 seconds can cause the framework to add a new replica on another node.
> With HDFS this was a cheap operation, as it only involved creating a core and 
> pointing it to the same index directory. 
> But for non-shared file systems this is a very expensive operation that can 
> potentially move large indexes around, so maybe we should have a higher default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-12067) AutoAddReplicas default 30 second wait time is too low

2018-03-09 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reassigned SOLR-12067:


Assignee: Shalin Shekhar Mangar

> AutoAddReplicas default 30 second wait time is too low
> --
>
> Key: SOLR-12067
> URL: https://issues.apache.org/jira/browse/SOLR-12067
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Shalin Shekhar Mangar
>Priority: Major
>
> If I create a collection with autoAddReplicas=true in Solr 7.x, an 
> AutoAddReplicasPlanAction gets created with waitFor=30 seconds.
> The default should be increased, as a JVM that is down for more than 
> 30 seconds can cause the framework to add a new replica on another node.
> With HDFS this was a cheap operation, as it only involved creating a core and 
> pointing it to the same index directory. 
> But for non-shared file systems this is a very expensive operation that can 
> potentially move large indexes around, so maybe we should have a higher default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 974 - Still Failing

2018-03-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/974/

No tests ran.

Build Log:
[...truncated 30082 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 491 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 230 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] Java 9 JAVA_HOME=/home/jenkins/tools/java/latest1.9
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.02 sec (12.1 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-8.0.0-src.tgz...
   [smoker] 30.3 MB in 0.04 sec (747.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.tgz...
   [smoker] 73.3 MB in 0.10 sec (731.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.zip...
   [smoker] 83.8 MB in 0.11 sec (733.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6253 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] test demo with 9...
   [smoker]   got 6253 hits for query "lucene"
   [smoker] checkindex with 9...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6253 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] test demo with 9...
   [smoker]   got 6253 hits for query "lucene"
   [smoker] checkindex with 9...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.badapples=false 
-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 212 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker] run tests w/ Java 9 and testArgs='-Dtests.badapples=false 
-Dtests.slow=false'...
   [smoker] test demo with 9...
   [smoker]   got 212 hits for query "lucene"
   [smoker] checkindex with 9...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (79.2 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-8.0.0-src.tgz...
   [smoker] 53.4 MB in 0.64 sec (83.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-8.0.0.tgz...
   [smoker] 154.5 MB in 0.69 sec (225.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-8.0.0.zip...
   [smoker] 155.6 MB in 0.99 sec (157.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-8.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 

[jira] [Commented] (SOLR-12063) Fix tlog entry indexes in UpdateLog for CDCR to work smoothly.

2018-03-09 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393709#comment-16393709
 ] 

Varun Thacker commented on SOLR-12063:
--

{quote}Somewhere in the test case we are printing all the data in zookeeper. 
This happens multiple times and fills the console. Can we figure out where is 
this happening and fix it?
{quote}
Created SOLR-12076

> Fix tlog entry indexes in UpdateLog for CDCR to work smoothly.
> --
>
> Key: SOLR-12063
> URL: https://issues.apache.org/jira/browse/SOLR-12063
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.2
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
>Priority: Major
> Attachments: SOLR-12063.patch, SOLR-12063.patch, SOLR-12063.patch, 
> SOLR-12063.patch, test-report-PeerSyncTest, test-report-TestStressRecovery
>
>
> In *UpdateLog*, {{RecentUpdates}} reads tlog entries, and throughout 
> the project the entry indexes for the various operations are consistent, but they 
> are odd in this part. Since we included a new entry in TransactionLog for CDCR, read 
> operations in the {{update()}} method of {{RecentUpdates}} rightfully throw an error, 
> as elements are read from the wrong indexes of the tlog entry. The entry indexes of 
> the tlog should be consistent throughout.
> {code}
>   [beaster]   2> 27394 WARN  (qtp97093533-72) [n:127.0.0.1:44658_solr 
> c:cdcr-cluster1 s:shard1 r:core_node3 x:cdcr-cluster1_shard1_replica_n1] 
> o.a.s.u.UpdateLog Unexpected log entry or corrupt log.  Entry=[2, 
> -1594312216007409664, [B@28e6859c, true]
>   [beaster]   2> java.lang.ClassCastException: java.lang.Boolean cannot be 
> cast to [B
>   [beaster]   2>  at 
> org.apache.solr.update.UpdateLog$RecentUpdates.update(UpdateLog.java:1443)
>   [beaster]   2>  at 
> org.apache.solr.update.UpdateLog$RecentUpdates.(UpdateLog.java:1340)
>   [beaster]   2>  at 
> org.apache.solr.update.UpdateLog.getRecentUpdates(UpdateLog.java:1513)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler.handleShardCheckpointAction(CdcrRequestHandler.java:448)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler.handleRequestBody(CdcrRequestHandler.java:198)
>   [beaster]   2>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:195)
>   [beaster]   2>  at 
> org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
>   [beaster]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
>   [beaster]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:517)
>   [beaster]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:384)
>   [beaster]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:330)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
>   [beaster]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12076) Remove more usages of printLayout in CDCR tests

2018-03-09 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393708#comment-16393708
 ] 

Varun Thacker commented on SOLR-12076:
--

Simple patch. I'll run tests and precommit and then commit this shortly.

> Remove more usages of printLayout in CDCR tests
> ---
>
> Key: SOLR-12076
> URL: https://issues.apache.org/jira/browse/SOLR-12076
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Minor
> Attachments: SOLR-12076.patch
>
>
> All the CDCR tests simply print everything stored in ZooKeeper when we start 
> the servers. 
> It adds no value in my opinion and simply generates noise.
> In general we should remove printLayoutToStdOut, which prints everything, and 
> pass a parameter to print only the particular set of znodes that callers care 
> about. For example, if the leader election tests fail, print everything related 
> to that collection rather than everything including the configs.
> It's also a public API, so in the interest of time I don't want to tackle that 
> here. I plan on specifically tackling the usages in CDCR tests and removing 
> them. SOLR-6090 is also related, for reference.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12076) Remove more usages of printLayout in CDCR tests

2018-03-09 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12076:
-
Attachment: SOLR-12076.patch

> Remove more usages of printLayout in CDCR tests
> ---
>
> Key: SOLR-12076
> URL: https://issues.apache.org/jira/browse/SOLR-12076
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Minor
> Attachments: SOLR-12076.patch
>
>
> All the CDCR tests simply print everything stored in ZooKeeper when we start 
> the servers. 
> It adds no value in my opinion and simply generates noise.
> In general we should remove printLayoutToStdOut, which prints everything, and 
> pass a parameter to print only the particular set of znodes that callers care 
> about. For example, if the leader election tests fail, print everything related 
> to that collection rather than everything including the configs.
> It's also a public API, so in the interest of time I don't want to tackle that 
> here. I plan on specifically tackling the usages in CDCR tests and removing 
> them. SOLR-6090 is also related, for reference.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12076) Remove more usages of printLayout in CDCR tests

2018-03-09 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-12076:


 Summary: Remove more usages of printLayout in CDCR tests
 Key: SOLR-12076
 URL: https://issues.apache.org/jira/browse/SOLR-12076
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Varun Thacker


All the CDCR tests simply print everything stored in ZooKeeper when we start 
the servers.

It adds no value in my opinion and simply generates noise.

In general we should remove printLayoutToStdOut, which prints everything, and 
pass a parameter to print only the particular set of znodes that callers care 
about. For example, if the leader election tests fail, print everything related 
to that collection rather than everything including the configs.

It's also a public API, so in the interest of time I don't want to tackle that 
here. I plan on specifically tackling the usages in CDCR tests and removing them. 
SOLR-6090 is also related, for reference.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 482 - Failure!

2018-03-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/482/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 60794 lines...]
-ecj-javadoc-lint-tests:
[mkdir] Created dir: /var/tmp/ecj1048831956
 [ecj-lint] Compiling 874 source files to /var/tmp/ecj1048831956
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/export/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/export/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-7.x-Solaris/solr/core/src/test/org/apache/solr/analysis/TokenizerChainTest.java
 (at line 37)
 [ecj-lint] TokenizerChain tokenizerChain = new TokenizerChain(
 [ecj-lint]^^
 [ecj-lint] Resource leak: 'tokenizerChain' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 2. ERROR in 
/export/home/jenkins/workspace/Lucene-Solr-7.x-Solaris/solr/core/src/test/org/apache/solr/cloud/TestCloudConsistency.java
 (at line 23)
 [ecj-lint] import java.net.URL;
 [ecj-lint]
 [ecj-lint] The import java.net.URL is never used
 [ecj-lint] --
 [ecj-lint] 3. ERROR in 
/export/home/jenkins/workspace/Lucene-Solr-7.x-Solaris/solr/core/src/test/org/apache/solr/cloud/TestCloudConsistency.java
 (at line 39)
 [ecj-lint] import org.apache.solr.common.cloud.ZkStateReader;
 [ecj-lint]^^
 [ecj-lint] The import org.apache.solr.common.cloud.ZkStateReader is never used
 [ecj-lint] --
 [ecj-lint] 3 problems (2 errors, 1 warning)

BUILD FAILED
/export/home/jenkins/workspace/Lucene-Solr-7.x-Solaris/build.xml:618: The 
following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-7.x-Solaris/build.xml:101: The 
following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-7.x-Solaris/solr/build.xml:682: The 
following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-7.x-Solaris/lucene/common-build.xml:2088:
 The following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-7.x-Solaris/lucene/common-build.xml:2121:
 Compile failed; see the compiler error output for details.

Total time: 94 minutes 13 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Setting 
ANT_1_8_2_HOME=/export/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Setting 
ANT_1_8_2_HOME=/export/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Setting 
ANT_1_8_2_HOME=/export/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/export/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/export/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/export/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-7896) Add a login page for Solr Administrative Interface

2018-03-09 Thread Gus Heck (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393600#comment-16393600
 ] 

Gus Heck commented on SOLR-7896:


It should take special configuration to make the auth schemes diverge, I think. 
That seems like the corner case, and unified auth management would be the core 
use case IMHO. By default, one scheme for all URLs; if further configured, 
secondary schemes per URL path... 
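
As a point of reference for the "one scheme for all URLs" default, a hedged sketch of
what the existing security.json-based setup looks like today; the user name is a
placeholder, and the credential string must be a real base64 SHA-256 hash plus salt
generated separately:

{noformat}
cat > security.json <<'EOF'
{
  "authentication": {
    "class": "solr.BasicAuthPlugin",
    "blockUnknown": true,
    "credentials": { "admin": "<base64-sha256-hash> <base64-salt>" }
  },
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "permissions": [ { "name": "all", "role": "admin" } ],
    "user-role": { "admin": "admin" }
  }
}
EOF
# upload to ZooKeeper so every node picks it up (zkhost is a placeholder)
server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:9983 -cmd putfile /security.json security.json
{noformat}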

> Add a login page for Solr Administrative Interface
> --
>
> Key: SOLR-7896
> URL: https://issues.apache.org/jira/browse/SOLR-7896
> Project: Solr
>  Issue Type: New Feature
>  Components: Admin UI, security
>Affects Versions: 5.2.1
>Reporter: Aaron Greenspan
>Priority: Major
>  Labels: authentication, login, password
>
> Out of the box, the Solr Administrative interface should require a password 
> that the user is required to set.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 227 - Still Unstable

2018-03-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/227/

[...truncated 32 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-7.x/488/consoleText

[repro] Revision: 3d805dea8b9f97743ba46e71381cdbd0e1350cc4

[repro] Repro line:  ant test  -Dtestcase=TestLargeCluster 
-Dtests.method=testAddNode -Dtests.seed=1B6932A5197F72D1 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=ru -Dtests.timezone=Asia/Oral 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=TestLargeCluster 
-Dtests.method=testBasic -Dtests.seed=1B6932A5197F72D1 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=ru -Dtests.timezone=Asia/Oral 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
78097d2098ef3a4dc6107feb5cbd66d61920a43d
[repro] git fetch
[repro] git checkout 3d805dea8b9f97743ba46e71381cdbd0e1350cc4

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestLargeCluster
[repro] ant compile-test

[...truncated 3310 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestLargeCluster" -Dtests.showOutput=onerror  
-Dtests.seed=1B6932A5197F72D1 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=ru -Dtests.timezone=Asia/Oral -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 8698 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   2/5 failed: org.apache.solr.cloud.autoscaling.sim.TestLargeCluster
[repro] git checkout 78097d2098ef3a4dc6107feb5cbd66d61920a43d

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-Tests-7.x - Build # 489 - Failure

2018-03-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/489/

All tests passed

Build Log:
[...truncated 60816 lines...]
-ecj-javadoc-lint-tests:
[mkdir] Created dir: /tmp/ecj411343891
 [ecj-lint] Compiling 874 source files to /tmp/ecj411343891
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/x1/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/x1/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test/org/apache/solr/analysis/TokenizerChainTest.java
 (at line 37)
 [ecj-lint] TokenizerChain tokenizerChain = new TokenizerChain(
 [ecj-lint]^^
 [ecj-lint] Resource leak: 'tokenizerChain' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 2. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test/org/apache/solr/cloud/TestCloudConsistency.java
 (at line 23)
 [ecj-lint] import java.net.URL;
 [ecj-lint]
 [ecj-lint] The import java.net.URL is never used
 [ecj-lint] --
 [ecj-lint] 3. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test/org/apache/solr/cloud/TestCloudConsistency.java
 (at line 39)
 [ecj-lint] import org.apache.solr.common.cloud.ZkStateReader;
 [ecj-lint]^^
 [ecj-lint] The import org.apache.solr.common.cloud.ZkStateReader is never used
 [ecj-lint] --
 [ecj-lint] 3 problems (2 errors, 1 warning)

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/build.xml:618: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/build.xml:101: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build.xml:682: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/common-build.xml:2088:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/common-build.xml:2121:
 Compile failed; see the compiler error output for details.

Total time: 85 minutes 21 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-12075) TestLargeCluster is too flaky

2018-03-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393493#comment-16393493
 ] 

ASF subversion and git services commented on SOLR-12075:


Commit 4d15ad1cb6274b855504340ad8811bce6688a0df in lucene-solr's branch 
refs/heads/branch_7x from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4d15ad1 ]

SOLR-12075: BadApple TestLargeCluster until the issues can be resolved.


> TestLargeCluster is too flaky
> -
>
> Key: SOLR-12075
> URL: https://issues.apache.org/jira/browse/SOLR-12075
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
>
> This test is failing a lot in jenkins builds, with two types of failures:
>  * specific test method failures - this may be caused by either bugs in the 
> autoscaling code, bugs in the simulator or timing issues. It should be 
> possible to narrow down the cause by using different speeds of simulated time.
>  * suite-level failures due to leaked threads - most of these failures 
> indicate the ongoing Policy calculations, eg:
> {code}
> com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from 
> SUITE scope at org.apache.solr.cloud.autoscaling.sim.TestLargeCluster: 
>   1) Thread[id=21406, name=AutoscalingActionExecutor-7277-thread-1, 
> state=RUNNABLE, group=TGRP-TestLargeCluster]
>at java.util.ArrayList.iterator(ArrayList.java:834)
>at org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:131)
>at org.apache.solr.common.util.Utils.makeDeepCopy(Utils.java:110)
>at org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:92)
>at org.apache.solr.common.util.Utils.makeDeepCopy(Utils.java:108)
>at org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:92)
>at org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:74)
>at org.apache.solr.client.solrj.cloud.autoscaling.Row.copy(Row.java:91)
>at 
> org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.lambda$getMatrixCopy$1(Policy.java:297)
>at 
> org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session$$Lambda$466/1757323495.apply(Unknown
>  Source)
>at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
>at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
>at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
>at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
>at 
> java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
>at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
>at 
> java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
>at 
> org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.getMatrixCopy(Policy.java:298)
>at 
> org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.copy(Policy.java:287)
>at 
> org.apache.solr.client.solrj.cloud.autoscaling.Row.removeReplica(Row.java:156)
>at 
> org.apache.solr.client.solrj.cloud.autoscaling.MoveReplicaSuggester.tryEachNode(MoveReplicaSuggester.java:60)
>at 
> org.apache.solr.client.solrj.cloud.autoscaling.MoveReplicaSuggester.init(MoveReplicaSuggester.java:34)
>at 
> org.apache.solr.client.solrj.cloud.autoscaling.Suggester.getSuggestion(Suggester.java:129)
>at 
> org.apache.solr.cloud.autoscaling.ComputePlanAction.process(ComputePlanAction.java:98)
>at 
> org.apache.solr.cloud.autoscaling.ScheduledTriggers.lambda$null$3(ScheduledTriggers.java:307)
>at 
> org.apache.solr.cloud.autoscaling.ScheduledTriggers$$Lambda$439/951218654.run(Unknown
>  Source)
>at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
>at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$9/1677458082.run(Unknown
>  Source)
>at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>at java.lang.Thread.run(Thread.java:748)
>   at __randomizedtesting.SeedInfo.seed([C6FA0364D13DAFCC]:0)
> {code}
> It's possible that somewhere an InterruptedException is caught and not 
> propagated so that the Policy calculations don't terminate when the thread is 
> interrupted when closing parent components.

[jira] [Commented] (SOLR-12075) TestLargeCluster is too flaky

2018-03-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393492#comment-16393492
 ] 

ASF subversion and git services commented on SOLR-12075:


Commit 78097d2098ef3a4dc6107feb5cbd66d61920a43d in lucene-solr's branch 
refs/heads/master from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=78097d2 ]

SOLR-12075: BadApple TestLargeCluster until the issues can be resolved.


> TestLargeCluster is too flaky
> -
>
> Key: SOLR-12075
> URL: https://issues.apache.org/jira/browse/SOLR-12075
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
>
> This test is failing a lot in jenkins builds, with two types of failures:
>  * specific test method failures - this may be caused by either bugs in the 
> autoscaling code, bugs in the simulator or timing issues. It should be 
> possible to narrow down the cause by using different speeds of simulated time.
>  * suite-level failures due to leaked threads - most of these failures 
> indicate the ongoing Policy calculations, eg:
> {code}
> com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from 
> SUITE scope at org.apache.solr.cloud.autoscaling.sim.TestLargeCluster: 
>   1) Thread[id=21406, name=AutoscalingActionExecutor-7277-thread-1, 
> state=RUNNABLE, group=TGRP-TestLargeCluster]
>at java.util.ArrayList.iterator(ArrayList.java:834)
>at org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:131)
>at org.apache.solr.common.util.Utils.makeDeepCopy(Utils.java:110)
>at org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:92)
>at org.apache.solr.common.util.Utils.makeDeepCopy(Utils.java:108)
>at org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:92)
>at org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:74)
>at org.apache.solr.client.solrj.cloud.autoscaling.Row.copy(Row.java:91)
>at 
> org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.lambda$getMatrixCopy$1(Policy.java:297)
>at 
> org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session$$Lambda$466/1757323495.apply(Unknown
>  Source)
>at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
>at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
>at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
>at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
>at 
> java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
>at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
>at 
> java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
>at 
> org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.getMatrixCopy(Policy.java:298)
>at 
> org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.copy(Policy.java:287)
>at 
> org.apache.solr.client.solrj.cloud.autoscaling.Row.removeReplica(Row.java:156)
>at 
> org.apache.solr.client.solrj.cloud.autoscaling.MoveReplicaSuggester.tryEachNode(MoveReplicaSuggester.java:60)
>at 
> org.apache.solr.client.solrj.cloud.autoscaling.MoveReplicaSuggester.init(MoveReplicaSuggester.java:34)
>at 
> org.apache.solr.client.solrj.cloud.autoscaling.Suggester.getSuggestion(Suggester.java:129)
>at 
> org.apache.solr.cloud.autoscaling.ComputePlanAction.process(ComputePlanAction.java:98)
>at 
> org.apache.solr.cloud.autoscaling.ScheduledTriggers.lambda$null$3(ScheduledTriggers.java:307)
>at 
> org.apache.solr.cloud.autoscaling.ScheduledTriggers$$Lambda$439/951218654.run(Unknown
>  Source)
>at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
>at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$9/1677458082.run(Unknown
>  Source)
>at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>at java.lang.Thread.run(Thread.java:748)
>   at __randomizedtesting.SeedInfo.seed([C6FA0364D13DAFCC]:0)
> {code}
> It's possible that somewhere an InterruptedException is caught and not 
> propagated so that the Policy calculations don't terminate when the thread is 
> interrupted when closing parent components.

[jira] [Created] (SOLR-12075) TestLargeCluster is too flaky

2018-03-09 Thread Andrzej Bialecki (JIRA)
Andrzej Bialecki  created SOLR-12075:


 Summary: TestLargeCluster is too flaky
 Key: SOLR-12075
 URL: https://issues.apache.org/jira/browse/SOLR-12075
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: AutoScaling
Reporter: Andrzej Bialecki 
Assignee: Andrzej Bialecki 


This test is failing a lot in jenkins builds, with two types of failures:
 * specific test method failures - this may be caused by either bugs in the 
autoscaling code, bugs in the simulator or timing issues. It should be possible 
to narrow down the cause by using different speeds of simulated time.
 * suite-level failures due to leaked threads - most of these failures indicate 
the ongoing Policy calculations, eg:
{code}
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.autoscaling.sim.TestLargeCluster: 
  1) Thread[id=21406, name=AutoscalingActionExecutor-7277-thread-1, 
state=RUNNABLE, group=TGRP-TestLargeCluster]
   at java.util.ArrayList.iterator(ArrayList.java:834)
   at org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:131)
   at org.apache.solr.common.util.Utils.makeDeepCopy(Utils.java:110)
   at org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:92)
   at org.apache.solr.common.util.Utils.makeDeepCopy(Utils.java:108)
   at org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:92)
   at org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:74)
   at org.apache.solr.client.solrj.cloud.autoscaling.Row.copy(Row.java:91)
   at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.lambda$getMatrixCopy$1(Policy.java:297)
   at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session$$Lambda$466/1757323495.apply(Unknown
 Source)
   at 
java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
   at 
java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
   at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
   at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
   at 
java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
   at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
   at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
   at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.getMatrixCopy(Policy.java:298)
   at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.copy(Policy.java:287)
   at 
org.apache.solr.client.solrj.cloud.autoscaling.Row.removeReplica(Row.java:156)
   at 
org.apache.solr.client.solrj.cloud.autoscaling.MoveReplicaSuggester.tryEachNode(MoveReplicaSuggester.java:60)
   at 
org.apache.solr.client.solrj.cloud.autoscaling.MoveReplicaSuggester.init(MoveReplicaSuggester.java:34)
   at 
org.apache.solr.client.solrj.cloud.autoscaling.Suggester.getSuggestion(Suggester.java:129)
   at 
org.apache.solr.cloud.autoscaling.ComputePlanAction.process(ComputePlanAction.java:98)
   at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers.lambda$null$3(ScheduledTriggers.java:307)
   at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers$$Lambda$439/951218654.run(Unknown
 Source)
   at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
   at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
   at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$9/1677458082.run(Unknown
 Source)
   at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([C6FA0364D13DAFCC]:0)
{code}
It's possible that somewhere an InterruptedException is caught and not 
propagated so that the Policy calculations don't terminate when the thread is 
interrupted when closing parent components.
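For illustration, here is a minimal, self-contained sketch (not Solr code; all names are invented) of that anti-pattern: a task that swallows InterruptedException keeps its executor thread alive after shutdownNow(), which is exactly the kind of leftover thread a suite-level leak check reports.
{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustrative sketch only; class and method names are invented for this example.
public class InterruptLeakSketch {

    public static void main(String[] args) throws InterruptedException {
        // Daemon thread factory so this demo JVM can still exit at the end.
        ExecutorService pool = Executors.newSingleThreadExecutor(r -> {
            Thread t = new Thread(r, "demo-policy-calculator");
            t.setDaemon(true);
            return t;
        });

        pool.execute(() -> {
            while (true) {
                try {
                    heavyCalculation();   // stands in for a long Policy/Session computation
                } catch (InterruptedException e) {
                    // BAD: swallowing the exception; the interrupt status is already
                    // cleared, so the loop keeps running after shutdownNow().
                    // GOOD would be: Thread.currentThread().interrupt(); return;
                }
            }
        });

        pool.shutdownNow();                           // interrupts the worker thread...
        pool.awaitTermination(1, TimeUnit.SECONDS);   // ...but the task never exits
        System.out.println("terminated: " + pool.isTerminated());  // prints: terminated: false
    }

    private static void heavyCalculation() throws InterruptedException {
        Thread.sleep(10);   // placeholder for work that observes interruption
    }
}
{code}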



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1721 - Failure!

2018-03-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1721/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest.testInactiveShardCleanup

Error Message:
cleanup action didn't run

Stack Trace:
java.lang.AssertionError: cleanup action didn't run
at 
__randomizedtesting.SeedInfo.seed([60263439C2BC8D2A:7D0AF44BA3FFAA21]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest.testInactiveShardCleanup(ScheduledMaintenanceTriggerTest.java:197)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 14181 lines...]
   [junit4] Suite: 
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest
   [junit4]   2> 3308558 INFO  

[jira] [Assigned] (SOLR-12071) PULL replica cores initialisation fails when Cdcr enabled

2018-03-09 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker reassigned SOLR-12071:


Assignee: Varun Thacker

> PULL replica cores initialisation fails when Cdcr enabled
> -
>
> Key: SOLR-12071
> URL: https://issues.apache.org/jira/browse/SOLR-12071
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.2
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
>Priority: Major
> Attachments: SOLR-12071.patch
>
>
> {{CdcrUpdateLog}} never gets picked up for PULL type replicas, and hence core 
> initialisation fails when a collection is CDCR enabled; obviously a PULL 
> replica can't be a leader, only a follower.
> {code}
>[junit4]   2> 47345 INFO  (qtp1256767285-28) [n:127.0.0.1:50646_solr 
> c:cdcr-cluster2 s:shard1 r:core_node4 x:cdcr-cluster2_shard1_replica_p2] 
> o.a.s.m.SolrMetricManager Closing metric reporters for 
> registry=solr.collection.cdcr-cluster2.shard1.leader, tag=895903268
>[junit4]   2> 47353 INFO  
> (searcherExecutor-32-thread-1-processing-n:127.0.0.1:50646_solr 
> x:cdcr-cluster2_shard1_replica_t1 s:shard1 c:cdcr-cluster2 r:core_node3) 
> [n:127.0.0.1:50646_solr c:cdcr-cluster2 s:shard1 r:core_node3 
> x:cdcr-cluster2_shard1_replica_t1] o.a.s.c.SolrCore 
> [cdcr-cluster2_shard1_replica_t1] Registered new searcher 
> Searcher@638c50cd[cdcr-cluster2_shard1_replica_t1] 
> main{ExitableDirectoryReader(UninvertingDirectoryReader())}
>[junit4]   2> 47353 ERROR (qtp1256767285-28) [n:127.0.0.1:50646_solr 
> c:cdcr-cluster2 s:shard1 r:core_node4 x:cdcr-cluster2_shard1_replica_p2] 
> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: Error 
> CREATEing SolrCore 'cdcr-cluster2_shard1_replica_p2': Unable to create core 
> [cdcr-cluster2_shard1_replica_p2] Caused by: Solr instance is not configured 
> with the cdcr update log.
>[junit4]   2>  at 
> org.apache.solr.core.CoreContainer.create(CoreContainer.java:993)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:90)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:358)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:389)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:174)
>[junit4]   2>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:195)
>[junit4]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:736)
>[junit4]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:717)
>[junit4]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:498)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:384)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:330)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>[junit4]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
>[junit4]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:166)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>[junit4]   2>  at 
> 

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_162) - Build # 1498 - Still Failing!

2018-03-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1498/
Java: 64bit/jdk1.8.0_162 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 60836 lines...]
-ecj-javadoc-lint-tests:
[mkdir] Created dir: /tmp/ecj2091544621
 [ecj-lint] Compiling 874 source files to /tmp/ecj2091544621
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/analysis/TokenizerChainTest.java
 (at line 37)
 [ecj-lint] TokenizerChain tokenizerChain = new TokenizerChain(
 [ecj-lint]^^
 [ecj-lint] Resource leak: 'tokenizerChain' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 2. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/TestCloudConsistency.java
 (at line 23)
 [ecj-lint] import java.net.URL;
 [ecj-lint]
 [ecj-lint] The import java.net.URL is never used
 [ecj-lint] --
 [ecj-lint] 3. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/TestCloudConsistency.java
 (at line 39)
 [ecj-lint] import org.apache.solr.common.cloud.ZkStateReader;
 [ecj-lint]^^
 [ecj-lint] The import org.apache.solr.common.cloud.ZkStateReader is never used
 [ecj-lint] --
 [ecj-lint] 3 problems (2 errors, 1 warning)

BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/build.xml:618: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/build.xml:101: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build.xml:682: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/common-build.xml:2088: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/common-build.xml:2121: 
Compile failed; see the compiler error output for details.

Total time: 74 minutes 4 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Updated] (SOLR-12063) Fix tlog entry indexes in UpdateLog for CDCR to work smoothly.

2018-03-09 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-12063:

Attachment: SOLR-12063.patch

> Fix tlog entry indexes in UpdateLog for CDCR to work smoothly.
> --
>
> Key: SOLR-12063
> URL: https://issues.apache.org/jira/browse/SOLR-12063
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.2
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
>Priority: Major
> Attachments: SOLR-12063.patch, SOLR-12063.patch, SOLR-12063.patch, 
> SOLR-12063.patch, test-report-PeerSyncTest, test-report-TestStressRecovery
>
>
> In *UpdateLog*, {{RecentUpdates}} reads tlog entries, and throughout the 
> project the entry indexes for the various operations are consistent, except in 
> this part. Since we added a new entry element to TransactionLog for CDCR, the 
> read operations in the {{update()}} method of {{RecentUpdates}} rightfully 
> throw errors because elements are read from the wrong indexes of the tlog 
> entry. The tlog entry indexes should be consistent throughout.
> {code}
>   [beaster]   2> 27394 WARN  (qtp97093533-72) [n:127.0.0.1:44658_solr 
> c:cdcr-cluster1 s:shard1 r:core_node3 x:cdcr-cluster1_shard1_replica_n1] 
> o.a.s.u.UpdateLog Unexpected log entry or corrupt log.  Entry=[2, 
> -1594312216007409664, [B@28e6859c, true]
>   [beaster]   2> java.lang.ClassCastException: java.lang.Boolean cannot be 
> cast to [B
>   [beaster]   2>  at 
> org.apache.solr.update.UpdateLog$RecentUpdates.update(UpdateLog.java:1443)
>   [beaster]   2>  at 
> org.apache.solr.update.UpdateLog$RecentUpdates.<init>(UpdateLog.java:1340)
>   [beaster]   2>  at 
> org.apache.solr.update.UpdateLog.getRecentUpdates(UpdateLog.java:1513)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler.handleShardCheckpointAction(CdcrRequestHandler.java:448)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler.handleRequestBody(CdcrRequestHandler.java:198)
>   [beaster]   2>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:195)
>   [beaster]   2>  at 
> org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
>   [beaster]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
>   [beaster]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:517)
>   [beaster]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:384)
>   [beaster]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:330)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
>   [beaster]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
> {code}
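To make the failure mode concrete, here is a small self-contained sketch (the entry layout below is invented for illustration, not Solr's actual UpdateLog format) of how positional entries break when a writer appends an extra element while a reader keeps using old hard-coded indexes; it produces the same Boolean-to-byte[] ClassCastException seen in the excerpt above.
{code}
import java.util.Arrays;
import java.util.List;

// Illustrative sketch only; the entry layout below is invented, not Solr's.
public class PositionalEntrySketch {

    public static void main(String[] args) {
        // Old layout assumed by the reader: [flags, version, idBytes]
        // New layout written with an extra CDCR marker: [flags, version, idBytes, cdcrFlag]
        List<Object> entry = Arrays.asList(2, -1594312216007409664L,
                "some-id".getBytes(), Boolean.TRUE);

        // A reader that grabs "the last element" as the id now gets the Boolean marker.
        Object last = entry.get(entry.size() - 1);
        try {
            byte[] idBytes = (byte[]) last;   // same failure mode as the UpdateLog warning
            System.out.println("id has " + idBytes.length + " bytes");
        } catch (ClassCastException e) {
            System.out.println("wrong index: " + e.getMessage());
        }

        // Reading by an explicit index that writer and reader share works as expected.
        byte[] idBytes = (byte[]) entry.get(2);
        System.out.println("index 2 -> " + idBytes.length + " id bytes");
    }
}
{code}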



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 226 - Unstable

2018-03-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/226/

[...truncated 35 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-master/2409/consoleText

[repro] Revision: 7dfb04ee5e9f973fbad20c529ec091c201743398

[repro] Repro line:  ant test  -Dtestcase=TestLargeCluster 
-Dtests.seed=55C4D64435531959 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=hr-HR -Dtests.timezone=America/Indiana/Vincennes 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
b7b638e00b70d3fe6d4ebcbb9bf3fe3c064209b1
[repro] git fetch
[repro] git checkout 7dfb04ee5e9f973fbad20c529ec091c201743398

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestLargeCluster
[repro] ant compile-test

[...truncated 3292 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestLargeCluster" -Dtests.showOutput=onerror  
-Dtests.seed=55C4D64435531959 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=hr-HR -Dtests.timezone=America/Indiana/Vincennes 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 8819 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   2/5 failed: org.apache.solr.cloud.autoscaling.sim.TestLargeCluster
[repro] git checkout b7b638e00b70d3fe6d4ebcbb9bf3fe3c064209b1

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-12063) Fix tlog entry indexes in UpdateLog for CDCR to work smoothly.

2018-03-09 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393422#comment-16393422
 ] 

Varun Thacker commented on SOLR-12063:
--

Somewhere in the test case we are printing all the data in zookeeper. This 
happens multiple times and fills the console. Can we figure out where this is 
happening and fix it?

 

If I take the latest patch, revert the fix to the UpdateLog, and run 
CdcrRequestHandlerTest, it always succeeds. Can we reproduce this problem with 
a test case so that the fix can be validated?

> Fix tlog entry indexes in UpdateLog for CDCR to work smoothly.
> --
>
> Key: SOLR-12063
> URL: https://issues.apache.org/jira/browse/SOLR-12063
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.2
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
>Priority: Major
> Attachments: SOLR-12063.patch, SOLR-12063.patch, SOLR-12063.patch, 
> test-report-PeerSyncTest, test-report-TestStressRecovery
>
>
> In *UpdateLog*, {{RecentUpdates}} reads tlog entries, and throughout the 
> project the entry indexes for the various operations are consistent, except in 
> this part. Since we added a new entry element to TransactionLog for CDCR, the 
> read operations in the {{update()}} method of {{RecentUpdates}} rightfully 
> throw errors because elements are read from the wrong indexes of the tlog 
> entry. The tlog entry indexes should be consistent throughout.
> {code}
>   [beaster]   2> 27394 WARN  (qtp97093533-72) [n:127.0.0.1:44658_solr 
> c:cdcr-cluster1 s:shard1 r:core_node3 x:cdcr-cluster1_shard1_replica_n1] 
> o.a.s.u.UpdateLog Unexpected log entry or corrupt log.  Entry=[2, 
> -1594312216007409664, [B@28e6859c, true]
>   [beaster]   2> java.lang.ClassCastException: java.lang.Boolean cannot be 
> cast to [B
>   [beaster]   2>  at 
> org.apache.solr.update.UpdateLog$RecentUpdates.update(UpdateLog.java:1443)
>   [beaster]   2>  at 
> org.apache.solr.update.UpdateLog$RecentUpdates.<init>(UpdateLog.java:1340)
>   [beaster]   2>  at 
> org.apache.solr.update.UpdateLog.getRecentUpdates(UpdateLog.java:1513)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler.handleShardCheckpointAction(CdcrRequestHandler.java:448)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler.handleRequestBody(CdcrRequestHandler.java:198)
>   [beaster]   2>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:195)
>   [beaster]   2>  at 
> org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
>   [beaster]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
>   [beaster]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:517)
>   [beaster]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:384)
>   [beaster]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:330)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
>   [beaster]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 2410 - Failure

2018-03-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2410/

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.autoscaling.sim.TestLargeCluster

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.autoscaling.sim.TestLargeCluster: 1) Thread[id=21406, 
name=AutoscalingActionExecutor-7277-thread-1, state=RUNNABLE, 
group=TGRP-TestLargeCluster] at 
java.util.ArrayList.iterator(ArrayList.java:834) at 
org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:131) at 
org.apache.solr.common.util.Utils.makeDeepCopy(Utils.java:110) at 
org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:92) at 
org.apache.solr.common.util.Utils.makeDeepCopy(Utils.java:108) at 
org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:92) at 
org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:74) at 
org.apache.solr.client.solrj.cloud.autoscaling.Row.copy(Row.java:91) at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.lambda$getMatrixCopy$1(Policy.java:297)
 at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session$$Lambda$466/1757323495.apply(Unknown
 Source) at 
java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)   
  at 
java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)  
   at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) 
at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
 at 
java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)  
   at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) 
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)   
  at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.getMatrixCopy(Policy.java:298)
 at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.copy(Policy.java:287)
 at 
org.apache.solr.client.solrj.cloud.autoscaling.Row.removeReplica(Row.java:156)  
   at 
org.apache.solr.client.solrj.cloud.autoscaling.MoveReplicaSuggester.tryEachNode(MoveReplicaSuggester.java:60)
 at 
org.apache.solr.client.solrj.cloud.autoscaling.MoveReplicaSuggester.init(MoveReplicaSuggester.java:34)
 at 
org.apache.solr.client.solrj.cloud.autoscaling.Suggester.getSuggestion(Suggester.java:129)
 at 
org.apache.solr.cloud.autoscaling.ComputePlanAction.process(ComputePlanAction.java:98)
 at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers.lambda$null$3(ScheduledTriggers.java:307)
 at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers$$Lambda$439/951218654.run(Unknown
 Source) at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
 at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$9/1677458082.run(Unknown
 Source) at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.autoscaling.sim.TestLargeCluster: 
   1) Thread[id=21406, name=AutoscalingActionExecutor-7277-thread-1, 
state=RUNNABLE, group=TGRP-TestLargeCluster]
at java.util.ArrayList.iterator(ArrayList.java:834)
at org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:131)
at org.apache.solr.common.util.Utils.makeDeepCopy(Utils.java:110)
at org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:92)
at org.apache.solr.common.util.Utils.makeDeepCopy(Utils.java:108)
at org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:92)
at org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:74)
at org.apache.solr.client.solrj.cloud.autoscaling.Row.copy(Row.java:91)
at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.lambda$getMatrixCopy$1(Policy.java:297)
at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session$$Lambda$466/1757323495.apply(Unknown
 Source)
at 
java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at 
java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at 
java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at 

[jira] [Updated] (SOLR-11629) CloudSolrClient.Builder should accept a zk host

2018-03-09 Thread Jason Gerlowski (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski updated SOLR-11629:
---
Attachment: SOLR-11629.patch

> CloudSolrClient.Builder should accept a zk host
> ---
>
> Key: SOLR-11629
> URL: https://issues.apache.org/jira/browse/SOLR-11629
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Jason Gerlowski
>Priority: Major
> Attachments: SOLR-11629.patch, SOLR-11629.patch, SOLR-11629.patch, 
> SOLR-11629.patch, SOLR-11629.patch
>
>
> Today we need to create an empty builder and then either call withZkHost or 
> withSolrUrl:
> {code}
> SolrClient solrClient = new 
> CloudSolrClient.Builder().withZkHost("localhost:9983").build();
> solrClient.request(updateRequest, "gettingstarted");
> {code}
> What if we have two constructors, one that accepts a zkHost and one that 
> accepts a SolrUrl?
> The advantages that I can think of are:
> - It will be obvious to users that we support two mechanisms of creating a 
> CloudSolrClient. The SolrUrl option is cool: applications don't need to 
> know about ZooKeeper, and new users will learn about this. Maybe our 
> examples in the ref guide should use this?
> - Today people can set both zkHost and solrUrl, but CloudSolrClient can only 
> utilize one of them.
> HttpClient's Builder accepts the host 
> {code}
> HttpSolrClient client = new 
> HttpSolrClient.Builder("http://localhost:8983/solr").build();
> client.request(updateRequest, "techproducts");
> {code}
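As a toy illustration of the constructor idea (the class below is invented, not SolrJ code): two plain constructors that both take a String would have identical signatures, so such a design needs either distinct parameter types or distinct factory methods, along these lines.
{code}
import java.util.Objects;

// Toy sketch only; not SolrJ's CloudSolrClient.Builder.
final class ToyCloudClientBuilder {
    private final String zkHost;   // e.g. "localhost:9983"
    private final String solrUrl;  // e.g. "http://localhost:8983/solr"

    private ToyCloudClientBuilder(String zkHost, String solrUrl) {
        this.zkHost = zkHost;
        this.solrUrl = solrUrl;
    }

    // One explicit entry point per connection mechanism.
    static ToyCloudClientBuilder fromZkHost(String zkHost) {
        return new ToyCloudClientBuilder(Objects.requireNonNull(zkHost), null);
    }

    static ToyCloudClientBuilder fromSolrUrl(String solrUrl) {
        return new ToyCloudClientBuilder(null, Objects.requireNonNull(solrUrl));
    }

    String build() {   // stands in for building a real client
        return zkHost != null ? "connect via ZooKeeper at " + zkHost
                              : "connect via Solr URL " + solrUrl;
    }
}
{code}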



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8197) Make top-k queries fast when static scoring signals are incorporated into the score

2018-03-09 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393358#comment-16393358
 ] 

Robert Muir commented on LUCENE-8197:
-

I'm confused about the first method: why wouldn't it simply take 
{{featureName}} as an argument and use actual term statistics from the index?

> Make top-k queries fast when static scoring signals are incorporated into the 
> score
> ---
>
> Key: LUCENE-8197
> URL: https://issues.apache.org/jira/browse/LUCENE-8197
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (8.0)
>
> Attachments: LUCENE-8197.patch, LUCENE-8197.patch, LUCENE-8197.patch
>
>
> Block-max WAND (LUCENE-8135) and some earlier issues made Lucene faster at 
> computing the top-k matches of boolean queries.
> It is quite frequent that users want to improve ranking and end up scoring 
> with a formula that could look like {{bm25_score + w * log(alpha + 
> pagerank)}} (w and alpha being constants, and pagerank being a per-document 
> field value). You could do this with doc values and {{FunctionScoreQuery}} 
> but unfortunately this will remove the ability to optimize top-k queries 
> since the scoring formula becomes opaque to Lucene.
> I'd like to add a new field that allows storing such scoring signals as term 
> frequencies, and new queries that could produce {{log(alpha + pagerank)}} as 
> a score. Then implementing the above formula can be done by boosting this 
> query with a boost equal to {{w}} and adding this boosted query as a SHOULD 
> clause of a {{BooleanQuery}}. This would give Lucene the ability to compute 
> top-k hits faster, especially but not only if the index is sorted by 
> decreasing pagerank.
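For a rough picture of the intended usage, here is a sketch assuming the FeatureField / newLogQuery names from the attached patches; the exact class names and signatures are assumptions, not something confirmed in this thread.
{code}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.FeatureField;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.store.RAMDirectory;

// Sketch only; FeatureField/newLogQuery follow the proposed API and may differ
// from what is eventually committed.
public class StaticSignalSketch {

    public static void main(String[] args) throws Exception {
        RAMDirectory dir = new RAMDirectory();
        try (IndexWriter w = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
            Document doc = new Document();
            doc.add(new TextField("body", "apache lucene search", Field.Store.NO));
            // The static signal is stored as a term frequency inside a dedicated field.
            doc.add(new FeatureField("features", "pagerank", 42f));
            w.addDocument(doc);
        }
        try (IndexReader reader = DirectoryReader.open(dir)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            Query text = new TermQuery(new Term("body", "lucene"));              // bm25_score part
            Query signal = FeatureField.newLogQuery("features", "pagerank",      // w * log(alpha + pagerank)
                    0.3f /* w */, 1f /* alpha */);
            Query q = new BooleanQuery.Builder()
                    .add(text, BooleanClause.Occur.MUST)
                    .add(signal, BooleanClause.Occur.SHOULD)
                    .build();
            System.out.println("hits: " + searcher.search(q, 10).totalHits);
        }
    }
}
{code}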



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk-9) - Build # 496 - Unstable!

2018-03-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/496/
Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testGammaDistribution

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([578EDB82A4B28CD8:6AF4F02C87CA26CF]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testGammaDistribution(StreamExpressionTest.java:8649)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 15671 lines...]
   [junit4] Suite: org.apache.solr.client.solrj.io.stream.StreamExpressionTest
   [junit4]   2> Creating dataDir: 

[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_162) - Build # 21604 - Failure!

2018-03-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21604/
Java: 32bit/jdk1.8.0_162 -client -XX:+UseG1GC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.autoscaling.sim.TestLargeCluster

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.autoscaling.sim.TestLargeCluster: 1) Thread[id=9903, 
name=AutoscalingActionExecutor-3208-thread-1, state=RUNNABLE, 
group=TGRP-TestLargeCluster] at 
java.util.HashMap.putVal(HashMap.java:629) at 
java.util.HashMap.put(HashMap.java:612) at 
org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:92) at 
org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:74) at 
org.apache.solr.client.solrj.cloud.autoscaling.Row.copy(Row.java:91) at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.lambda$getMatrixCopy$1(Policy.java:297)
 at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session$$Lambda$587/8877574.apply(Unknown
 Source) at 
java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)   
  at 
java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)  
   at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) 
at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
 at 
java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)  
   at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) 
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)   
  at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.getMatrixCopy(Policy.java:298)
 at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.copy(Policy.java:287)
 at 
org.apache.solr.client.solrj.cloud.autoscaling.Row.addReplica(Row.java:122) 
at 
org.apache.solr.client.solrj.cloud.autoscaling.MoveReplicaSuggester.tryEachNode(MoveReplicaSuggester.java:59)
 at 
org.apache.solr.client.solrj.cloud.autoscaling.MoveReplicaSuggester.init(MoveReplicaSuggester.java:34)
 at 
org.apache.solr.client.solrj.cloud.autoscaling.Suggester.getSuggestion(Suggester.java:129)
 at 
org.apache.solr.cloud.autoscaling.ComputePlanAction.process(ComputePlanAction.java:98)
 at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers.lambda$null$3(ScheduledTriggers.java:307)
 at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers$$Lambda$544/18045861.run(Unknown
 Source) at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
 at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$10/9333754.run(Unknown
 Source) at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.autoscaling.sim.TestLargeCluster: 
   1) Thread[id=9903, name=AutoscalingActionExecutor-3208-thread-1, 
state=RUNNABLE, group=TGRP-TestLargeCluster]
at java.util.HashMap.putVal(HashMap.java:629)
at java.util.HashMap.put(HashMap.java:612)
at org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:92)
at org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:74)
at org.apache.solr.client.solrj.cloud.autoscaling.Row.copy(Row.java:91)
at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.lambda$getMatrixCopy$1(Policy.java:297)
at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session$$Lambda$587/8877574.apply(Unknown
 Source)
at 
java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at 
java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at 
java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at 
java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.getMatrixCopy(Policy.java:298)
at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.copy(Policy.java:287)
at 
org.apache.solr.client.solrj.cloud.autoscaling.Row.addReplica(Row.java:122)
at 

[GitHub] lucene-solr pull request #323: SOLR-11731: LatLonPointSpatialField could be ...

2018-03-09 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/323#discussion_r173525391
  
--- Diff: 
solr/core/src/java/org/apache/solr/schema/LatLonPointSpatialField.java ---
@@ -75,8 +77,16 @@ protected SpatialStrategy newSpatialStrategy(String 
fieldName) {
 return new LatLonPointSpatialStrategy(ctx, fieldName, 
schemaField.indexed(), schemaField.hasDocValues());
   }
   
-  public String geoValueToStringValue(long value) {
-return new String(decodeLatitudeCeil(value) + "," + 
decodeLongitudeCeil(value));
+  /**
+   * Converts to "lat, lon"
+   * @param value Non-null; stored location field data
+   * @return Non-null; "lat, lon" with 6 decimal point precision
--- End diff --

Why 6 decimal points?  Is that sufficient to represent the data to as much 
precision as is decoded?  Perhaps instead of putting the constant '6' in the 
code, it should be calculated so that we can see how 6 is arrived at.  What 
does that translate to in the metric system?
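For reference, a quick back-of-envelope sketch, assuming the usual 32-bit-per-coordinate lat/lon encoding; the numbers are my own arithmetic, not taken from the patch.
{code}
// Back-of-envelope only; assumes each coordinate is encoded into 32 bits.
public class LatLonPrecisionSketch {
    public static void main(String[] args) {
        double lonStepDeg = 360.0 / Math.pow(2, 32);                    // ~8.4e-8 degrees per encoded step
        int decimalsNeeded = (int) Math.ceil(-Math.log10(lonStepDeg));  // 8 decimal places to keep full precision
        double metersPerDegree = 111_320.0;                             // rough length of one degree at the equator
        System.out.printf("encoding step: %.2e deg (~%.1f mm), needs %d decimal places%n",
                lonStepDeg, lonStepDeg * metersPerDegree * 1000, decimalsNeeded);
        // 6 decimal places is ~0.11 m granularity at the equator, coarser than the
        // ~9 mm encoding step, so some decoded precision is dropped at 6 decimals.
    }
}
{code}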


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #323: SOLR-11731: LatLonPointSpatialField could be ...

2018-03-09 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/323#discussion_r173524870
  
--- Diff: 
solr/core/src/java/org/apache/solr/search/SolrDocumentFetcher.java ---
@@ -486,16 +486,14 @@ private Object decodeDVField(int localId, LeafReader 
leafReader, String fieldNam
   case SORTED_NUMERIC:
 final SortedNumericDocValues numericDv = 
leafReader.getSortedNumericDocValues(fieldName);
 if (numericDv != null && numericDv.advance(localId) == localId) {
-  if (schemaField.getType() instanceof LatLonPointSpatialField) {
-long number = numericDv.nextValue();
-return ((LatLonPointSpatialField) 
schemaField.getType()).geoValueToStringValue(number);
-  }
   final List outValues = new 
ArrayList<>(numericDv.docValueCount());
   for (int i = 0; i < numericDv.docValueCount(); i++) {
 long number = numericDv.nextValue();
 Object value = decodeNumberFromDV(schemaField, number, true);
 // return immediately if the number is not decodable, hence 
won't return an empty list.
 if (value == null) return null;
+// return the value as "lat, lon" if its not multi-valued
--- End diff --

This is not consistent with how Solr normally does things.  If the field 
type is declared as multiValued then we normally always return a list; 
otherwise we never do.  Here I think you're making that vary per document, 
depending on how many values the document has.
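
To make that concrete, a minimal sketch of the convention, written against the 
surrounding decodeDVField code (an illustration of this comment, not the patch; 
it assumes schemaField.multiValued() is the deciding flag and reuses the 
decodeNumberFromDV helper from the diff context):

    // Decode every doc value, then let the schema -- not the per-document
    // value count -- decide whether a list or a single value is returned.
    final List<Object> outValues = new ArrayList<>(numericDv.docValueCount());
    for (int i = 0; i < numericDv.docValueCount(); i++) {
      long number = numericDv.nextValue();
      Object value = decodeNumberFromDV(schemaField, number, true);
      if (value == null) return null;   // keep the "no empty list" behaviour
      outValues.add(value);
    }
    // multiValued fields always return a list; single-valued fields never do
    return schemaField.multiValued() ? outValues : outValues.get(0);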


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #323: SOLR-11731: LatLonPointSpatialField could be ...

2018-03-09 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/323#discussion_r173526922
  
--- Diff: solr/core/src/test/org/apache/solr/search/TestSolr4Spatial2.java 
---
@@ -120,21 +120,27 @@ public void testRptWithGeometryGeo3dField() throws 
Exception {
   
   @Test
   public void testLatLonRetrieval() throws Exception {
-assertU(adoc("id", "0",
-"llp_1_dv_st", "-75,41",
-"llp_1_dv", "-80.0,20.0",
-"llp_1_dv_dvasst", "40.299599,-74.082728"));
+assertU(adoc("id", "0", "llp_1_dv_st", "-75,41")); // stored
--- End diff --

Please also test -90 and +90, -180 and +180.  This will help ensure there 
aren't edge cases (literally) in the DV decoding logic.
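
Something along these lines, reusing the field from the test above (the extra 
documents and ids here are just illustrative, not part of the patch):

    // Hypothetical boundary-value documents for the DV decoding logic:
    assertU(adoc("id", "1", "llp_1_dv_st", "90,180"));    // north pole, date line
    assertU(adoc("id", "2", "llp_1_dv_st", "-90,-180"));  // south pole, date line
    assertU(adoc("id", "3", "llp_1_dv_st", "0,0"));       // equator / prime meridian
    assertU(commit());
    // ...then assert each one round-trips through stored and docValues
    // retrieval within the documented decode precision.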


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #323: SOLR-11731: LatLonPointSpatialField could be ...

2018-03-09 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/323#discussion_r173526505
  
--- Diff: 
solr/core/src/java/org/apache/solr/schema/LatLonPointSpatialField.java ---
@@ -75,8 +77,16 @@ protected SpatialStrategy newSpatialStrategy(String 
fieldName) {
 return new LatLonPointSpatialStrategy(ctx, fieldName, 
schemaField.indexed(), schemaField.hasDocValues());
   }
   
-  public String geoValueToStringValue(long value) {
-return new String(decodeLatitudeCeil(value) + "," + 
decodeLongitudeCeil(value));
+  /**
+   * Converts to "lat, lon"
+   * @param value Non-null; stored location field data
+   * @return Non-null; "lat, lon" with 6 decimal point precision
+   */
+  public static String decodeDocValueToString(long value) {
+double latitudeDecoded = BigDecimal.valueOf(decodeLatitude((int) 
(value >> 32))).setScale(6, HALF_UP).doubleValue();
--- End diff --

Let's have some comments explaining why this algorithm here is what it is.  
Why HALF_UP?

Can we skip the doubleValue and just do toPlainString (avoiding the exponent 
notation of toString), since we're composing a string in the end?  In other 
words, let's avoid the pointless double primitive intermediary.
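
For illustration, that suggestion might look roughly like this (a sketch only; 
the scale of 6 and HALF_UP are carried over from the patch, not endorsed, and 
it assumes java.math.BigDecimal/RoundingMode plus the GeoEncodingUtils decode 
methods used above are imported):

    // Stay in BigDecimal and use toPlainString() so no double primitive
    // (and no exponent notation) is involved when composing the string.
    String lat = BigDecimal.valueOf(decodeLatitude((int) (value >> 32)))
        .setScale(6, RoundingMode.HALF_UP)
        .toPlainString();
    String lon = BigDecimal.valueOf(decodeLongitude((int) value))
        .setScale(6, RoundingMode.HALF_UP)
        .toPlainString();
    return lat + "," + lon;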


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7887) Upgrade Solr to use log4j2 -- log4j 1 now officially end of life

2018-03-09 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393194#comment-16393194
 ] 

Erick Erickson edited comment on SOLR-7887 at 3/9/18 6:11 PM:
--

bq: The size of the patches were in the 190kb

Yeah, that puzzled me too. "All tests pass". I took a look, and the earlier 
patch has things like:

19K Mar  9 10:03 audience-annotations-LICENSE-ASL.txt.

These are part of Yetus, so I'm guessing your patch has some cruft in it 
unrelated to log4j2. I took the Yetus dependencies out as per Tomás' comments.


was (Author: erickerickson):
bq: The size of the patches were in the 190kb

Yeah, that puzzled me too. "All tests pass", but I'll take a look to see if I 
can identify why the new one is smaller.

> Upgrade Solr to use log4j2 -- log4j 1 now officially end of life
> 
>
> Key: SOLR-7887
> URL: https://issues.apache.org/jira/browse/SOLR-7887
> Project: Solr
>  Issue Type: Task
>Affects Versions: 5.2.1
>Reporter: Shawn Heisey
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-7887-WIP.patch, SOLR-7887-eoe-review.patch, 
> SOLR-7887-eoe-review.patch, SOLR-7887.patch, SOLR-7887.patch, 
> SOLR-7887.patch, SOLR-7887.patch, SOLR-7887.patch, SOLR-7887.patch, 
> SOLR-7887.patch, SOLR-7887.patch, SOLR-7887.patch, SOLR-7887.patch, 
> SOLR-7887.patch, SOLR-7887.patch
>
>
> The logging services project has officially announced the EOL of log4j 1:
> https://blogs.apache.org/foundation/entry/apache_logging_services_project_announces
> In the official binary jetty deployment, we use log4j 1.2 as our final 
> logging destination, so the admin UI has a log watcher that actually uses 
> log4j and java.util.logging classes.  That will need to be extended to add 
> log4j2.  I think that might be the largest pain point to this upgrade.
> There is some crossover between log4j2 and slf4j.  Figuring out exactly which 
> jars need to be in the lib/ext directory will take some research.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 9 - Failure

2018-03-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/9/

5 tests failed.
FAILED:  
org.apache.solr.update.processor.AtomicUpdateProcessorFactoryTest.testMultipleThreads

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([4EBC20BAFFC6E5D3:628E573773029357]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:904)
at 
org.apache.solr.update.processor.AtomicUpdateProcessorFactoryTest.testMultipleThreads(AtomicUpdateProcessorFactoryTest.java:260)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

00


request was:q=int_i:18=xml
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:897)
... 40 more


FAILED:  org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.testHistory


[jira] [Commented] (SOLR-7887) Upgrade Solr to use log4j2 -- log4j 1 now officially end of life

2018-03-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393282#comment-16393282
 ] 

Tomás Fernández Löbbe commented on SOLR-7887:
-

If you upload the new patch to the old code review it should allow people to 
easily view changes between the versions

> Upgrade Solr to use log4j2 -- log4j 1 now officially end of life
> 
>
> Key: SOLR-7887
> URL: https://issues.apache.org/jira/browse/SOLR-7887
> Project: Solr
>  Issue Type: Task
>Affects Versions: 5.2.1
>Reporter: Shawn Heisey
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-7887-WIP.patch, SOLR-7887-eoe-review.patch, 
> SOLR-7887-eoe-review.patch, SOLR-7887.patch, SOLR-7887.patch, 
> SOLR-7887.patch, SOLR-7887.patch, SOLR-7887.patch, SOLR-7887.patch, 
> SOLR-7887.patch, SOLR-7887.patch, SOLR-7887.patch, SOLR-7887.patch, 
> SOLR-7887.patch, SOLR-7887.patch
>
>
> The logging services project has officially announced the EOL of log4j 1:
> https://blogs.apache.org/foundation/entry/apache_logging_services_project_announces
> In the official binary jetty deployment, we use log4j 1.2 as our final 
> logging destination, so the admin UI has a log watcher that actually uses 
> log4j and java.util.logging classes.  That will need to be extended to add 
> log4j2.  I think that might be the largest pain point to this upgrade.
> There is some crossover between log4j2 and slf4j.  Figuring out exactly which 
> jars need to be in the lib/ext directory will take some research.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11982) Add support for indicating preferred replica types for queries

2018-03-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393271#comment-16393271
 ] 

Tomás Fernández Löbbe commented on SOLR-11982:
--

I think it should be a single parameter that can address the different types of 
preference, not a different parameter for each type. In general I like the 
syntax discussed here so far (though I still have the concern about the pipe 
that I mentioned before). I also feel it falls into the "shards" family of 
parameters: {{shards}} determines the replicas to which the distributed query 
should be sent, {{shards.info}} tells users which replicas responded to a 
distributed query, etc. As I said before, I think it is close to what's 
discussed in SOLR-10880, and the naming should be consistent. I'm OK with 
changing the word "sort" if you think it can confuse users. Do you think 
something like {{shards.preference}} would be better, [~elyograg]?
{quote} I don't think it needs to since the client, whatever it is, has full 
control over where to send the query.
{quote}
Unless they are using CloudSolrClient... but yes, in that case users can choose 
not to use it. In any case, I think that can be left to another Jira. We should 
include a big note in the docs, though.

> Add support for indicating preferred replica types for queries
> --
>
> Key: SOLR-11982
> URL: https://issues.apache.org/jira/browse/SOLR-11982
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.3, master (8.0)
>Reporter: Ere Maijala
>Priority: Minor
>  Labels: patch-available, patch-with-test
> Attachments: SOLR-11982-preferReplicaTypes.patch, SOLR-11982.patch, 
> SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch
>
>
> It would be nice to have the possibility to easily sort the shards in the 
> preferred order e.g. by replica type. The attached patch adds support for 
> {{shards.sort}} parameter that allows one to sort e.g. PULL and TLOG replicas 
> first with \{{shards.sort=replicaType:PULL|TLOG }}(which would mean that NRT 
> replicas wouldn't be hit with queries unless they're the only ones available) 
> and/or to sort by replica location (like preferLocalShards=true but more 
> versatile).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7887) Upgrade Solr to use log4j2 -- log4j 1 now officially end of life

2018-03-09 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393194#comment-16393194
 ] 

Erick Erickson commented on SOLR-7887:
--

bq: The size of the patches were in the 190kb

Yeah, that puzzled me too. "All tests pass", but I'll take a look to see if I 
can identify why the new one is smaller.

> Upgrade Solr to use log4j2 -- log4j 1 now officially end of life
> 
>
> Key: SOLR-7887
> URL: https://issues.apache.org/jira/browse/SOLR-7887
> Project: Solr
>  Issue Type: Task
>Affects Versions: 5.2.1
>Reporter: Shawn Heisey
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-7887-WIP.patch, SOLR-7887-eoe-review.patch, 
> SOLR-7887-eoe-review.patch, SOLR-7887.patch, SOLR-7887.patch, 
> SOLR-7887.patch, SOLR-7887.patch, SOLR-7887.patch, SOLR-7887.patch, 
> SOLR-7887.patch, SOLR-7887.patch, SOLR-7887.patch, SOLR-7887.patch, 
> SOLR-7887.patch, SOLR-7887.patch
>
>
> The logging services project has officially announced the EOL of log4j 1:
> https://blogs.apache.org/foundation/entry/apache_logging_services_project_announces
> In the official binary jetty deployment, we use log4j 1.2 as our final 
> logging destination, so the admin UI has a log watcher that actually uses 
> log4j and java.util.logging classes.  That will need to be extended to add 
> log4j2.  I think that might be the largest pain point to this upgrade.
> There is some crossover between log4j2 and slf4j.  Figuring out exactly which 
> jars need to be in the lib/ext directory will take some research.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8196) Add IntervalQuery and IntervalsSource to expose minimum interval semantics across term fields

2018-03-09 Thread Jim Ferenczi (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393188#comment-16393188
 ] 

Jim Ferenczi commented on LUCENE-8196:
--

{quote}
I'd rather keep the API as it is, with the field being passed to IntervalQuery 
and then recursing down the IntervalSource tree.  Otherwise you end up having 
to declare the field on all the created sources, which seems redundant.  I've 
removed the cross-field hack entirely for the moment.
{quote}

+1 to remove the cross-field hack, thanks. Regarding the API, it's OK since 
IntervalQuery limits all sources to one field, so I am fine with that (I had 
misunderstood how the IntervalQuery can be used).

> Add IntervalQuery and IntervalsSource to expose minimum interval semantics 
> across term fields
> -
>
> Key: LUCENE-8196
> URL: https://issues.apache.org/jira/browse/LUCENE-8196
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8196.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This ticket proposes an alternative implementation of the SpanQuery family 
> that uses minimum-interval semantics from 
> [http://vigna.di.unimi.it/ftp/papers/EfficientAlgorithmsMinimalIntervalSemantics.pdf]
>  to implement positional queries across term-based fields.  Rather than using 
> TermQueries to construct the interval operators, as in LUCENE-2878 or the 
> current Spans implementation, we instead use a new IntervalsSource object, 
> which will produce IntervalIterators over a particular segment and field.  
> These are constructed using various static helper methods, and can then be 
> passed to a new IntervalQuery which will return documents that contain one or 
> more intervals so defined.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8196) Add IntervalQuery and IntervalsSource to expose minimum interval semantics across term fields

2018-03-09 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393182#comment-16393182
 ] 

Alan Woodward commented on LUCENE-8196:
---

I discussed scoring with [~jim.ferenczi] and [~jpountz] offline, and we decided 
to just use the inverse length of intervals as a sloppy frequency for now, as 
described in the Vigna paper linked above.  This means that we can't compare 
scores directly with existing phrase queries, but the query mechanism is quite 
different (particularly for SloppyPhraseScorer) so it makes sense that scores 
won't be the same either.
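
Roughly, the idea is something like the following (my own sketch of the 
approach, written against the iterator API in the patch as I understand it, not 
the committed code):

    // Each matching interval contributes the inverse of its length, so tight
    // intervals (terms close together) count for more than sprawling ones.
    float sloppyFreq = 0f;
    int start;
    while ((start = intervals.nextInterval()) != IntervalIterator.NO_MORE_INTERVALS) {
      int length = intervals.end() - start + 1;
      sloppyFreq += 1f / length;
    }
    // sloppyFreq is then handed to the similarity in place of a term frequency.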

> Add IntervalQuery and IntervalsSource to expose minimum interval semantics 
> across term fields
> -
>
> Key: LUCENE-8196
> URL: https://issues.apache.org/jira/browse/LUCENE-8196
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8196.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This ticket proposes an alternative implementation of the SpanQuery family 
> that uses minimum-interval semantics from 
> [http://vigna.di.unimi.it/ftp/papers/EfficientAlgorithmsMinimalIntervalSemantics.pdf]
>  to implement positional queries across term-based fields.  Rather than using 
> TermQueries to construct the interval operators, as in LUCENE-2878 or the 
> current Spans implementation, we instead use a new IntervalsSource object, 
> which will produce IntervalIterators over a particular segment and field.  
> These are constructed using various static helper methods, and can then be 
> passed to a new IntervalQuery which will return documents that contain one or 
> more intervals so defined.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_162) - Build # 1497 - Failure!

2018-03-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1497/
Java: 64bit/jdk1.8.0_162 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.basicTest

Error Message:
Collection not found: basicTest

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: basicTest
at 
__randomizedtesting.SeedInfo.seed([43F33248D5AF2533:B107252A910A2800]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:851)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:173)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
at 
org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.basicTest(LeaderVoteWaitTimeoutTest.java:98)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

Re: Lucene/Solr 7.3

2018-03-09 Thread Alan Woodward
FYI I’m still recovering from my travels, so I’m going to create the release 
branch on Monday instead.

> On 27 Feb 2018, at 18:51, Cassandra Targett  > wrote:
> 
> I intend to create the Ref Guide RC as soon as the Lucene/Solr artifacts RC 
> is ready, so this is a great time to remind folks that if you've got Ref 
> Guide changes to be done, you've got a couple weeks. If you're stuck or not 
> sure what to do, let me know & I'm happy to help you out.
> 
> Eventually we'd like to release both the Ref Guide and Lucene/Solr with the 
> same release process, so this will be a big first test to see how ready for 
> that we are.
> 
> On Tue, Feb 27, 2018 at 11:42 AM, Michael McCandless 
> > wrote:
> +1
> 
> Mike McCandless
> 
> http://blog.mikemccandless.com 
> 
> On Fri, Feb 23, 2018 at 4:50 AM, Alan Woodward 
>  > wrote:
> Hi all,
> 
> It’s been a couple of months since the 7.2 release, and we’ve accumulated 
> some nice new features since then.  I’d like to volunteer to be RM for a 7.3 
> release.
> 
> I’m travelling for the next couple of weeks, so I would plan to create the 
> release branch two weeks today, on the 9th March (unless anybody else wants 
> to do it sooner, of course :)
> 
> - Alan
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org 
> 
> For additional commands, e-mail: dev-h...@lucene.apache.org 
> 
> 
> 
> 



[jira] [Updated] (SOLR-11731) LatLonPointSpatialField could be improved to return the lat/lon from docValues

2018-03-09 Thread Karthik Ramachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Ramachandran updated SOLR-11731:

Attachment: SOLR-11731.patch

> LatLonPointSpatialField could be improved to return the lat/lon from docValues
> --
>
> Key: SOLR-11731
> URL: https://issues.apache.org/jira/browse/SOLR-11731
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Reporter: David Smiley
>Priority: Minor
> Attachments: SOLR-11731.patch, SOLR-11731.patch
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> You can only return the lat & lon from a LatLonPointSpatialField if you set 
> stored=true.  But we could allow this (albeit at a small loss in precision) 
> if stored=false and docValues=true.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #323: SOLR-11731: LatLonPointSpatialField could be improve...

2018-03-09 Thread mrkarthik
Github user mrkarthik commented on the issue:

https://github.com/apache/lucene-solr/pull/323
  
@dsmiley Updated the PR based on the comments.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8197) Make top-k queries fast when static scoring signals are incorporated into the score

2018-03-09 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393078#comment-16393078
 ] 

Adrien Grand commented on LUCENE-8197:
--

Thanks Robert and Dawid for having a look; I folded the feedback in:
 - Fixed brackets. I've used ]a,b[ throughout my schooling to mean open 
intervals, but that seems to be only a thing in France and I'm not fully 
converted to parentheses yet. :)
 - I renamed arguments in the javadocs, but not in the formulas, to keep the 
explanations easy to read: a -> scalingFactor for the log function, k -> pivot 
for the satu and sigm functions, and a -> exp for the sigm function. If you can 
think of better names, I'm open to suggestions.
 - Added javadocs for these params.
 - Made the explanation break down.
 - Removed the Class.hashCode() usage.

I've also been exploring the idea of making it easier to use and added two 
utility methods:
 - One takes a cutover document frequency and computes an approximation of the 
IDF for terms that have that frequency in the field that is searched. This 
allows one to do something like: if the query term is very specific (freq << x), 
the query-dependent score should dominate the final score since it matters 
more; on the other hand, if the query term is very general (freq >> x), the 
feature score should dominate.
 - Another one computes the geometric mean (which is the only metric we can 
compute with index stats) of the indexed features, for use as the pivot value 
in the satu function. I expect it to be good enough to get started.

Combined, these two utility methods mean that you can start using the satu 
function in a way that shouldn't be too wrong. I added an example of how to do 
it in the class-level javadocs.
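
To make the intended usage concrete, a hypothetical sketch of the shape 
described above (class and method names are my assumption from this discussion, 
not verified against the attached patch; the weight and pivot values are 
arbitrary):

    // Index time: store the static signal (e.g. pagerank) as a feature value.
    Document doc = new Document();
    doc.add(new TextField("body", "some text", Field.Store.NO));
    doc.add(new FeatureField("features", "pagerank", 42f));

    // Query time: add a weighted satu(pivot) query on the feature as a SHOULD
    // clause next to the regular text query, i.e. bm25 + w * satu(pagerank).
    Query text = new TermQuery(new Term("body", "text"));
    Query feature = FeatureField.newSaturationQuery("features", "pagerank",
        0.5f /* weight w */, 10f /* pivot, e.g. from the geometric-mean utility */);
    Query combined = new BooleanQuery.Builder()
        .add(text, BooleanClause.Occur.MUST)
        .add(feature, BooleanClause.Occur.SHOULD)
        .build();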

> Make top-k queries fast when static scoring signals are incorporated into the 
> score
> ---
>
> Key: LUCENE-8197
> URL: https://issues.apache.org/jira/browse/LUCENE-8197
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (8.0)
>
> Attachments: LUCENE-8197.patch, LUCENE-8197.patch, LUCENE-8197.patch
>
>
> Block-max WAND (LUCENE-8135) and some earlier issues made Lucene faster at 
> computing the top-k matches of boolean queries.
> It is quite frequent that users want to improve ranking and end up scoring 
> with a formula that could look like {{bm25_score + w * log(alpha + 
> pagerank)}} (w and alpha being constants, and pagerank being a per-document 
> field value). You could do this with doc values and {{FunctionScoreQuery}} 
> but unfortunately this will remove the ability to optimize top-k queries 
> since the scoring formula becomes opaque to Lucene.
> I'd like to add a new field that allows to store such scoring signals as term 
> frequencies, and new queries that could produce {{log(alpha + pagerank)}} as 
> a score. Then implementing the above formula can be done by boosting this 
> query with a boost equal to {{w}} and adding this boosted query as a SHOULD 
> clause of a {{BooleanQuery}}. This would give Lucene the ability to compute 
> top-k hits faster, especially but not only if the index is sorted by 
> decreasing pagerank.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8197) Make top-k queries fast when static scoring signals are incorporated into the score

2018-03-09 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-8197:
-
Attachment: LUCENE-8197.patch

> Make top-k queries fast when static scoring signals are incorporated into the 
> score
> ---
>
> Key: LUCENE-8197
> URL: https://issues.apache.org/jira/browse/LUCENE-8197
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (8.0)
>
> Attachments: LUCENE-8197.patch, LUCENE-8197.patch, LUCENE-8197.patch
>
>
> Block-max WAND (LUCENE-8135) and some earlier issues made Lucene faster at 
> computing the top-k matches of boolean queries.
> It is quite frequent that users want to improve ranking and end up scoring 
> with a formula that could look like {{bm25_score + w * log(alpha + 
> pagerank)}} (w and alpha being constants, and pagerank being a per-document 
> field value). You could do this with doc values and {{FunctionScoreQuery}} 
> but unfortunately this will remove the ability to optimize top-k queries 
> since the scoring formula becomes opaque to Lucene.
> I'd like to add a new field that allows to store such scoring signals as term 
> frequencies, and new queries that could produce {{log(alpha + pagerank)}} as 
> a score. Then implementing the above formula can be done by boosting this 
> query with a boost equal to {{w}} and adding this boosted query as a SHOULD 
> clause of a {{BooleanQuery}}. This would give Lucene the ability to compute 
> top-k hits faster, especially but not only if the index is sorted by 
> decreasing pagerank.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8197) Make top-k queries fast when static scoring signals are incorporated into the score

2018-03-09 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393059#comment-16393059
 ] 

Dawid Weiss commented on LUCENE-8197:
-

We could. Sometimes it is useful, but most of the time I think it's an 
accidental mistake (well... semi-mistake).

> Make top-k queries fast when static scoring signals are incorporated into the 
> score
> ---
>
> Key: LUCENE-8197
> URL: https://issues.apache.org/jira/browse/LUCENE-8197
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (8.0)
>
> Attachments: LUCENE-8197.patch, LUCENE-8197.patch
>
>
> Block-max WAND (LUCENE-8135) and some earlier issues made Lucene faster at 
> computing the top-k matches of boolean queries.
> It is quite frequent that users want to improve ranking and end up scoring 
> with a formula that could look like {{bm25_score + w * log(alpha + 
> pagerank)}} (w and alpha being constants, and pagerank being a per-document 
> field value). You could do this with doc values and {{FunctionScoreQuery}} 
> but unfortunately this will remove the ability to optimize top-k queries 
> since the scoring formula becomes opaque to Lucene.
> I'd like to add a new field that allows to store such scoring signals as term 
> frequencies, and new queries that could produce {{log(alpha + pagerank)}} as 
> a score. Then implementing the above formula can be done by boosting this 
> query with a boost equal to {{w}} and adding this boosted query as a SHOULD 
> clause of a {{BooleanQuery}}. This would give Lucene the ability to compute 
> top-k hits faster, especially but not only if the index is sorted by 
> decreasing pagerank.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8197) Make top-k queries fast when static scoring signals are incorporated into the score

2018-03-09 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393050#comment-16393050
 ] 

David Smiley commented on LUCENE-8197:
--

bq.  If there are any associative containers those objects are stored in then 
this will cause non-repeatable ordering in those containers from run to run 
(Class.hashCode just goes up to Object.hashCode).

Wow -- yeah, nice observation.  Maybe we should put Class.hashCode on the 
forbidden APIs list?

> Make top-k queries fast when static scoring signals are incorporated into the 
> score
> ---
>
> Key: LUCENE-8197
> URL: https://issues.apache.org/jira/browse/LUCENE-8197
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (8.0)
>
> Attachments: LUCENE-8197.patch, LUCENE-8197.patch
>
>
> Block-max WAND (LUCENE-8135) and some earlier issues made Lucene faster at 
> computing the top-k matches of boolean queries.
> It is quite frequent that users want to improve ranking and end up scoring 
> with a formula that could look like {{bm25_score + w * log(alpha + 
> pagerank)}} (w and alpha being constants, and pagerank being a per-document 
> field value). You could do this with doc values and {{FunctionScoreQuery}} 
> but unfortunately this will remove the ability to optimize top-k queries 
> since the scoring formula becomes opaque to Lucene.
> I'd like to add a new field that allows to store such scoring signals as term 
> frequencies, and new queries that could produce {{log(alpha + pagerank)}} as 
> a score. Then implementing the above formula can be done by boosting this 
> query with a boost equal to {{w}} and adding this boosted query as a SHOULD 
> clause of a {{BooleanQuery}}. This would give Lucene the ability to compute 
> top-k hits faster, especially but not only if the index is sorted by 
> decreasing pagerank.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-9.0.1) - Build # 493 - Still Unstable!

2018-03-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/493/
Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseSerialGC

7 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.store.TestRAFDirectory

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestRAFDirectory_DEEBF309BEECF80A-001\testThreadSafety-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestRAFDirectory_DEEBF309BEECF80A-001\testThreadSafety-001

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestRAFDirectory_DEEBF309BEECF80A-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestRAFDirectory_DEEBF309BEECF80A-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestRAFDirectory_DEEBF309BEECF80A-001\testThreadSafety-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestRAFDirectory_DEEBF309BEECF80A-001\testThreadSafety-001
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestRAFDirectory_DEEBF309BEECF80A-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestRAFDirectory_DEEBF309BEECF80A-001

at __randomizedtesting.SeedInfo.seed([DEEBF309BEECF80A]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestZkChroot

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.TestZkChroot_26A2CE7EDFB2D03C-001\tempDir-003\collection1:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.TestZkChroot_26A2CE7EDFB2D03C-001\tempDir-003\collection1

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.TestZkChroot_26A2CE7EDFB2D03C-001\tempDir-003:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.TestZkChroot_26A2CE7EDFB2D03C-001\tempDir-003

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.TestZkChroot_26A2CE7EDFB2D03C-001\tempDir-003\collection1\conf:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.TestZkChroot_26A2CE7EDFB2D03C-001\tempDir-003\collection1\conf
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.TestZkChroot_26A2CE7EDFB2D03C-001\tempDir-003\collection1:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.TestZkChroot_26A2CE7EDFB2D03C-001\tempDir-003\collection1
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.TestZkChroot_26A2CE7EDFB2D03C-001\tempDir-003:
 java.nio.file.DirectoryNotEmptyException: 

[jira] [Commented] (SOLR-11982) Add support for indicating preferred replica types for queries

2018-03-09 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16392931#comment-16392931
 ] 

Shawn Heisey commented on SOLR-11982:
-

bq. I think shards.sort is pretty consistent with the rest of the Solr 
parameters. facet.sort sorts facets, group.sort sorts groups

Those sort parameters affect the order of information in search results.  The 
one discussed here is unlikely in most situations to have any effect at all on 
the order of results.

If you're determined to use a sort parameter, I won't stand in your way.  I've 
said why I don't like it, and you're free to say that my worries are unfounded. 
 You might want to consider replica.sort instead of shards.sort, since it's 
actually replicas that are being sorted.

One more thing, I should create a new issue for this: We have a lot of 
inconsistency on multi-word parameter names.  Some of them have the parts 
separated by periods, some of them use camelCase.  We really should standardize 
on one style, and remove old parameters in 8.0.  I'm leaning towards the period 
separator, simply because if we can make parameter names case-insensitive, 
users are less likely to type parameters incorrectly.


> Add support for indicating preferred replica types for queries
> --
>
> Key: SOLR-11982
> URL: https://issues.apache.org/jira/browse/SOLR-11982
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.3, master (8.0)
>Reporter: Ere Maijala
>Priority: Minor
>  Labels: patch-available, patch-with-test
> Attachments: SOLR-11982-preferReplicaTypes.patch, SOLR-11982.patch, 
> SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch
>
>
> It would be nice to have the possibility to easily sort the shards in the 
> preferred order e.g. by replica type. The attached patch adds support for 
> {{shards.sort}} parameter that allows one to sort e.g. PULL and TLOG replicas 
> first with \{{shards.sort=replicaType:PULL|TLOG }}(which would mean that NRT 
> replicas wouldn't be hit with queries unless they're the only ones available) 
> and/or to sort by replica location (like preferLocalShards=true but more 
> versatile).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_144) - Build # 7211 - Failure!

2018-03-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7211/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseConcMarkSweepGC

10 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.search.TestBoolean2

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.search.TestBoolean2_B7B1F66EB9785AE1-001\tempDir-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.search.TestBoolean2_B7B1F66EB9785AE1-001\tempDir-001

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.search.TestBoolean2_B7B1F66EB9785AE1-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.search.TestBoolean2_B7B1F66EB9785AE1-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.search.TestBoolean2_B7B1F66EB9785AE1-001\tempDir-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.search.TestBoolean2_B7B1F66EB9785AE1-001\tempDir-001
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.search.TestBoolean2_B7B1F66EB9785AE1-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.search.TestBoolean2_B7B1F66EB9785AE1-001

at __randomizedtesting.SeedInfo.seed([B7B1F66EB9785AE1]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.store.TestFileSwitchDirectory

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestFileSwitchDirectory_B7B1F66EB9785AE1-001\bar-020:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestFileSwitchDirectory_B7B1F66EB9785AE1-001\bar-020

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestFileSwitchDirectory_B7B1F66EB9785AE1-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestFileSwitchDirectory_B7B1F66EB9785AE1-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestFileSwitchDirectory_B7B1F66EB9785AE1-001\bar-020:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestFileSwitchDirectory_B7B1F66EB9785AE1-001\bar-020
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestFileSwitchDirectory_B7B1F66EB9785AE1-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestFileSwitchDirectory_B7B1F66EB9785AE1-001

at __randomizedtesting.SeedInfo.seed([B7B1F66EB9785AE1]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 

[jira] [Commented] (SOLR-10512) Innerjoin streaming expressions - Invalid JoinStream error

2018-03-09 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16392851#comment-16392851
 ] 

Dennis Gove commented on SOLR-10512:


It was certainly designed such that the left field in the on clause is the 
field from the first incoming stream and the right field in the on clause is 
the field from the second incoming stream. If that is not occurring then this 
is a very clear bug.

> Innerjoin streaming expressions - Invalid JoinStream error
> --
>
> Key: SOLR-10512
> URL: https://issues.apache.org/jira/browse/SOLR-10512
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 6.4.2, 6.5
> Environment: Debian Jessie
>Reporter: Dominique Béjean
>Priority: Major
>
> It looks like the innerJoin streaming expression does not work as explained in 
> the documentation. An invalid JoinStream error occurs.
> {noformat}
> curl --data-urlencode 'expr=innerJoin(
> search(books, 
>q="*:*", 
>fl="id", 
>sort="id asc"),
> search(reviews, 
>q="*:*", 
>fl="id_book_s", 
>sort="id_book_s asc"), 
> on="id=id_books_s"
> )' http://localhost:8983/solr/books/stream
>   
> {"result-set":{"docs":[{"EXCEPTION":"Invalid JoinStream - all incoming stream 
> comparators (sort) must be a superset of this stream's 
> equalitor.","EOF":true}]}}   
> {noformat}
> It is totally similar to the documentation example
> 
> {noformat}
> innerJoin(
>   search(people, q=*:*, fl="personId,name", sort="personId asc"),
>   search(pets, q=type:cat, fl="ownerId,petName", sort="ownerId asc"),
>   on="personId=ownerId"
> )
> {noformat}
> Queries on each collection give :
> {noformat}
> $ curl --data-urlencode 'expr=search(books, 
>q="*:*", 
>fl="id, title_s, pubyear_i", 
>sort="pubyear_i asc", 
>qt="/export")' 
> http://localhost:8983/solr/books/stream
> {
>   "result-set": {
> "docs": [
>   {
> "title_s": "Friends",
> "pubyear_i": 1994,
> "id": "book2"
>   },
>   {
> "title_s": "The Way of Kings",
> "pubyear_i": 2010,
> "id": "book1"
>   },
>   {
> "EOF": true,
> "RESPONSE_TIME": 16
>   }
> ]
>   }
> }
> $ curl --data-urlencode 'expr=search(reviews, 
>q="author_s:d*", 
>fl="id, id_book_s, stars_i, review_dt", 
>sort="id_book_s asc", 
>qt="/export")' 
> http://localhost:8983/solr/reviews/stream
>  
> {
>   "result-set": {
> "docs": [
>   {
> "stars_i": 3,
> "id": "book1_c2",
> "id_book_s": "book1",
> "review_dt": "2014-03-15T12:00:00Z"
>   },
>   {
> "stars_i": 4,
> "id": "book1_c3",
> "id_book_s": "book1",
> "review_dt": "2014-12-15T12:00:00Z"
>   },
>   {
> "stars_i": 3,
> "id": "book2_c2",
> "id_book_s": "book2",
> "review_dt": "1994-03-15T12:00:00Z"
>   },
>   {
> "stars_i": 4,
> "id": "book2_c3",
> "id_book_s": "book2",
> "review_dt": "1994-12-15T12:00:00Z"
>   },
>   {
> "EOF": true,
> "RESPONSE_TIME": 47
>   }
> ]
>   }
> }
> {noformat}
> After more tests, I just had to invert the "on" clause to make it work
> {noformat}
> curl --data-urlencode 'expr=innerJoin(
> search(books, 
>q="*:*", 
>fl="id", 
>sort="id asc"),
> search(reviews, 
>q="*:*", 
>fl="id_book_s", 
>sort="id_book_s asc"), 
> on="id_books_s=id"
> )' http://localhost:8983/solr/books/stream
> 
> {
>   "result-set": {
> "docs": [
>   {
> "title_s": "The Way of Kings",
> "pubyear_i": 2010,
> "stars_i": 5,
> "id": "book1",
> 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 21602 - Still Unstable!

2018-03-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21602/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseSerialGC

4 tests failed.
FAILED:  org.apache.solr.cloud.TestPullReplica.testKillLeader

Error Message:
Replica core_node4 not up to date after 10 seconds expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: Replica core_node4 not up to date after 10 seconds 
expected:<1> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([9DE3A4CED5A98A68:D4F5507AB7121E3E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.TestPullReplica.waitForNumDocsInAllReplicas(TestPullReplica.java:538)
at 
org.apache.solr.cloud.TestPullReplica.waitForNumDocsInAllReplicas(TestPullReplica.java:529)
at 
org.apache.solr.cloud.TestPullReplica.doTestNoLeader(TestPullReplica.java:399)
at 
org.apache.solr.cloud.TestPullReplica.testKillLeader(TestPullReplica.java:305)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   

[jira] [Commented] (LUCENE-8196) Add IntervalQuery and IntervalsSource to expose minimum interval semantics across term fields

2018-03-09 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16392834#comment-16392834
 ] 

Alan Woodward commented on LUCENE-8196:
---

I opened a pull request at [https://github.com/apache/lucene-solr/pull/334] to 
make this easier to review.  [~jpountz] I think I've addressed most of your 
feedback?

[~jim.ferenczi] I'd rather keep the API as it is, with the field being passed 
to IntervalQuery and then recursing down the IntervalsSource tree.  Otherwise 
you end up having to declare the field on all the created sources, which seems 
redundant.  I've removed the cross-field hack entirely for the moment.

I'll see if I can improve the scoring next.
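
A minimal sketch of how the API shape described above might be used (package, 
class, and method names here are assumptions based on this thread and the pull 
request, not the final committed form):

    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.intervals.IntervalQuery;
    import org.apache.lucene.search.intervals.Intervals;
    import org.apache.lucene.search.intervals.IntervalsSource;

    public class IntervalQueryExample {
      public static Query orderedExample() {
        // Sources are built from static helper methods and carry no field of
        // their own; the field name is supplied once, to IntervalQuery, and
        // recurses down the IntervalsSource tree.
        IntervalsSource source = Intervals.ordered(
            Intervals.term("minimum"),
            Intervals.term("interval"),
            Intervals.term("semantics"));
        return new IntervalQuery("body", source);
      }
    }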

> Add IntervalQuery and IntervalsSource to expose minimum interval semantics 
> across term fields
> -
>
> Key: LUCENE-8196
> URL: https://issues.apache.org/jira/browse/LUCENE-8196
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8196.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This ticket proposes an alternative implementation of the SpanQuery family 
> that uses minimum-interval semantics from 
> [http://vigna.di.unimi.it/ftp/papers/EfficientAlgorithmsMinimalIntervalSemantics.pdf]
>  to implement positional queries across term-based fields.  Rather than using 
> TermQueries to construct the interval operators, as in LUCENE-2878 or the 
> current Spans implementation, we instead use a new IntervalsSource object, 
> which will produce IntervalIterators over a particular segment and field.  
> These are constructed using various static helper methods, and can then be 
> passed to a new IntervalQuery which will return documents that contain one or 
> more intervals so defined.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12031) Refactor Policy framework to let state changes to be applied to all nodes

2018-03-09 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-12031.
--
   Resolution: Fixed
Fix Version/s: master (8.0)
   7.3

I believe this one is fixed.

> Refactor Policy framework to let state changes to be applied to all nodes
> -
>
> Key: SOLR-12031
> URL: https://issues.apache.org/jira/browse/SOLR-12031
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
> Fix For: 7.3, master (8.0)
>
> Attachments: SOLR-12031.patch
>
>
> The framework assumes that all variables change their values only on the same 
> node. That doesn't have to be the case.
>  
> For instance, when a replica for a given shard is added to a node, it 
> actually increases the search rate on that node and decreases the search rate 
> on other nodes that host the same shard.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #334: LUCENE-8196

2018-03-09 Thread romseygeek
GitHub user romseygeek opened a pull request:

https://github.com/apache/lucene-solr/pull/334

LUCENE-8196



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/romseygeek/lucene-solr positions

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/334.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #334


commit 3ddd092d63d0b7a6c7ed3be189fa9e8c76fa8196
Author: Alan Woodward 
Date:   2017-12-18T14:42:48Z

WIP: terms and ordered near

commit d73b199deaa1ac279553981ec09d107300ac693a
Author: Alan Woodward 
Date:   2018-02-20T14:25:51Z

WIP

commit 6938bbc7178ef542f443ba52309834cd9814ab14
Author: Alan Woodward 
Date:   2018-02-21T14:09:02Z

Move intervals() back to Scorer

Having it on Weight means duplicating loads of Scorer implementations to
ensure that we always return the correct positions

commit e169ffc64a3d1ca3c506b6a86ec45861cf02609e
Author: Alan Woodward 
Date:   2018-02-21T16:34:41Z

Add unorderedNearQuery

commit fc2d0bb65a51d58334d031ed2858245bc3a609b1
Author: Alan Woodward 
Date:   2018-02-21T18:08:50Z

test for more complex queries

commit adc63477ac535b86069572a7561c97596d084bb6
Author: Alan Woodward 
Date:   2018-02-22T10:13:52Z

Use ScoreMode to pass postings flags, add scoring to IntervalQuery

commit 069eee1e71bfe10f84dbb119cd43da9f12fa8d57
Author: Alan Woodward 
Date:   2018-02-22T10:14:40Z

Merge branch 'master' into positions

commit 4e7d5ba1bd74d54f8b29045f390af8257a84d574
Author: Alan Woodward 
Date:   2018-02-22T10:22:43Z

cleanup

commit 66abdd68dd68b5e71010a12b097a7a63961317f0
Author: Alan Woodward 
Date:   2018-02-22T11:52:56Z

Test scoring + fix compared with phrase query

commit 258e5e524a3b3ab760a3c5b10c1fda17b5b7056b
Author: Alan Woodward 
Date:   2018-02-23T14:06:52Z

Add some difference intervals

commit 07c1f24843718f0b230eb11bb474063815692f83
Author: Alan Woodward 
Date:   2018-02-23T15:45:10Z

difference -> non_overlapping

commit 7038656d123fe1039604cfdd61e6172225ae078f
Author: Alan Woodward 
Date:   2018-02-23T15:58:07Z

Rearrange things a bit

commit 77161885359dc12ed603563c81b803e291fd11c4
Author: Alan Woodward 
Date:   2018-02-25T12:00:12Z

Tests for containing/contained_by queries

commit 855d07ee7cef804a8235ad0594cefbb1ed95b85a
Author: Alan Woodward 
Date:   2018-02-27T16:50:48Z

Add intervals to exact phrase scorer

commit 8224bf9c9cddc720a5fb42177277df8fe1a7c0d6
Author: Alan Woodward 
Date:   2018-02-27T18:39:58Z

Add intervals to sloppy phrase scorer

commit 38e422aa2d8534ff2a11623913dee7920498f3c6
Author: Alan Woodward 
Date:   2018-02-28T19:28:05Z

Add test for boolean exclusion combinations

commit 14832e93b88e75cb077a8f583dd851b4684ebcc6
Author: Alan Woodward 
Date:   2018-02-28T19:30:02Z

ScoreMode.canUseCache() -> ScoreMode.needsPositions()

commit dfd6fd723176c112970e7fc310b6a2383d4f0011
Author: Alan Woodward 
Date:   2018-02-28T20:12:24Z

Add intervals to ConjunctionScorer

commit 87a4f254d7803d6df21db66f253064bddd2a0f30
Author: Alan Woodward 
Date:   2018-02-28T22:17:26Z

Minimum-should-match

commit 53fc6b3b058370c8412850525683ca30391958db
Author: Alan Woodward 
Date:   2018-03-01T16:29:27Z

Javadocs

commit d46307bbdb3dd2b19fb7d552c5cc19bebd5df8bc
Author: Alan Woodward 
Date:   2018-03-01T17:01:13Z

cleanups

commit 96d6ba70b7da91632d2209e0a5f7769bc9f96731
Author: Alan Woodward 
Date:   2018-03-01T17:12:02Z

Expose intervals from SpanScorer

commit c411c670ba43d5dbeccc86f2fb5bcd9a28b5d896
Author: Alan Woodward 
Date:   2018-03-06T01:50:23Z

IntervalsSource

commit 9ec7abad31b81972128c1381ce7a5d1da7634751
Author: Alan Woodward 
Date:   2018-03-06T18:16:43Z

Fix nested disjunctions (LUCENE-7398)

commit a7bf7c3ab1a982f965db7b15a12b1a5883e2967f
Author: Alan Woodward 
Date:   2018-03-07T03:18:32Z

Remove slop/innerwidth, add BLOCK and MAXWIDTH

commit 60601861fafcdbe5624148525e4a8b0e5eba0c99
Author: Alan Woodward 
Date:   2018-03-07T16:04:53Z

javadocs

commit 4990685f4898752b885794c6b17abd4796a56201
Author: Alan Woodward 
Date:   2018-03-07T16:12:24Z

Field masking IntervalsSource

commit f5f60b45fee50add65e7260654ce3edf0170
Author: Alan Woodward 
Date:   2018-03-08T10:35:29Z

javadocs, fix ORDERED contract

commit 457319a20b4a85c83c06dd9b1c954584b518311e

[jira] [Commented] (SOLR-12051) Election timeout when no replicas are qualified to become leader

2018-03-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16392818#comment-16392818
 ] 

ASF subversion and git services commented on SOLR-12051:


Commit 3b6649faab1ed45cfd2a5507e042262691f6ea25 in lucene-solr's branch 
refs/heads/branch_7x from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3b6649f ]

SOLR-12051: Update upgrade notes


> Election timeout when no replicas are qualified to become leader
> 
>
> Key: SOLR-12051
> URL: https://issues.apache.org/jira/browse/SOLR-12051
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: 7.3, master (8.0)
>
> Attachments: SOLR-12051.patch
>
>
> After SOLR-12011 was committed, we can end up in a situation where no active 
> replicas are qualified to become the leader. The only two solutions for users 
> in this case are
>  * Using the FORCE_LEADER API
>  * Bringing back the old leader
> This ticket will introduce a leader election timeout so that the current 
> active replicas can ignore the lost updates and go ahead and become the 
> leader. I think this will be better and less confusing for users than reusing 
> {{leaderVoteWait}} (the current use of {{leaderVoteWait}} to wait for replicas 
> to come up before leader election is no longer needed).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12051) Election timeout when no replicas are qualified to become leader

2018-03-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16392817#comment-16392817
 ] 

ASF subversion and git services commented on SOLR-12051:


Commit 4c2703e8be7deb25702f83d6371907e954f11ec1 in lucene-solr's branch 
refs/heads/master from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4c2703e ]

SOLR-12051: Update upgrade notes


> Election timeout when no replicas are qualified to become leader
> 
>
> Key: SOLR-12051
> URL: https://issues.apache.org/jira/browse/SOLR-12051
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: 7.3, master (8.0)
>
> Attachments: SOLR-12051.patch
>
>
> After SOLR-12011 was committed, we can end up in a situation where no active 
> replicas are qualified to become the leader. The only two solutions for users 
> in this case are
>  * Using the FORCE_LEADER API
>  * Bringing back the old leader
> This ticket will introduce a leader election timeout so that the current 
> active replicas can ignore the lost updates and go ahead and become the 
> leader. I think this will be better and less confusing for users than reusing 
> {{leaderVoteWait}} (the current use of {{leaderVoteWait}} to wait for replicas 
> to come up before leader election is no longer needed).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10512) Innerjoin streaming expressions - Invalid JoinStream error

2018-03-09 Thread Markus Kalkbrenner (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16392772#comment-16392772
 ] 

Markus Kalkbrenner commented on SOLR-10512:
---

I now wrapped the search() expressions in select() expressions and noticed that 
I got wrong results!

But if I remove the inversion of fields I get the right results. Maybe someone 
else can have a look at this, but at the moment it looks like this for me (a 
sketch of a working form follows the table):
||expression||result||
|{{innerJoin(}}
 {{  search(A),}}
 {{  search(B),}}
 {{  on="fieldA=fieldB"}}
 {{)}}| empty result => wrong result|
|{{innerJoin(}}
 {{  search(A),}}
 {{  search(B),}}
 {{  on="fieldB=fieldA"}}
 {{)}}| correct result|
|{{innerJoin(}}
 {{  select(search(A)),}}
 {{  select(search(B)),}}
 {{  on="fieldB=fieldA"}}
 {{)}}|wrong result (fieldA and fieldB have different values in the result 
tuples) |
|{{innerJoin(}}
 {{  select(search(A)),}}
 {{  select(search(B)),}}
 {{  on="fieldA=fieldB"}}
 {{)}}|correct result|
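
Based on the table above, a working combination would look something like the 
following sketch (collections and fields are taken from the books/reviews 
example quoted below; the exact select() field lists and qt="/export" are 
assumptions, not a confirmed fix):
{noformat}
curl --data-urlencode 'expr=innerJoin(
  select(
    search(books, q="*:*", fl="id, title_s, pubyear_i", sort="id asc", qt="/export"),
    id, title_s, pubyear_i),
  select(
    search(reviews, q="*:*", fl="id_book_s, stars_i", sort="id_book_s asc", qt="/export"),
    id_book_s, stars_i),
  on="id=id_book_s"
)' http://localhost:8983/solr/books/stream
{noformat}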

> Innerjoin streaming expressions - Invalid JoinStream error
> --
>
> Key: SOLR-10512
> URL: https://issues.apache.org/jira/browse/SOLR-10512
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 6.4.2, 6.5
> Environment: Debian Jessie
>Reporter: Dominique Béjean
>Priority: Major
>
> It looks like the innerJoin streaming expression does not work as explained in 
> the documentation. An "Invalid JoinStream" error occurs.
> {noformat}
> curl --data-urlencode 'expr=innerJoin(
> search(books, 
>q="*:*", 
>fl="id", 
>sort="id asc"),
> search(reviews, 
>q="*:*", 
>fl="id_book_s", 
>sort="id_book_s asc"), 
> on="id=id_books_s"
> )' http://localhost:8983/solr/books/stream
>   
> {"result-set":{"docs":[{"EXCEPTION":"Invalid JoinStream - all incoming stream 
> comparators (sort) must be a superset of this stream's 
> equalitor.","EOF":true}]}}   
> {noformat}
> It is totally similar to the documentation example
> 
> {noformat}
> innerJoin(
>   search(people, q=*:*, fl="personId,name", sort="personId asc"),
>   search(pets, q=type:cat, fl="ownerId,petName", sort="ownerId asc"),
>   on="personId=ownerId"
> )
> {noformat}
> Queries on each collection give:
> {noformat}
> $ curl --data-urlencode 'expr=search(books, 
>q="*:*", 
>fl="id, title_s, pubyear_i", 
>sort="pubyear_i asc", 
>qt="/export")' 
> http://localhost:8983/solr/books/stream
> {
>   "result-set": {
> "docs": [
>   {
> "title_s": "Friends",
> "pubyear_i": 1994,
> "id": "book2"
>   },
>   {
> "title_s": "The Way of Kings",
> "pubyear_i": 2010,
> "id": "book1"
>   },
>   {
> "EOF": true,
> "RESPONSE_TIME": 16
>   }
> ]
>   }
> }
> $ curl --data-urlencode 'expr=search(reviews, 
>q="author_s:d*", 
>fl="id, id_book_s, stars_i, review_dt", 
>sort="id_book_s asc", 
>qt="/export")' 
> http://localhost:8983/solr/reviews/stream
>  
> {
>   "result-set": {
> "docs": [
>   {
> "stars_i": 3,
> "id": "book1_c2",
> "id_book_s": "book1",
> "review_dt": "2014-03-15T12:00:00Z"
>   },
>   {
> "stars_i": 4,
> "id": "book1_c3",
> "id_book_s": "book1",
> "review_dt": "2014-12-15T12:00:00Z"
>   },
>   {
> "stars_i": 3,
> "id": "book2_c2",
> "id_book_s": "book2",
> "review_dt": "1994-03-15T12:00:00Z"
>   },
>   {
> "stars_i": 4,
> "id": "book2_c3",
> "id_book_s": "book2",
> "review_dt": "1994-12-15T12:00:00Z"
>   },
>   {
> "EOF": true,
> "RESPONSE_TIME": 47
>   }
> ]
>   }
> }
> {noformat}
> After more tests, I just had to invert the "on" clause to make it work
> {noformat}
> curl --data-urlencode 'expr=innerJoin(
> search(books, 
>q="*:*", 
>fl="id", 
>   

[JENKINS] Lucene-Solr-repro - Build # 224 - Still Unstable

2018-03-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/224/

[...truncated 32 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-master/2408/consoleText

[repro] Revision: ab4cd42903925f3edc3d06c41a4726e78a6b08ca

[repro] Repro line:  ant test  -Dtestcase=TestCloudConsistency 
-Dtests.method=testOutOfSyncReplicasCannotBecomeLeader 
-Dtests.seed=AB6A05D9948AA887 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=de -Dtests.timezone=Etc/GMT-10 -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestLargeCluster 
-Dtests.method=testNodeLost -Dtests.seed=AB6A05D9948AA887 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=da -Dtests.timezone=Europe/Simferopol 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=MoveReplicaHDFSTest 
-Dtests.method=testFailedMove -Dtests.seed=AB6A05D9948AA887 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ja -Dtests.timezone=CST 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=MoveReplicaHDFSTest 
-Dtests.method=test -Dtests.seed=AB6A05D9948AA887 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=ja -Dtests.timezone=CST -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=MoveReplicaHDFSTest 
-Dtests.method=testNormalMove -Dtests.seed=AB6A05D9948AA887 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ja -Dtests.timezone=CST 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
423a8cf69cf1bf53845d82bebaa2d957464c1299
[repro] git fetch
[repro] git checkout ab4cd42903925f3edc3d06c41a4726e78a6b08ca

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestCloudConsistency
[repro]   MoveReplicaHDFSTest
[repro]   TestLargeCluster
[repro] ant compile-test

[...truncated 3292 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=15 
-Dtests.class="*.TestCloudConsistency|*.MoveReplicaHDFSTest|*.TestLargeCluster" 
-Dtests.showOutput=onerror  -Dtests.seed=AB6A05D9948AA887 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=de -Dtests.timezone=Etc/GMT-10 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[...truncated 23211 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.TestCloudConsistency
[repro]   1/5 failed: org.apache.solr.cloud.MoveReplicaHDFSTest
[repro]   2/5 failed: org.apache.solr.cloud.autoscaling.sim.TestLargeCluster
[repro] git checkout 423a8cf69cf1bf53845d82bebaa2d957464c1299

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-Tests-7.x - Build # 488 - Still Unstable

2018-03-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/488/

2 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.sim.TestLargeCluster.testAddNode

Error Message:
no MOVEREPLICA ops?

Stack Trace:
java.lang.AssertionError: no MOVEREPLICA ops?
at 
__randomizedtesting.SeedInfo.seed([1B6932A5197F72D1:BC862F06D632FDC9]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.sim.TestLargeCluster.testAddNode(TestLargeCluster.java:262)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.autoscaling.sim.TestLargeCluster.testBasic

Error Message:
there should be new MOVERPLICA ops

Stack Trace:
java.lang.AssertionError: there should be new MOVERPLICA ops
at 
__randomizedtesting.SeedInfo.seed([1B6932A5197F72D1:B0932FB0C6A3F4FF]:0)
at org.junit.Assert.fail(Assert.java:93)
at 

[JENKINS] Lucene-Solr-Tests-master - Build # 2409 - Still Unstable

2018-03-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2409/

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.autoscaling.sim.TestLargeCluster

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.autoscaling.sim.TestLargeCluster: 1) Thread[id=452, 
name=AutoscalingActionExecutor-35-thread-1, state=RUNNABLE, 
group=TGRP-TestLargeCluster] at 
org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:131) at 
org.apache.solr.common.util.Utils.makeDeepCopy(Utils.java:110) at 
org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:92) at 
org.apache.solr.common.util.Utils.makeDeepCopy(Utils.java:108) at 
org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:92) at 
org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:74) at 
org.apache.solr.client.solrj.cloud.autoscaling.Row.copy(Row.java:91) at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.lambda$getMatrixCopy$1(Policy.java:297)
 at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session$$Lambda$332/181639187.apply(Unknown
 Source) at 
java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)   
  at 
java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)  
   at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) 
at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
 at 
java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)  
   at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) 
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)   
  at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.getMatrixCopy(Policy.java:298)
 at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.copy(Policy.java:287)
 at 
org.apache.solr.client.solrj.cloud.autoscaling.Row.removeReplica(Row.java:156)  
   at 
org.apache.solr.client.solrj.cloud.autoscaling.MoveReplicaSuggester.tryEachNode(MoveReplicaSuggester.java:60)
 at 
org.apache.solr.client.solrj.cloud.autoscaling.MoveReplicaSuggester.init(MoveReplicaSuggester.java:34)
 at 
org.apache.solr.client.solrj.cloud.autoscaling.Suggester.getSuggestion(Suggester.java:129)
 at 
org.apache.solr.cloud.autoscaling.ComputePlanAction.process(ComputePlanAction.java:98)
 at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers.lambda$null$3(ScheduledTriggers.java:307)
 at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers$$Lambda$314/1334371849.run(Unknown
 Source) at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
 at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$118/314480306.run(Unknown
 Source) at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.autoscaling.sim.TestLargeCluster: 
   1) Thread[id=452, name=AutoscalingActionExecutor-35-thread-1, 
state=RUNNABLE, group=TGRP-TestLargeCluster]
at org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:131)
at org.apache.solr.common.util.Utils.makeDeepCopy(Utils.java:110)
at org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:92)
at org.apache.solr.common.util.Utils.makeDeepCopy(Utils.java:108)
at org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:92)
at org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:74)
at org.apache.solr.client.solrj.cloud.autoscaling.Row.copy(Row.java:91)
at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.lambda$getMatrixCopy$1(Policy.java:297)
at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session$$Lambda$332/181639187.apply(Unknown
 Source)
at 
java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at 
java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at 
java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at 
java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at 

[jira] [Commented] (SOLR-12051) Election timeout when no replicas are qualified to become leader

2018-03-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16392731#comment-16392731
 ] 

ASF subversion and git services commented on SOLR-12051:


Commit 4d64e7bcb14d326f8971ddec4a36624aa618aab1 in lucene-solr's branch 
refs/heads/branch_7x from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4d64e7b ]

SOLR-12051: Election timeout when no replicas are qualified to become leader


> Election timeout when no replicas are qualified to become leader
> 
>
> Key: SOLR-12051
> URL: https://issues.apache.org/jira/browse/SOLR-12051
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: 7.3, master (8.0)
>
> Attachments: SOLR-12051.patch
>
>
> After SOLR-12011 was committed, we can end up in a situation where no active 
> replicas are qualified to become the leader. The only two solutions for users 
> in this case are
>  * Using the FORCE_LEADER API
>  * Bringing back the old leader
> This ticket will introduce a leader election timeout so that the current 
> active replicas can ignore the lost updates and go ahead and become the 
> leader. I think this will be better and less confusing for users than reusing 
> {{leaderVoteWait}} (the current use of {{leaderVoteWait}} to wait for replicas 
> to come up before leader election is no longer needed).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12051) Election timeout when no replicas are qualified to become leader

2018-03-09 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat resolved SOLR-12051.
-
   Resolution: Fixed
Fix Version/s: master (8.0)
   7.3

> Election timeout when no replicas are qualified to become leader
> 
>
> Key: SOLR-12051
> URL: https://issues.apache.org/jira/browse/SOLR-12051
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: 7.3, master (8.0)
>
> Attachments: SOLR-12051.patch
>
>
> After SOLR-12011 was committed, we can end up in a situation where no active 
> replicas are qualified to become the leader. The only two solutions for users 
> in this case are
>  * Using the FORCE_LEADER API
>  * Bringing back the old leader
> This ticket will introduce a leader election timeout so that the current 
> active replicas can ignore the lost updates and go ahead and become the 
> leader. I think this will be better and less confusing for users than reusing 
> {{leaderVoteWait}} (the current use of {{leaderVoteWait}} to wait for replicas 
> to come up before leader election is no longer needed).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12051) Election timeout when no replicas are qualified to become leader

2018-03-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16392728#comment-16392728
 ] 

ASF subversion and git services commented on SOLR-12051:


Commit 423a8cf69cf1bf53845d82bebaa2d957464c1299 in lucene-solr's branch 
refs/heads/master from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=423a8cf ]

SOLR-12051: Election timeout when no replicas are qualified to become leader


> Election timeout when no replicas are qualified to become leader
> 
>
> Key: SOLR-12051
> URL: https://issues.apache.org/jira/browse/SOLR-12051
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-12051.patch
>
>
> After SOLR-12011 was committed, we can end up in a situation where no active 
> replicas are qualified to become the leader. The only two solutions for users 
> in this case are
>  * Using the FORCE_LEADER API
>  * Bringing back the old leader
> This ticket will introduce a leader election timeout so that the current 
> active replicas can ignore the lost updates and go ahead and become the 
> leader. I think this will be better and less confusing for users than reusing 
> {{leaderVoteWait}} (the current use of {{leaderVoteWait}} to wait for replicas 
> to come up before leader election is no longer needed).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11982) Add support for indicating preferred replica types for queries

2018-03-09 Thread Ere Maijala (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ere Maijala updated SOLR-11982:
---
Summary: Add support for indicating preferred replica types for queries  
(was: Add support for shards.sort parameter)

> Add support for indicating preferred replica types for queries
> --
>
> Key: SOLR-11982
> URL: https://issues.apache.org/jira/browse/SOLR-11982
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.3, master (8.0)
>Reporter: Ere Maijala
>Priority: Minor
>  Labels: patch-available, patch-with-test
> Attachments: SOLR-11982-preferReplicaTypes.patch, SOLR-11982.patch, 
> SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch
>
>
> It would be nice to have the possibility to easily sort the shards into a 
> preferred order, e.g. by replica type. The attached patch adds support for a 
> {{shards.sort}} parameter that allows one to sort e.g. PULL and TLOG replicas 
> first with {{shards.sort=replicaType:PULL|TLOG}} (which would mean that NRT 
> replicas wouldn't be hit with queries unless they're the only ones available) 
> and/or to sort by replica location (like preferLocalShards=true, but more 
> versatile).
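
A hedged illustration of the proposed parameter in a request URL (syntax taken 
from the description above; the collection name and host are placeholders, and 
the "|" may need to be URL-encoded as %7C):

    http://localhost:8983/solr/mycollection/select?q=*:*&shards.sort=replicaType:PULL|TLOG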



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12051) Election timeout when no replicas are qualified to become leader

2018-03-09 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-12051:

Issue Type: Sub-task  (was: Improvement)
Parent: SOLR-12011

> Election timeout when no replicas are qualified to become leader
> 
>
> Key: SOLR-12051
> URL: https://issues.apache.org/jira/browse/SOLR-12051
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-12051.patch
>
>
> After SOLR-12011 was committed, we can end up in a situation where no active 
> replicas are qualified to become the leader. The only two solutions for users 
> in this case are
>  * Using the FORCE_LEADER API
>  * Bringing back the old leader
> This ticket will introduce a leader election timeout so that the current 
> active replicas can ignore the lost updates and go ahead and become the 
> leader. I think this will be better and less confusing for users than reusing 
> {{leaderVoteWait}} (the current use of {{leaderVoteWait}} to wait for replicas 
> to come up before leader election is no longer needed).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1720 - Still Unstable!

2018-03-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1720/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

4 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest.testInactiveShardCleanup

Error Message:
should be at least one inactive event

Stack Trace:
java.lang.AssertionError: should be at least one inactive event
at 
__randomizedtesting.SeedInfo.seed([88F6FA6202F92EB:15A3AFD4416CB5E0]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest.testInactiveShardCleanup(ScheduledMaintenanceTriggerTest.java:218)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest.testInactiveShardCleanup

Error Message:
should be at least one inactive event

Stack Trace:
java.lang.AssertionError: should be at 

[JENKINS] Lucene-Solr-repro - Build # 223 - Unstable

2018-03-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/223/

[...truncated 32 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-7.x/487/consoleText

[repro] Revision: 1c504c974e2fe30809e5b68762caa3b31ad01072

[repro] Repro line:  ant test  -Dtestcase=TestCloudConsistency 
-Dtests.method=testOutOfSyncReplicasCannotBecomeLeader 
-Dtests.seed=DC0CCBAC67BA0527 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=fr-BE -Dtests.timezone=Pacific/Marquesas -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=MoveReplicaHDFSTest 
-Dtests.method=testFailedMove -Dtests.seed=DC0CCBAC67BA0527 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=el-CY 
-Dtests.timezone=Asia/Kashgar -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
7dfb04ee5e9f973fbad20c529ec091c201743398
[repro] git fetch
[repro] git checkout 1c504c974e2fe30809e5b68762caa3b31ad01072

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   MoveReplicaHDFSTest
[repro]   TestCloudConsistency
[repro] ant compile-test

[...truncated 3310 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.MoveReplicaHDFSTest|*.TestCloudConsistency" 
-Dtests.showOutput=onerror  -Dtests.seed=DC0CCBAC67BA0527 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=el-CY -Dtests.timezone=Asia/Kashgar 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[...truncated 25835 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   1/5 failed: org.apache.solr.cloud.TestCloudConsistency
[repro]   3/5 failed: org.apache.solr.cloud.MoveReplicaHDFSTest
[repro] git checkout 7dfb04ee5e9f973fbad20c529ec091c201743398

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 21601 - Still Unstable!

2018-03-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21601/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest.testInactiveShardCleanup

Error Message:
should be at least one inactive event

Stack Trace:
java.lang.AssertionError: should be at least one inactive event
at 
__randomizedtesting.SeedInfo.seed([123836403BDEB6B7:F14F6325A9D91BC]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest.testInactiveShardCleanup(ScheduledMaintenanceTriggerTest.java:218)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest.testInactiveShardCleanup

Error Message:
should be at least one inactive event

Stack Trace:
java.lang.AssertionError: should be at least one inactive event

[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1498 - Still unstable

2018-03-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1498/

2 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([3A2FC374AAA74EC3:90E210861D749B13]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at org.junit.Assert.assertFalse(Assert.java:79)
at 
org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:303)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testNormalMove

Error Message:
Error from server at https://127.0.0.1:33155/solr: delete the collection time 
out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:33155/solr: delete 
