Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12271/
Java: 32bit/jdk1.9.0-ea-b54 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test

Error Message:
There were too many update fails (31 > 20) - we expect it can happen, but shouldn't easily

Stack Trace:
java.lang.AssertionError: There were too many update fails (31 > 20) - we expect it can happen, but shouldn't easily
        at __randomizedtesting.SeedInfo.seed([525F488434F9F656:DA0B775E9A059BAE]:0)
        at org.junit.Assert.fail(Assert.java:93)
        at org.junit.Assert.assertTrue(Assert.java:43)
        at org.junit.Assert.assertFalse(Assert.java:68)
        at org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:230)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:502)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
        at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
        at java.lang.Thread.run(Thread.java:745)

Build Log:
[...truncated 10852 lines...]
   [junit4] Suite: org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest
   [junit4]   2> Creating dataDir: /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest 525F488434F9F656-001/init-core-data-001
   [junit4]   2> 999777 T7180 oas.BaseDistributedSearchTestCase.initHostContext Setting hostContext system property: /k_dv/
   [junit4]   2> 999781 T7180 oasc.ZkTestServer.run STARTING ZK TEST SERVER
   [junit4]   2> 999781 T7181 oasc.ZkTestServer$2$1.setClientPort client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 999782 T7181 oasc.ZkTestServer$ZKServerMain.runFromConfig Starting server
   [junit4]   2> 999881 T7180 oasc.ZkTestServer.run start zk server on port:41771
   [junit4]   2> 999894 T7180 oasc.AbstractZkTestCase.putConfig put /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml to /configs/conf1/solrconfig.xml
   [junit4]   2> 999896 T7180 oasc.AbstractZkTestCase.putConfig put /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/schema15.xml to /configs/conf1/schema.xml
   [junit4]   2> 999898 T7180 oasc.AbstractZkTestCase.putConfig put /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 999899 T7180 oasc.AbstractZkTestCase.putConfig put /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/stopwords.txt to /configs/conf1/stopwords.txt
   [junit4]   2> 999901 T7180 oasc.AbstractZkTestCase.putConfig put /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/protwords.txt to /configs/conf1/protwords.txt
   [junit4]   2> 999902 T7180 oasc.AbstractZkTestCase.putConfig put /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/currency.xml to /configs/conf1/currency.xml
   [junit4]   2> 999904 T7180 oasc.AbstractZkTestCase.putConfig put /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/enumsConfig.xml to /configs/conf1/enumsConfig.xml
   [junit4]   2> 999906 T7180 oasc.AbstractZkTestCase.putConfig put /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/open-exchange-rates.json to /configs/conf1/open-exchange-rates.json
   [junit4]   2> 999907 T7180 oasc.AbstractZkTestCase.putConfig put /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/mapping-ISOLatin1Accent.txt to /configs/conf1/mapping-ISOLatin1Accent.txt
   [junit4]   2> 999909 T7180 oasc.AbstractZkTestCase.putConfig put /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/old_synonyms.txt to /configs/conf1/old_synonyms.txt
   [junit4]   2> 999911 T7180 oasc.AbstractZkTestCase.putConfig put /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/synonyms.txt to /configs/conf1/synonyms.txt
   [junit4]   2> 999970 T7180 oas.SolrTestCaseJ4.writeCoreProperties Writing core.properties file to /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest 525F488434F9F656-001/control-001/cores/collection1
   [junit4]   2> 999972 T7180 oejs.Server.doStart jetty-8.1.10.v20130312
   [junit4]   2> 999973 T7180 oejs.AbstractConnector.doStart Started [email protected]:55770
   [junit4]   2> 999973 T7180 oascse.JettySolrRunner$1.lifeCycleStarted Jetty properties: {solr.data.dir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest 525F488434F9F656-001/tempDir-001/control/data, hostContext=/k_dv, hostPort=55770, coreRootDirectory=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest 525F488434F9F656-001/control-001/cores}
   [junit4]   2> 999974 T7180 oass.SolrDispatchFilter.init SolrDispatchFilter.init()sun.misc.Launcher$AppClassLoader@1764bce
   [junit4]   2> 999974 T7180 oasc.SolrResourceLoader.<init> new SolrResourceLoader for directory: '/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest 525F488434F9F656-001/control-001/'
   [junit4]   2> 999987 T7180 oasc.SolrXmlConfig.fromFile Loading container configuration from /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest 525F488434F9F656-001/control-001/solr.xml
   [junit4]   2> 999991 T7180 oasc.CorePropertiesLocator.<init> Config-defined core root directory: /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest 525F488434F9F656-001/control-001/cores
   [junit4]   2> 999992 T7180 oasc.CoreContainer.<init> New CoreContainer 13583884
   [junit4]   2> 999992 T7180 oasc.CoreContainer.load Loading cores into CoreContainer [instanceDir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest 525F488434F9F656-001/control-001/]
   [junit4]   2> 999992 T7180 oasc.CoreContainer.load loading shared library: /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest 525F488434F9F656-001/control-001/lib
   [junit4]   2> 999992 T7180 oasc.SolrResourceLoader.addToClassLoader WARN Can't find (or read) directory to add to classloader: lib (resolved as: /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest 525F488434F9F656-001/control-001/lib).
   [junit4]   2> 999998 T7180 oashc.HttpShardHandlerFactory.init created with socketTimeout : 90000,urlScheme : ,connTimeout : 15000,maxConnectionsPerHost : 20,maxConnections : 10000,corePoolSize : 0,maximumPoolSize : 2147483647,maxThreadIdleTime : 5,sizeOfQueue : -1,fairnessPolicy : false,useRetries : false,
   [junit4]   2> 999999 T7180 oasu.UpdateShardHandler.<init> Creating UpdateShardHandler HTTP client with params: socketTimeout=340000&connTimeout=45000&retry=true
   [junit4]   2> 1000000 T7180 oasl.LogWatcher.createWatcher SLF4J impl is org.slf4j.impl.Log4jLoggerFactory
   [junit4]   2> 1000000 T7180 oasl.LogWatcher.newRegisteredLogWatcher Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
   [junit4]   2> 1000000 T7180 oasc.CoreContainer.load Node Name: 127.0.0.1
   [junit4]   2> 1000000 T7180 oasc.ZkContainer.initZooKeeper Zookeeper client=127.0.0.1:41771/solr
   [junit4]   2> 1000001 T7180 oasc.ZkController.checkChrootPath zkHost includes chroot
   [junit4]   2> 1000017 T7180 N:127.0.0.1:55770_k_dv oasc.ZkController.createEphemeralLiveNode Register node as live in ZooKeeper:/live_nodes/127.0.0.1:55770_k_dv
   [junit4]   2> 1000020 T7180 N:127.0.0.1:55770_k_dv oasc.Overseer.close Overseer (id=null) closing
   [junit4]   2> 1000022 T7180 N:127.0.0.1:55770_k_dv oasc.OverseerElectionContext.runLeaderProcess I am going to be the leader 127.0.0.1:55770_k_dv
   [junit4]   2> 1000023 T7180 N:127.0.0.1:55770_k_dv oasc.Overseer.start Overseer (id=93713610006396931-127.0.0.1:55770_k_dv-n_0000000000) starting
   [junit4]   2> 1000029 T7180 N:127.0.0.1:55770_k_dv oasc.OverseerAutoReplicaFailoverThread.<init> Starting OverseerAutoReplicaFailoverThread autoReplicaFailoverWorkLoopDelay=10000 autoReplicaFailoverWaitAfterExpiration=30000 autoReplicaFailoverBadNodeExpiration=60000
   [junit4]   2> 1000030 T7208 N:127.0.0.1:55770_k_dv oasc.OverseerCollectionProcessor.run Process current queue of collection creations
   [junit4]   2> 1000031 T7207 N:127.0.0.1:55770_k_dv oasc.Overseer$ClusterStateUpdater.run Starting to work on the main queue
   [junit4]   2> 1000041 T7180 N:127.0.0.1:55770_k_dv oasc.CorePropertiesLocator.discover Looking for core definitions underneath /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest 525F488434F9F656-001/control-001/cores
   [junit4]   2> 1000042 T7180 N:127.0.0.1:55770_k_dv oasc.CoreDescriptor.<init> CORE DESCRIPTOR: {name=collection1, config=solrconfig.xml, transient=false, schema=schema.xml, loadOnStartup=true, instanceDir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest 525F488434F9F656-001/control-001/cores/collection1, collection=control_collection, absoluteInstDir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest 525F488434F9F656-001/control-001/cores/collection1/, coreNodeName=, dataDir=data/, shard=}
   [junit4]   2> 1000043 T7180 N:127.0.0.1:55770_k_dv oasc.CorePropertiesLocator.discoverUnder Found core collection1 in /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest 525F488434F9F656-001/control-001/cores/collection1/
   [junit4]   2> 1000043 T7180 N:127.0.0.1:55770_k_dv oasc.CorePropertiesLocator.discover Found 1 core definitions
   [junit4]   2> 1000044 T7210 N:127.0.0.1:55770_k_dv C:control_collection c:collection1 oasc.ZkController.publish publishing core=collection1 state=down collection=control_collection
   [junit4]   2> 1000044 T7210 N:127.0.0.1:55770_k_dv C:control_collection c:collection1 oasc.ZkController.publish numShards not found on descriptor - reading it from system property
   [junit4]   2> 1000044 T7206 N:127.0.0.1:55770_k_dv oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path /overseer/queue state SyncConnected
   [junit4]   2> 1000044 T7210 N:127.0.0.1:55770_k_dv oasc.ZkController.waitForCoreNodeName look for our core node name
   [junit4]   2> 1000045 T7207 N:127.0.0.1:55770_k_dv oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
   [junit4]   2>          "core":"collection1",
   [junit4]   2>          "roles":null,
   [junit4]   2>          "base_url":"http://127.0.0.1:55770/k_dv",
   [junit4]   2>          "node_name":"127.0.0.1:55770_k_dv",
   [junit4]   2>          "numShards":"1",
   [junit4]   2>          "state":"down",
   [junit4]   2>          "shard":null,
   [junit4]   2>          "collection":"control_collection",
   [junit4]   2>          "operation":"state"} current state version: 0
   [junit4]   2> 1000046 T7207 N:127.0.0.1:55770_k_dv oasco.ReplicaMutator.updateState Update state numShards=1 message={
   [junit4]   2>          "core":"collection1",
   [junit4]   2>          "roles":null,
   [junit4]   2>          "base_url":"http://127.0.0.1:55770/k_dv",
   [junit4]   2>          "node_name":"127.0.0.1:55770_k_dv",
   [junit4]   2>          "numShards":"1",
   [junit4]   2>          "state":"down",
   [junit4]   2>          "shard":null,
   [junit4]   2>          "collection":"control_collection",
   [junit4]   2>          "operation":"state"}
   [junit4]   2> 1000046 T7207 N:127.0.0.1:55770_k_dv oasco.ClusterStateMutator.createCollection building a new cName: control_collection
   [junit4]   2> 1000046 T7207 N:127.0.0.1:55770_k_dv oasco.ReplicaMutator.updateState Assigning new node to shard shard=shard1
   [junit4]   2> 1001045 T7210 N:127.0.0.1:55770_k_dv oasc.ZkController.waitForShardId waiting to find shard id in clusterstate for collection1
   [junit4]   2> 1001045 T7210 N:127.0.0.1:55770_k_dv oasc.ZkController.createCollectionZkNode Check for collection zkNode:control_collection
   [junit4]   2> 1001046 T7210 N:127.0.0.1:55770_k_dv oasc.ZkController.createCollectionZkNode Collection zkNode exists
   [junit4]   2> 1001047 T7210 N:127.0.0.1:55770_k_dv oasc.SolrResourceLoader.<init> new SolrResourceLoader for directory: '/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest 525F488434F9F656-001/control-001/cores/collection1/'
   [junit4]   2> 1001067 T7210 N:127.0.0.1:55770_k_dv oasc.Config.<init> loaded config solrconfig.xml with version 0
   [junit4]   2> 1001071 T7210 N:127.0.0.1:55770_k_dv oasc.SolrConfig.refreshRequestParams current version of requestparams : -1
   [junit4]   2> 1001080 T7210 N:127.0.0.1:55770_k_dv oasc.SolrConfig.<init> Using Lucene MatchVersion: 5.2.0
   [junit4]   2> 1001088 T7210 N:127.0.0.1:55770_k_dv oasc.SolrConfig.<init> Loaded SolrConfig: solrconfig.xml
   [junit4]   2> 1001089 T7210 N:127.0.0.1:55770_k_dv oass.IndexSchema.readSchema Reading Solr Schema from /configs/conf1/schema.xml
   [junit4]   2> 1001094 T7210 N:127.0.0.1:55770_k_dv oass.IndexSchema.readSchema [collection1] Schema name=test
   [junit4]   2> 1001157 T7210 N:127.0.0.1:55770_k_dv oass.IndexSchema.readSchema default search field in schema is text
   [junit4]   2> 1001158 T7210 N:127.0.0.1:55770_k_dv oass.IndexSchema.readSchema unique key field: id
   [junit4]   2> 1001159 T7210 N:127.0.0.1:55770_k_dv oass.FileExchangeRateProvider.reload Reloading exchange rates from file currency.xml
   [junit4]   2> 1001165 T7210 N:127.0.0.1:55770_k_dv oass.FileExchangeRateProvider.reload Reloading exchange rates from file currency.xml
   [junit4]   2> 1001177 T7210 N:127.0.0.1:55770_k_dv oasc.CoreContainer.create Creating SolrCore 'collection1' using configuration from collection control_collection
   [junit4]   2> 1001178 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasc.SolrCore.initDirectoryFactory solr.StandardDirectoryFactory
   [junit4]   2> 1001178 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasc.SolrCore.<init> [[collection1] ] Opening new SolrCore at [/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest 525F488434F9F656-001/control-001/cores/collection1/], dataDir=[null]
   [junit4]   2> 1001178 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasc.JmxMonitoredMap.<init> JMX monitoring is enabled. Adding Solr mbeans to JMX Server: com.sun.jmx.mbeanserver.JmxMBeanServer@671107
   [junit4]   2> 1001179 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasc.CachingDirectoryFactory.get return new directory for /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest 525F488434F9F656-001/control-001/cores/collection1/data
   [junit4]   2> 1001179 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasc.SolrCore.getNewIndexDir New index directory detected: old=null new=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest 525F488434F9F656-001/control-001/cores/collection1/data/index/
   [junit4]   2> 1001180 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasc.SolrCore.initIndex WARN [collection1] Solr index directory '/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest 525F488434F9F656-001/control-001/cores/collection1/data/index' doesn't exist. Creating new index...
   [junit4]   2> 1001180 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasc.CachingDirectoryFactory.get return new directory for /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest 525F488434F9F656-001/control-001/cores/collection1/data/index
   [junit4]   2> 1001180 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasu.RandomMergePolicy.<init> RandomMergePolicy wrapping class org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: maxMergeAtOnce=13, maxMergeAtOnceExplicit=19, maxMergedSegmentMB=94.0986328125, floorSegmentMB=0.2890625, forceMergeDeletesPctAllowed=13.701589579781455, segmentsPerTier=44.0, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.7045733024627635
   [junit4]   2> 1001205 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasc.SolrDeletionPolicy.onCommit SolrDeletionPolicy.onCommit: commits: num=1
   [junit4]   2>                commit{dir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest 525F488434F9F656-001/control-001/cores/collection1/data/index,segFN=segments_1,generation=1}
   [junit4]   2> 1001205 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasc.SolrDeletionPolicy.updateCommits newest commit generation = 1
   [junit4]   2> 1001209 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain "nodistrib"
   [junit4]   2> 1001209 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain "dedupe"
   [junit4]   2> 1001209 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasup.UpdateRequestProcessorChain.init inserting DistributedUpdateProcessorFactory into updateRequestProcessorChain "dedupe"
   [junit4]   2> 1001209 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain "stored_sig"
   [junit4]   2> 1001210 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasup.UpdateRequestProcessorChain.init inserting DistributedUpdateProcessorFactory into updateRequestProcessorChain "stored_sig"
   [junit4]   2> 1001210 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain "distrib-dup-test-chain-explicit"
   [junit4]   2> 1001210 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain "distrib-dup-test-chain-implicit"
   [junit4]   2> 1001210 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasup.UpdateRequestProcessorChain.init inserting DistributedUpdateProcessorFactory into updateRequestProcessorChain "distrib-dup-test-chain-implicit"
   [junit4]   2> 1001210 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasc.SolrCore.loadUpdateProcessorChains no updateRequestProcessorChain defined as default, creating implicit default
   [junit4]   2> 1001212 T7210 N:127.0.0.1:55770_k_dv c:collection1 oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 1001213 T7210 N:127.0.0.1:55770_k_dv c:collection1 oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 1001214 T7210 N:127.0.0.1:55770_k_dv c:collection1 oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 1001214 T7210 N:127.0.0.1:55770_k_dv c:collection1 oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 1001220 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasc.RequestHandlers.initHandlersFromConfig Registered paths: /admin/mbeans,standard,/update/csv,/update/json/docs,/admin/luke,/admin/segments,/get,/admin/system,/replication,/admin/properties,/config,/schema,/admin/plugins,/admin/logging,/update/json,/admin/threads,/admin/ping,/update,/admin/file
   [junit4]   2> 1001221 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasc.SolrCore.initStatsCache Using default statsCache cache: org.apache.solr.search.stats.LocalStatsCache
   [junit4]   2> 1001221 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasu.UpdateHandler.<init> Using UpdateLog implementation: org.apache.solr.update.UpdateLog
   [junit4]   2> 1001222 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasu.UpdateLog.init Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 maxNumLogsToKeep=10
   [junit4]   2> 1001222 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasu.CommitTracker.<init> Hard AutoCommit: disabled
   [junit4]   2> 1001222 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasu.CommitTracker.<init> Soft AutoCommit: disabled
   [junit4]   2> 1001223 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasu.RandomMergePolicy.<init> RandomMergePolicy wrapping class org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: maxMergeAtOnce=25, maxMergeAtOnceExplicit=12, maxMergedSegmentMB=45.830078125, floorSegmentMB=1.775390625, forceMergeDeletesPctAllowed=0.602195761569958, segmentsPerTier=38.0, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.0
   [junit4]   2> 1001224 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasc.SolrDeletionPolicy.onInit SolrDeletionPolicy.onInit: commits: num=1
   [junit4]   2>                commit{dir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest 525F488434F9F656-001/control-001/cores/collection1/data/index,segFN=segments_1,generation=1}
   [junit4]   2> 1001224 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasc.SolrDeletionPolicy.updateCommits newest commit generation = 1
   [junit4]   2> 1001225 T7210 N:127.0.0.1:55770_k_dv c:collection1 oass.SolrIndexSearcher.<init> Opening Searcher@2acff7[collection1] main
   [junit4]   2> 1001226 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasr.ManagedResourceStorage.newStorageIO Setting up ZooKeeper-based storage for the RestManager with znodeBase: /configs/conf1
   [junit4]   2> 1001226 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasr.ManagedResourceStorage$ZooKeeperStorageIO.configure Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
   [junit4]   2> 1001226 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasr.RestManager.init Initializing RestManager with initArgs: {}
   [junit4]   2> 1001226 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasr.ManagedResourceStorage.load Reading _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 1001227 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasr.ManagedResourceStorage$ZooKeeperStorageIO.openInputStream No data found for znode /configs/conf1/_rest_managed.json
   [junit4]   2> 1001227 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasr.ManagedResourceStorage.load Loaded null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 1001227 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasr.RestManager.init Initializing 0 registered ManagedResources
   [junit4]   2> 1001227 T7210 N:127.0.0.1:55770_k_dv c:collection1 oash.ReplicationHandler.inform Commits will be reserved for  10000
   [junit4]   2> 1001228 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasc.ZkController.getConfDirListeners watch zkdir /configs/conf1
   [junit4]   2> 1001228 T7211 N:127.0.0.1:55770_k_dv c:collection1 oasc.SolrCore.registerSearcher [collection1] Registered new searcher Searcher@2acff7[collection1] main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 1001228 T7210 N:127.0.0.1:55770_k_dv c:collection1 oasc.CoreContainer.registerCore registering core: collection1
   [junit4]   2> 1001229 T7214 N:127.0.0.1:55770_k_dv C:control_collection S:shard1 c:collection1 oasc.ZkController.register Register replica - core:collection1 address:http://127.0.0.1:55770/k_dv collection:control_collection shard:shard1
   [junit4]   2> 1001229 T7180 N:127.0.0.1:55770_k_dv oass.SolrDispatchFilter.init user.dir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0
   [junit4]   2> 1001230 T7180 N:127.0.0.1:55770_k_dv oass.SolrDispatchFilter.init SolrDispatchFilter.init() done
   [junit4]   2> 1001233 T7214 N:127.0.0.1:55770_k_dv C:control_collection S:shard1 c:collection1 oasc.ShardLeaderElectionContext.runLeaderProcess Running the leader process for shard shard1
   [junit4]   2> 1001235 T7206 N:127.0.0.1:55770_k_dv oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path /overseer/queue state SyncConnected
   [junit4]   2> 1001236 T7214 N:127.0.0.1:55770_k_dv C:control_collection S:shard1 c:collection1 oasc.ShardLeaderElectionContext.waitForReplicasToComeUp Enough replicas found to continue.
   [junit4]   2> 1001236 T7214 N:127.0.0.1:55770_k_dv C:control_collection S:shard1 c:collection1 oasc.ShardLeaderElectionContext.runLeaderProcess I may be the new leader - try and sync
   [junit4]   2> ASYNC  NEW_CORE C18696 name=collection1 org.apache.solr.core.SolrCore@1273706 url=http://127.0.0.1:55770/k_dv/collection1 node=127.0.0.1:55770_k_dv C18696_STATE=coll:control_collection core:collection1 props:{core=collection1, base_url=http://127.0.0.1:55770/k_dv, node_name=127.0.0.1:55770_k_dv, state=down}
   [junit4]   2> 1001236 T7214 N:127.0.0.1:55770_k_dv C:control_collection S:shard1 c:collection1 C18696 oasc.SyncStrategy.sync Sync replicas to http://127.0.0.1:55770/k_dv/collection1/
   [junit4]   2> 1001236 T7180 oasc.ChaosMonkey.monkeyLog monkey: init - expire sessions:false cause connection loss:false
   [junit4]   2> 1001236 T7207 N:127.0.0.1:55770_k_dv oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
   [junit4]   2>          "operation":"leader",
   [junit4]   2>          "shard":"shard1",
   [junit4]   2>          "collection":"control_collection"} current state version: 1
   [junit4]   2> 1001236 T7214 N:127.0.0.1:55770_k_dv C:control_collection S:shard1 c:collection1 C18696 oasc.SyncStrategy.syncReplicas Sync Success - now sync replicas to me
   [junit4]   2> 1001237 T7214 N:127.0.0.1:55770_k_dv C:control_collection S:shard1 c:collection1 C18696 oasc.SyncStrategy.syncToMe http://127.0.0.1:55770/k_dv/collection1/ has no replicas
   [junit4]   2> 1001237 T7214 N:127.0.0.1:55770_k_dv C:control_collection S:shard1 c:collection1 oasc.ShardLeaderElectionContext.runLeaderProcess I am the new leader: http://127.0.0.1:55770/k_dv/collection1/ shard1
   [junit4]   2> 1001241 T7206 N:127.0.0.1:55770_k_dv oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path /overseer/queue state SyncConnected
   [junit4]   2> 1001242 T7207 N:127.0.0.1:55770_k_dv oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
   [junit4]   2>          "operation":"leader",
   [junit4]   2>          "shard":"shard1",
   [junit4]   2>          "collection":"control_collection",
   [junit4]   2>          "base_url":"http://127.0.0.1:55770/k_dv",
   [junit4]   2>          "core":"collection1",
   [junit4]   2>          "state":"active"} current state version: 1
   [junit4]   2> 1001297 T7180 oas.SolrTestCaseJ4.writeCoreProperties Writing 
core.properties file to 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-1-001/cores/collection1
   [junit4]   2> 1001298 T7180 oasc.AbstractFullDistribZkTestBase.createJettys 
create jetty 1 in directory 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-1-001
   [junit4]   2> 1001299 T7180 oejs.Server.doStart jetty-8.1.10.v20130312
   [junit4]   2> 1001300 T7180 oejs.AbstractConnector.doStart Started 
[email protected]:42762
   [junit4]   2> 1001300 T7180 oascse.JettySolrRunner$1.lifeCycleStarted Jetty 
properties: 
{solr.data.dir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/tempDir-001/jetty1, solrconfig=solrconfig.xml, 
hostContext=/k_dv, hostPort=42762, 
coreRootDirectory=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-1-001/cores}
   [junit4]   2> 1001300 T7180 oass.SolrDispatchFilter.init 
SolrDispatchFilter.init()sun.misc.Launcher$AppClassLoader@1764bce
   [junit4]   2> 1001300 T7180 oasc.SolrResourceLoader.<init> new 
SolrResourceLoader for directory: 
'/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-1-001/'
   [junit4]   2> 1001314 T7180 oasc.SolrXmlConfig.fromFile Loading container 
configuration from 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-1-001/solr.xml
   [junit4]   2> 1001318 T7180 oasc.CorePropertiesLocator.<init> Config-defined 
core root directory: 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-1-001/cores
   [junit4]   2> 1001319 T7180 oasc.CoreContainer.<init> New CoreContainer 
20021450
   [junit4]   2> 1001319 T7180 oasc.CoreContainer.load Loading cores into 
CoreContainer 
[instanceDir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-1-001/]
   [junit4]   2> 1001319 T7180 oasc.CoreContainer.load loading shared library: 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-1-001/lib
   [junit4]   2> 1001319 T7180 oasc.SolrResourceLoader.addToClassLoader WARN 
Can't find (or read) directory to add to classloader: lib (resolved as: 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-1-001/lib).
   [junit4]   2> 1001324 T7180 oashc.HttpShardHandlerFactory.init created with 
socketTimeout : 90000,urlScheme : ,connTimeout : 15000,maxConnectionsPerHost : 
20,maxConnections : 10000,corePoolSize : 0,maximumPoolSize : 
2147483647,maxThreadIdleTime : 5,sizeOfQueue : -1,fairnessPolicy : 
false,useRetries : false,
   [junit4]   2> 1001326 T7180 oasu.UpdateShardHandler.<init> Creating 
UpdateShardHandler HTTP client with params: 
socketTimeout=340000&connTimeout=45000&retry=true
   [junit4]   2> 1001326 T7180 oasl.LogWatcher.createWatcher SLF4J impl is 
org.slf4j.impl.Log4jLoggerFactory
   [junit4]   2> 1001326 T7180 oasl.LogWatcher.newRegisteredLogWatcher 
Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
   [junit4]   2> 1001327 T7180 oasc.CoreContainer.load Node Name: 127.0.0.1
   [junit4]   2> 1001327 T7180 oasc.ZkContainer.initZooKeeper Zookeeper 
client=127.0.0.1:41771/solr
   [junit4]   2> 1001327 T7180 oasc.ZkController.checkChrootPath zkHost 
includes chroot
   [junit4]   2> 1001394 T7214 N:127.0.0.1:55770_k_dv C:control_collection 
S:shard1 c:collection1 oasc.ZkController.register We are 
http://127.0.0.1:55770/k_dv/collection1/ and leader is 
http://127.0.0.1:55770/k_dv/collection1/
   [junit4]   2> 1001394 T7214 N:127.0.0.1:55770_k_dv C:control_collection 
S:shard1 c:collection1 oasc.ZkController.register No LogReplay needed for 
core=collection1 baseURL=http://127.0.0.1:55770/k_dv
   [junit4]   2> 1001394 T7214 N:127.0.0.1:55770_k_dv C:control_collection 
S:shard1 c:collection1 oasc.ZkController.checkRecovery I am the leader, no 
recovery necessary
   [junit4]   2> 1001395 T7214 N:127.0.0.1:55770_k_dv C:control_collection 
S:shard1 c:collection1 oasc.ZkController.publish publishing core=collection1 
state=active collection=control_collection
   [junit4]   2> 1001395 T7214 N:127.0.0.1:55770_k_dv C:control_collection 
S:shard1 c:collection1 oasc.ZkController.publish numShards not found on 
descriptor - reading it from system property
   [junit4]   2> 1001396 T7206 N:127.0.0.1:55770_k_dv 
oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path 
/overseer/queue state SyncConnected
   [junit4]   2> 1001397 T7207 N:127.0.0.1:55770_k_dv 
oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
   [junit4]   2>          "core":"collection1",
   [junit4]   2>          "core_node_name":"core_node1",
   [junit4]   2>          "roles":null,
   [junit4]   2>          "base_url":"http://127.0.0.1:55770/k_dv",
   [junit4]   2>          "node_name":"127.0.0.1:55770_k_dv",
   [junit4]   2>          "numShards":"1",
   [junit4]   2>          "state":"active",
   [junit4]   2>          "shard":"shard1",
   [junit4]   2>          "collection":"control_collection",
   [junit4]   2>          "operation":"state"} current state version: 2
   [junit4]   2> 1001397 T7207 N:127.0.0.1:55770_k_dv 
oasco.ReplicaMutator.updateState Update state numShards=1 message={
   [junit4]   2>          "core":"collection1",
   [junit4]   2>          "core_node_name":"core_node1",
   [junit4]   2>          "roles":null,
   [junit4]   2>          "base_url":"http://127.0.0.1:55770/k_dv",
   [junit4]   2>          "node_name":"127.0.0.1:55770_k_dv",
   [junit4]   2>          "numShards":"1",
   [junit4]   2>          "state":"active",
   [junit4]   2>          "shard":"shard1",
   [junit4]   2>          "collection":"control_collection",
   [junit4]   2>          "operation":"state"}
   [junit4]   2> 1002338 T7180 N:127.0.0.1:42762_k_dv 
oasc.ZkController.createEphemeralLiveNode Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:42762_k_dv
   [junit4]   2> 1002340 T7180 N:127.0.0.1:42762_k_dv oasc.Overseer.close 
Overseer (id=null) closing
   [junit4]   2> 1002342 T7180 N:127.0.0.1:42762_k_dv 
oasc.CorePropertiesLocator.discover Looking for core definitions underneath 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-1-001/cores
   [junit4]   2> 1002342 T7180 N:127.0.0.1:42762_k_dv 
oasc.CoreDescriptor.<init> CORE DESCRIPTOR: {name=collection1, 
config=solrconfig.xml, transient=false, schema=schema.xml, loadOnStartup=true, 
instanceDir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-1-001/cores/collection1, collection=collection1, 
absoluteInstDir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-1-001/cores/collection1/, coreNodeName=, 
dataDir=data/, shard=}
   [junit4]   2> 1002343 T7180 N:127.0.0.1:42762_k_dv 
oasc.CorePropertiesLocator.discoverUnder Found core collection1 in 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-1-001/cores/collection1/
   [junit4]   2> 1002343 T7180 N:127.0.0.1:42762_k_dv 
oasc.CorePropertiesLocator.discover Found 1 core definitions
   [junit4]   2> 1002344 T7233 N:127.0.0.1:42762_k_dv C:collection1 
c:collection1 oasc.ZkController.publish publishing core=collection1 state=down 
collection=collection1
   [junit4]   2> 1002344 T7233 N:127.0.0.1:42762_k_dv C:collection1 
c:collection1 oasc.ZkController.publish numShards not found on descriptor - 
reading it from system property
   [junit4]   2> 1002344 T7206 N:127.0.0.1:55770_k_dv 
oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path 
/overseer/queue state SyncConnected
   [junit4]   2> 1002344 T7233 N:127.0.0.1:42762_k_dv 
oasc.ZkController.waitForCoreNodeName look for our core node name
   [junit4]   2> 1002346 T7207 N:127.0.0.1:55770_k_dv 
oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
   [junit4]   2>          "core":"collection1",
   [junit4]   2>          "roles":null,
   [junit4]   2>          "base_url":"http://127.0.0.1:42762/k_dv",
   [junit4]   2>          "node_name":"127.0.0.1:42762_k_dv",
   [junit4]   2>          "numShards":"1",
   [junit4]   2>          "state":"down",
   [junit4]   2>          "shard":null,
   [junit4]   2>          "collection":"collection1",
   [junit4]   2>          "operation":"state"} current state version: 3
   [junit4]   2> 1002346 T7207 N:127.0.0.1:55770_k_dv 
oasco.ReplicaMutator.updateState Update state numShards=1 message={
   [junit4]   2>          "core":"collection1",
   [junit4]   2>          "roles":null,
   [junit4]   2>          "base_url":"http://127.0.0.1:42762/k_dv",
   [junit4]   2>          "node_name":"127.0.0.1:42762_k_dv",
   [junit4]   2>          "numShards":"1",
   [junit4]   2>          "state":"down",
   [junit4]   2>          "shard":null,
   [junit4]   2>          "collection":"collection1",
   [junit4]   2>          "operation":"state"}
   [junit4]   2> 1002347 T7207 N:127.0.0.1:55770_k_dv 
oasco.ClusterStateMutator.createCollection building a new cName: collection1
   [junit4]   2> 1002347 T7207 N:127.0.0.1:55770_k_dv 
oasco.ReplicaMutator.updateState Assigning new node to shard shard=shard1
   [junit4]   2> 1003345 T7233 N:127.0.0.1:42762_k_dv 
oasc.ZkController.waitForShardId waiting to find shard id in clusterstate for 
collection1
   [junit4]   2> 1003345 T7233 N:127.0.0.1:42762_k_dv 
oasc.ZkController.createCollectionZkNode Check for collection zkNode:collection1
   [junit4]   2> 1003347 T7233 N:127.0.0.1:42762_k_dv 
oasc.ZkController.createCollectionZkNode Collection zkNode exists
   [junit4]   2> 1003347 T7233 N:127.0.0.1:42762_k_dv 
oasc.SolrResourceLoader.<init> new SolrResourceLoader for directory: 
'/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-1-001/cores/collection1/'
   [junit4]   2> 1003363 T7233 N:127.0.0.1:42762_k_dv oasc.Config.<init> loaded 
config solrconfig.xml with version 0 
   [junit4]   2> 1003369 T7233 N:127.0.0.1:42762_k_dv 
oasc.SolrConfig.refreshRequestParams current version of requestparams : -1
   [junit4]   2> 1003372 T7233 N:127.0.0.1:42762_k_dv oasc.SolrConfig.<init> 
Using Lucene MatchVersion: 5.2.0
   [junit4]   2> 1003381 T7233 N:127.0.0.1:42762_k_dv oasc.SolrConfig.<init> 
Loaded SolrConfig: solrconfig.xml
   [junit4]   2> 1003382 T7233 N:127.0.0.1:42762_k_dv 
oass.IndexSchema.readSchema Reading Solr Schema from /configs/conf1/schema.xml
   [junit4]   2> 1003387 T7233 N:127.0.0.1:42762_k_dv 
oass.IndexSchema.readSchema [collection1] Schema name=test
   [junit4]   2> 1003462 T7233 N:127.0.0.1:42762_k_dv 
oass.IndexSchema.readSchema default search field in schema is text
   [junit4]   2> 1003464 T7233 N:127.0.0.1:42762_k_dv 
oass.IndexSchema.readSchema unique key field: id
   [junit4]   2> 1003465 T7233 N:127.0.0.1:42762_k_dv 
oass.FileExchangeRateProvider.reload Reloading exchange rates from file 
currency.xml
   [junit4]   2> 1003467 T7233 N:127.0.0.1:42762_k_dv 
oass.FileExchangeRateProvider.reload Reloading exchange rates from file 
currency.xml
   [junit4]   2> 1003479 T7233 N:127.0.0.1:42762_k_dv oasc.CoreContainer.create 
Creating SolrCore 'collection1' using configuration from collection collection1
   [junit4]   2> 1003480 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasc.SolrCore.initDirectoryFactory solr.StandardDirectoryFactory
   [junit4]   2> 1003480 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasc.SolrCore.<init> [[collection1] ] Opening new SolrCore at 
[/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-1-001/cores/collection1/], dataDir=[null]
   [junit4]   2> 1003480 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasc.JmxMonitoredMap.<init> JMX monitoring is enabled. Adding Solr mbeans to 
JMX Server: com.sun.jmx.mbeanserver.JmxMBeanServer@671107
   [junit4]   2> 1003481 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasc.CachingDirectoryFactory.get return new directory for 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-1-001/cores/collection1/data
   [junit4]   2> 1003481 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasc.SolrCore.getNewIndexDir New index directory detected: old=null 
new=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-1-001/cores/collection1/data/index/
   [junit4]   2> 1003481 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasc.SolrCore.initIndex WARN [collection1] Solr index directory 
'/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-1-001/cores/collection1/data/index' doesn't exist. 
Creating new index...
   [junit4]   2> 1003482 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasc.CachingDirectoryFactory.get return new directory for 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-1-001/cores/collection1/data/index
   [junit4]   2> 1003482 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasu.RandomMergePolicy.<init> RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=13, maxMergeAtOnceExplicit=19, maxMergedSegmentMB=94.0986328125, 
floorSegmentMB=0.2890625, forceMergeDeletesPctAllowed=13.701589579781455, 
segmentsPerTier=44.0, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.7045733024627635]
   [junit4]   2> 1003512 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasc.SolrDeletionPolicy.onCommit SolrDeletionPolicy.onCommit: commits: num=1
   [junit4]   2>                
commit{dir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 
525F488434F9F656-001/shard-1-001/cores/collection1/data/index,segFN=segments_1,generation=1}
   [junit4]   2> 1003512 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasc.SolrDeletionPolicy.updateCommits newest commit generation = 1
   [junit4]   2> 1003516 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
"nodistrib"
   [junit4]   2> 1003517 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
"dedupe"
   [junit4]   2> 1003517 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasup.UpdateRequestProcessorChain.init inserting 
DistributedUpdateProcessorFactory into updateRequestProcessorChain "dedupe"
   [junit4]   2> 1003517 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
"stored_sig"
   [junit4]   2> 1003518 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasup.UpdateRequestProcessorChain.init inserting 
DistributedUpdateProcessorFactory into updateRequestProcessorChain "stored_sig"
   [junit4]   2> 1003518 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
"distrib-dup-test-chain-explicit"
   [junit4]   2> 1003518 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
"distrib-dup-test-chain-implicit"
   [junit4]   2> 1003518 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasup.UpdateRequestProcessorChain.init inserting 
DistributedUpdateProcessorFactory into updateRequestProcessorChain 
"distrib-dup-test-chain-implicit"
   [junit4]   2> 1003518 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasc.SolrCore.loadUpdateProcessorChains no updateRequestProcessorChain defined 
as default, creating implicit default
   [junit4]   2> 1003520 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 1003521 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 1003522 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 1003523 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 1003530 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasc.RequestHandlers.initHandlersFromConfig Registered paths: 
/admin/mbeans,standard,/update/csv,/update/json/docs,/admin/luke,/admin/segments,/get,/admin/system,/replication,/admin/properties,/config,/schema,/admin/plugins,/admin/logging,/update/json,/admin/threads,/admin/ping,/update,/admin/file
   [junit4]   2> 1003531 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasc.SolrCore.initStatsCache Using default statsCache cache: 
org.apache.solr.search.stats.LocalStatsCache
   [junit4]   2> 1003531 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasu.UpdateHandler.<init> Using UpdateLog implementation: 
org.apache.solr.update.UpdateLog
   [junit4]   2> 1003531 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasu.UpdateLog.init Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH 
numRecordsToKeep=100 maxNumLogsToKeep=10
   [junit4]   2> 1003532 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasu.CommitTracker.<init> Hard AutoCommit: disabled
   [junit4]   2> 1003532 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasu.CommitTracker.<init> Soft AutoCommit: disabled
   [junit4]   2> 1003533 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasu.RandomMergePolicy.<init> RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=25, maxMergeAtOnceExplicit=12, maxMergedSegmentMB=45.830078125, 
floorSegmentMB=1.775390625, forceMergeDeletesPctAllowed=0.602195761569958, 
segmentsPerTier=38.0, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.0]
   [junit4]   2> 1003534 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasc.SolrDeletionPolicy.onInit SolrDeletionPolicy.onInit: commits: num=1
   [junit4]   2>                
commit{dir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 
525F488434F9F656-001/shard-1-001/cores/collection1/data/index,segFN=segments_1,generation=1}
   [junit4]   2> 1003535 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasc.SolrDeletionPolicy.updateCommits newest commit generation = 1
   [junit4]   2> 1003535 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oass.SolrIndexSearcher.<init> Opening Searcher@12ab141[collection1] main
   [junit4]   2> 1003536 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasr.ManagedResourceStorage.newStorageIO Setting up ZooKeeper-based storage for 
the RestManager with znodeBase: /configs/conf1
   [junit4]   2> 1003536 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasr.ManagedResourceStorage$ZooKeeperStorageIO.configure Configured 
ZooKeeperStorageIO with znodeBase: /configs/conf1
   [junit4]   2> 1003537 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasr.RestManager.init Initializing RestManager with initArgs: {}
   [junit4]   2> 1003537 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasr.ManagedResourceStorage.load Reading _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 1003538 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasr.ManagedResourceStorage$ZooKeeperStorageIO.openInputStream No data found 
for znode /configs/conf1/_rest_managed.json
   [junit4]   2> 1003538 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasr.ManagedResourceStorage.load Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 1003538 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasr.RestManager.init Initializing 0 registered ManagedResources
   [junit4]   2> 1003539 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oash.ReplicationHandler.inform Commits will be reserved for  10000
   [junit4]   2> 1003540 T7234 N:127.0.0.1:42762_k_dv c:collection1 
oasc.SolrCore.registerSearcher [collection1] Registered new searcher 
Searcher@12ab141[collection1] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 1003540 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasc.ZkController.getConfDirListeners watch zkdir /configs/conf1
   [junit4]   2> 1003541 T7233 N:127.0.0.1:42762_k_dv c:collection1 
oasc.CoreContainer.registerCore registering core: collection1
   [junit4]   2> 1003541 T7237 N:127.0.0.1:42762_k_dv C:collection1 S:shard1 
c:collection1 oasc.ZkController.register Register replica - core:collection1 
address:http://127.0.0.1:42762/k_dv collection:collection1 shard:shard1
   [junit4]   2> 1003542 T7180 N:127.0.0.1:42762_k_dv 
oass.SolrDispatchFilter.init 
user.dir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0
   [junit4]   2> 1003542 T7180 N:127.0.0.1:42762_k_dv 
oass.SolrDispatchFilter.init SolrDispatchFilter.init() done
   [junit4]   2> 1003545 T7237 N:127.0.0.1:42762_k_dv C:collection1 S:shard1 
c:collection1 oasc.ShardLeaderElectionContext.runLeaderProcess Running the 
leader process for shard shard1
   [junit4]   2> 1003547 T7206 N:127.0.0.1:55770_k_dv 
oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path 
/overseer/queue state SyncConnected
   [junit4]   2> 1003547 T7237 N:127.0.0.1:42762_k_dv C:collection1 S:shard1 
c:collection1 oasc.ShardLeaderElectionContext.waitForReplicasToComeUp Enough 
replicas found to continue.
   [junit4]   2> 1003548 T7237 N:127.0.0.1:42762_k_dv C:collection1 S:shard1 
c:collection1 oasc.ShardLeaderElectionContext.runLeaderProcess I may be the new 
leader - try and sync
   [junit4]   2> 1003548 T7207 N:127.0.0.1:55770_k_dv 
oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
   [junit4]   2>          "operation":"leader",
   [junit4]   2>          "shard":"shard1",
   [junit4]   2>          "collection":"collection1"} current state version: 4
   [junit4]   2> ASYNC  NEW_CORE C18697 name=collection1 
org.apache.solr.core.SolrCore@45e108 
url=http://127.0.0.1:42762/k_dv/collection1 node=127.0.0.1:42762_k_dv 
C18697_STATE=coll:collection1 core:collection1 props:{core=collection1, 
base_url=http://127.0.0.1:42762/k_dv, node_name=127.0.0.1:42762_k_dv, 
state=down}
   [junit4]   2> 1003548 T7237 N:127.0.0.1:42762_k_dv C:collection1 S:shard1 
c:collection1 C18697 oasc.SyncStrategy.sync Sync replicas to 
http://127.0.0.1:42762/k_dv/collection1/
   [junit4]   2> 1003548 T7237 N:127.0.0.1:42762_k_dv C:collection1 S:shard1 
c:collection1 C18697 oasc.SyncStrategy.syncReplicas Sync Success - now sync 
replicas to me
   [junit4]   2> 1003549 T7237 N:127.0.0.1:42762_k_dv C:collection1 S:shard1 
c:collection1 C18697 oasc.SyncStrategy.syncToMe 
http://127.0.0.1:42762/k_dv/collection1/ has no replicas
   [junit4]   2> 1003549 T7237 N:127.0.0.1:42762_k_dv C:collection1 S:shard1 
c:collection1 oasc.ShardLeaderElectionContext.runLeaderProcess I am the new 
leader: http://127.0.0.1:42762/k_dv/collection1/ shard1
   [junit4]   2> 1003558 T7206 N:127.0.0.1:55770_k_dv 
oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path 
/overseer/queue state SyncConnected
   [junit4]   2> 1003559 T7207 N:127.0.0.1:55770_k_dv 
oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
   [junit4]   2>          "operation":"leader",
   [junit4]   2>          "shard":"shard1",
   [junit4]   2>          "collection":"collection1",
   [junit4]   2>          "base_url":"http://127.0.0.1:42762/k_dv",
   [junit4]   2>          "core":"collection1",
   [junit4]   2>          "state":"active"} current state version: 4
   [junit4]   2> 1003602 T7180 oas.SolrTestCaseJ4.writeCoreProperties Writing 
core.properties file to 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-2-001/cores/collection1
   [junit4]   2> 1003603 T7180 oasc.AbstractFullDistribZkTestBase.createJettys 
create jetty 2 in directory 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-2-001
   [junit4]   2> 1003604 T7180 oejs.Server.doStart jetty-8.1.10.v20130312
   [junit4]   2> 1003605 T7180 oejs.AbstractConnector.doStart Started 
[email protected]:33886
   [junit4]   2> 1003605 T7180 oascse.JettySolrRunner$1.lifeCycleStarted Jetty 
properties: 
{solr.data.dir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/tempDir-001/jetty2, solrconfig=solrconfig.xml, 
hostContext=/k_dv, hostPort=33886, 
coreRootDirectory=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-2-001/cores}
   [junit4]   2> 1003606 T7180 oass.SolrDispatchFilter.init 
SolrDispatchFilter.init()sun.misc.Launcher$AppClassLoader@1764bce
   [junit4]   2> 1003606 T7180 oasc.SolrResourceLoader.<init> new 
SolrResourceLoader for directory: 
'/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-2-001/'
   [junit4]   2> 1003620 T7180 oasc.SolrXmlConfig.fromFile Loading container 
configuration from 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-2-001/solr.xml
   [junit4]   2> 1003625 T7180 oasc.CorePropertiesLocator.<init> Config-defined 
core root directory: 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-2-001/cores
   [junit4]   2> 1003625 T7180 oasc.CoreContainer.<init> New CoreContainer 
5814174
   [junit4]   2> 1003625 T7180 oasc.CoreContainer.load Loading cores into 
CoreContainer 
[instanceDir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-2-001/]
   [junit4]   2> 1003626 T7180 oasc.CoreContainer.load loading shared library: 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-2-001/lib
   [junit4]   2> 1003626 T7180 oasc.SolrResourceLoader.addToClassLoader WARN 
Can't find (or read) directory to add to classloader: lib (resolved as: 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-2-001/lib).
   [junit4]   2> 1003631 T7180 oashc.HttpShardHandlerFactory.init created with 
socketTimeout : 90000,urlScheme : ,connTimeout : 15000,maxConnectionsPerHost : 
20,maxConnections : 10000,corePoolSize : 0,maximumPoolSize : 
2147483647,maxThreadIdleTime : 5,sizeOfQueue : -1,fairnessPolicy : 
false,useRetries : false,
   [junit4]   2> 1003632 T7180 oasu.UpdateShardHandler.<init> Creating 
UpdateShardHandler HTTP client with params: 
socketTimeout=340000&connTimeout=45000&retry=true
   [junit4]   2> 1003633 T7180 oasl.LogWatcher.createWatcher SLF4J impl is 
org.slf4j.impl.Log4jLoggerFactory
   [junit4]   2> 1003633 T7180 oasl.LogWatcher.newRegisteredLogWatcher 
Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
   [junit4]   2> 1003634 T7180 oasc.CoreContainer.load Node Name: 127.0.0.1
   [junit4]   2> 1003634 T7180 oasc.ZkContainer.initZooKeeper Zookeeper 
client=127.0.0.1:41771/solr
   [junit4]   2> 1003634 T7180 oasc.ZkController.checkChrootPath zkHost 
includes chroot
   [junit4]   2> 1003703 T7237 N:127.0.0.1:42762_k_dv C:collection1 S:shard1 
c:collection1 oasc.ZkController.register We are 
http://127.0.0.1:42762/k_dv/collection1/ and leader is 
http://127.0.0.1:42762/k_dv/collection1/
   [junit4]   2> 1003703 T7237 N:127.0.0.1:42762_k_dv C:collection1 S:shard1 
c:collection1 oasc.ZkController.register No LogReplay needed for 
core=collection1 baseURL=http://127.0.0.1:42762/k_dv
   [junit4]   2> 1003703 T7237 N:127.0.0.1:42762_k_dv C:collection1 S:shard1 
c:collection1 oasc.ZkController.checkRecovery I am the leader, no recovery 
necessary
   [junit4]   2> 1003703 T7237 N:127.0.0.1:42762_k_dv C:collection1 S:shard1 
c:collection1 oasc.ZkController.publish publishing core=collection1 
state=active collection=collection1
   [junit4]   2> 1003704 T7237 N:127.0.0.1:42762_k_dv C:collection1 S:shard1 
c:collection1 oasc.ZkController.publish numShards not found on descriptor - 
reading it from system property
   [junit4]   2> 1003705 T7206 N:127.0.0.1:55770_k_dv 
oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path 
/overseer/queue state SyncConnected
   [junit4]   2> 1003705 T7207 N:127.0.0.1:55770_k_dv 
oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
   [junit4]   2>          "core":"collection1",
   [junit4]   2>          "core_node_name":"core_node1",
   [junit4]   2>          "roles":null,
   [junit4]   2>          "base_url":"http://127.0.0.1:42762/k_dv",
   [junit4]   2>          "node_name":"127.0.0.1:42762_k_dv",
   [junit4]   2>          "numShards":"1",
   [junit4]   2>          "state":"active",
   [junit4]   2>          "shard":"shard1",
   [junit4]   2>          "collection":"collection1",
   [junit4]   2>          "operation":"state"} current state version: 5
   [junit4]   2> 1003706 T7207 N:127.0.0.1:55770_k_dv 
oasco.ReplicaMutator.updateState Update state numShards=1 message={
   [junit4]   2>          "core":"collection1",
   [junit4]   2>          "core_node_name":"core_node1",
   [junit4]   2>          "roles":null,
   [junit4]   2>          "base_url":"http://127.0.0.1:42762/k_dv",
   [junit4]   2>          "node_name":"127.0.0.1:42762_k_dv",
   [junit4]   2>          "numShards":"1",
   [junit4]   2>          "state":"active",
   [junit4]   2>          "shard":"shard1",
   [junit4]   2>          "collection":"collection1",
   [junit4]   2>          "operation":"state"}
   [junit4]   2> 1004646 T7180 N:127.0.0.1:33886_k_dv 
oasc.ZkController.createEphemeralLiveNode Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:33886_k_dv
   [junit4]   2> 1004649 T7180 N:127.0.0.1:33886_k_dv oasc.Overseer.close 
Overseer (id=null) closing
   [junit4]   2> 1004650 T7180 N:127.0.0.1:33886_k_dv 
oasc.CorePropertiesLocator.discover Looking for core definitions underneath 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-2-001/cores
   [junit4]   2> 1004651 T7180 N:127.0.0.1:33886_k_dv 
oasc.CoreDescriptor.<init> CORE DESCRIPTOR: {name=collection1, 
config=solrconfig.xml, transient=false, schema=schema.xml, loadOnStartup=true, 
instanceDir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-2-001/cores/collection1, collection=collection1, 
absoluteInstDir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-2-001/cores/collection1/, coreNodeName=, 
dataDir=data/, shard=}
   [junit4]   2> 1004652 T7180 N:127.0.0.1:33886_k_dv 
oasc.CorePropertiesLocator.discoverUnder Found core collection1 in 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-2-001/cores/collection1/
   [junit4]   2> 1004652 T7180 N:127.0.0.1:33886_k_dv 
oasc.CorePropertiesLocator.discover Found 1 core definitions
   [junit4]   2> 1004653 T7253 N:127.0.0.1:33886_k_dv C:collection1 
c:collection1 oasc.ZkController.publish publishing core=collection1 state=down 
collection=collection1
   [junit4]   2> 1004653 T7253 N:127.0.0.1:33886_k_dv C:collection1 
c:collection1 oasc.ZkController.publish numShards not found on descriptor - 
reading it from system property
   [junit4]   2> 1004654 T7253 N:127.0.0.1:33886_k_dv 
oasc.ZkController.waitForCoreNodeName look for our core node name
   [junit4]   2> 1004654 T7206 N:127.0.0.1:55770_k_dv 
oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path 
/overseer/queue state SyncConnected
   [junit4]   2> 1004655 T7207 N:127.0.0.1:55770_k_dv 
oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
   [junit4]   2>          "core":"collection1",
   [junit4]   2>          "roles":null,
   [junit4]   2>          "base_url":"http://127.0.0.1:33886/k_dv",
   [junit4]   2>          "node_name":"127.0.0.1:33886_k_dv",
   [junit4]   2>          "numShards":"1",
   [junit4]   2>          "state":"down",
   [junit4]   2>          "shard":null,
   [junit4]   2>          "collection":"collection1",
   [junit4]   2>          "operation":"state"} current state version: 6
   [junit4]   2> 1004655 T7207 N:127.0.0.1:55770_k_dv 
oasco.ReplicaMutator.updateState Update state numShards=1 message={
   [junit4]   2>          "core":"collection1",
   [junit4]   2>          "roles":null,
   [junit4]   2>          "base_url":"http://127.0.0.1:33886/k_dv",
   [junit4]   2>          "node_name":"127.0.0.1:33886_k_dv",
   [junit4]   2>          "numShards":"1",
   [junit4]   2>          "state":"down",
   [junit4]   2>          "shard":null,
   [junit4]   2>          "collection":"collection1",
   [junit4]   2>          "operation":"state"}
   [junit4]   2> 1004656 T7207 N:127.0.0.1:55770_k_dv 
oasco.ReplicaMutator.updateState Collection already exists with numShards=1
   [junit4]   2> 1004656 T7207 N:127.0.0.1:55770_k_dv 
oasco.ReplicaMutator.updateState Assigning new node to shard shard=shard1
   [junit4]   2> 1005654 T7253 N:127.0.0.1:33886_k_dv 
oasc.ZkController.waitForShardId waiting to find shard id in clusterstate for 
collection1
   [junit4]   2> 1005654 T7253 N:127.0.0.1:33886_k_dv 
oasc.ZkController.createCollectionZkNode Check for collection zkNode:collection1
   [junit4]   2> 1005655 T7253 N:127.0.0.1:33886_k_dv 
oasc.ZkController.createCollectionZkNode Collection zkNode exists
   [junit4]   2> 1005656 T7253 N:127.0.0.1:33886_k_dv 
oasc.SolrResourceLoader.<init> new SolrResourceLoader for directory: 
'/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-2-001/cores/collection1/'
   [junit4]   2> 1005670 T7253 N:127.0.0.1:33886_k_dv oasc.Config.<init> loaded 
config solrconfig.xml with version 0 
   [junit4]   2> 1005675 T7253 N:127.0.0.1:33886_k_dv 
oasc.SolrConfig.refreshRequestParams current version of requestparams : -1
   [junit4]   2> 1005679 T7253 N:127.0.0.1:33886_k_dv oasc.SolrConfig.<init> 
Using Lucene MatchVersion: 5.2.0
   [junit4]   2> 1005687 T7253 N:127.0.0.1:33886_k_dv oasc.SolrConfig.<init> 
Loaded SolrConfig: solrconfig.xml
   [junit4]   2> 1005688 T7253 N:127.0.0.1:33886_k_dv 
oass.IndexSchema.readSchema Reading Solr Schema from /configs/conf1/schema.xml
   [junit4]   2> 1005692 T7253 N:127.0.0.1:33886_k_dv 
oass.IndexSchema.readSchema [collection1] Schema name=test
   [junit4]   2> 1005771 T7253 N:127.0.0.1:33886_k_dv 
oass.IndexSchema.readSchema default search field in schema is text
   [junit4]   2> 1005773 T7253 N:127.0.0.1:33886_k_dv 
oass.IndexSchema.readSchema unique key field: id
   [junit4]   2> 1005774 T7253 N:127.0.0.1:33886_k_dv 
oass.FileExchangeRateProvider.reload Reloading exchange rates from file 
currency.xml
   [junit4]   2> 1005776 T7253 N:127.0.0.1:33886_k_dv 
oass.FileExchangeRateProvider.reload Reloading exchange rates from file 
currency.xml
   [junit4]   2> 1005787 T7253 N:127.0.0.1:33886_k_dv oasc.CoreContainer.create 
Creating SolrCore 'collection1' using configuration from collection collection1
   [junit4]   2> 1005788 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasc.SolrCore.initDirectoryFactory solr.StandardDirectoryFactory
   [junit4]   2> 1005788 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasc.SolrCore.<init> [[collection1] ] Opening new SolrCore at 
[/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-2-001/cores/collection1/], dataDir=[null]
   [junit4]   2> 1005789 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasc.JmxMonitoredMap.<init> JMX monitoring is enabled. Adding Solr mbeans to 
JMX Server: com.sun.jmx.mbeanserver.JmxMBeanServer@671107
   [junit4]   2> 1005789 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasc.CachingDirectoryFactory.get return new directory for 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-2-001/cores/collection1/data
   [junit4]   2> 1005790 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasc.SolrCore.getNewIndexDir New index directory detected: old=null 
new=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-2-001/cores/collection1/data/index/
   [junit4]   2> 1005790 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasc.SolrCore.initIndex WARN [collection1] Solr index directory 
'/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-2-001/cores/collection1/data/index' doesn't exist. 
Creating new index...
   [junit4]   2> 1005791 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasc.CachingDirectoryFactory.get return new directory for 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-2-001/cores/collection1/data/index
   [junit4]   2> 1005791 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasu.RandomMergePolicy.<init> RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=13, maxMergeAtOnceExplicit=19, maxMergedSegmentMB=94.0986328125, 
floorSegmentMB=0.2890625, forceMergeDeletesPctAllowed=13.701589579781455, 
segmentsPerTier=44.0, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.7045733024627635
   [junit4]   2> 1005814 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasc.SolrDeletionPolicy.onCommit SolrDeletionPolicy.onCommit: commits: num=1
   [junit4]   2>                
commit{dir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 
525F488434F9F656-001/shard-2-001/cores/collection1/data/index,segFN=segments_1,generation=1}
   [junit4]   2> 1005815 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasc.SolrDeletionPolicy.updateCommits newest commit generation = 1
   [junit4]   2> 1005819 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
"nodistrib"
   [junit4]   2> 1005819 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
"dedupe"
   [junit4]   2> 1005820 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasup.UpdateRequestProcessorChain.init inserting 
DistributedUpdateProcessorFactory into updateRequestProcessorChain "dedupe"
   [junit4]   2> 1005820 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
"stored_sig"
   [junit4]   2> 1005820 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasup.UpdateRequestProcessorChain.init inserting 
DistributedUpdateProcessorFactory into updateRequestProcessorChain "stored_sig"
   [junit4]   2> 1005820 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
"distrib-dup-test-chain-explicit"
   [junit4]   2> 1005821 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
"distrib-dup-test-chain-implicit"
   [junit4]   2> 1005821 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasup.UpdateRequestProcessorChain.init inserting 
DistributedUpdateProcessorFactory into updateRequestProcessorChain 
"distrib-dup-test-chain-implicit"
   [junit4]   2> 1005821 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasc.SolrCore.loadUpdateProcessorChains no updateRequestProcessorChain defined 
as default, creating implicit default
   [junit4]   2> 1005823 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 1005824 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 1005825 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 1005827 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 1005833 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasc.RequestHandlers.initHandlersFromConfig Registered paths: 
/admin/mbeans,standard,/update/csv,/update/json/docs,/admin/luke,/admin/segments,/get,/admin/system,/replication,/admin/properties,/config,/schema,/admin/plugins,/admin/logging,/update/json,/admin/threads,/admin/ping,/update,/admin/file
   [junit4]   2> 1005834 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasc.SolrCore.initStatsCache Using default statsCache cache: 
org.apache.solr.search.stats.LocalStatsCache
   [junit4]   2> 1005834 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasu.UpdateHandler.<init> Using UpdateLog implementation: 
org.apache.solr.update.UpdateLog
   [junit4]   2> 1005835 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasu.UpdateLog.init Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH 
numRecordsToKeep=100 maxNumLogsToKeep=10
   [junit4]   2> 1005835 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasu.CommitTracker.<init> Hard AutoCommit: disabled
   [junit4]   2> 1005835 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasu.CommitTracker.<init> Soft AutoCommit: disabled
   [junit4]   2> 1005836 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasu.RandomMergePolicy.<init> RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=25, maxMergeAtOnceExplicit=12, maxMergedSegmentMB=45.830078125, 
floorSegmentMB=1.775390625, forceMergeDeletesPctAllowed=0.602195761569958, 
segmentsPerTier=38.0, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.0
   [junit4]   2> 1005837 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasc.SolrDeletionPolicy.onInit SolrDeletionPolicy.onInit: commits: num=1
   [junit4]   2>                
commit{dir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 
525F488434F9F656-001/shard-2-001/cores/collection1/data/index,segFN=segments_1,generation=1}
   [junit4]   2> 1005837 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasc.SolrDeletionPolicy.updateCommits newest commit generation = 1
   [junit4]   2> 1005838 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oass.SolrIndexSearcher.<init> Opening Searcher@1cbd320[collection1] main
   [junit4]   2> 1005839 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasr.ManagedResourceStorage.newStorageIO Setting up ZooKeeper-based storage for 
the RestManager with znodeBase: /configs/conf1
   [junit4]   2> 1005839 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasr.ManagedResourceStorage$ZooKeeperStorageIO.configure Configured 
ZooKeeperStorageIO with znodeBase: /configs/conf1
   [junit4]   2> 1005839 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasr.RestManager.init Initializing RestManager with initArgs: {}
   [junit4]   2> 1005840 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasr.ManagedResourceStorage.load Reading _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 1005840 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasr.ManagedResourceStorage$ZooKeeperStorageIO.openInputStream No data found 
for znode /configs/conf1/_rest_managed.json
   [junit4]   2> 1005840 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasr.ManagedResourceStorage.load Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 1005840 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasr.RestManager.init Initializing 0 registered ManagedResources
   [junit4]   2> 1005841 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oash.ReplicationHandler.inform Commits will be reserved for  10000
   [junit4]   2> 1005842 T7254 N:127.0.0.1:33886_k_dv c:collection1 
oasc.SolrCore.registerSearcher [collection1] Registered new searcher 
Searcher@1cbd320[collection1] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 1005842 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasc.ZkController.getConfDirListeners watch zkdir /configs/conf1
   [junit4]   2> 1005843 T7253 N:127.0.0.1:33886_k_dv c:collection1 
oasc.CoreContainer.registerCore registering core: collection1
   [junit4]   2> 1005843 T7257 N:127.0.0.1:33886_k_dv C:collection1 S:shard1 
c:collection1 oasc.ZkController.register Register replica - core:collection1 
address:http://127.0.0.1:33886/k_dv collection:collection1 shard:shard1
   [junit4]   2> 1005844 T7180 N:127.0.0.1:33886_k_dv 
oass.SolrDispatchFilter.init 
user.dir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0
   [junit4]   2> 1005844 T7180 N:127.0.0.1:33886_k_dv 
oass.SolrDispatchFilter.init SolrDispatchFilter.init() done
   [junit4]   2> 1005846 T7257 N:127.0.0.1:33886_k_dv C:collection1 S:shard1 
c:collection1 oasc.ZkController.register We are 
http://127.0.0.1:33886/k_dv/collection1/ and leader is 
http://127.0.0.1:42762/k_dv/collection1/
   [junit4]   2> 1005846 T7257 N:127.0.0.1:33886_k_dv C:collection1 S:shard1 
c:collection1 oasc.ZkController.register No LogReplay needed for 
core=collection1 baseURL=http://127.0.0.1:33886/k_dv
   [junit4]   2> 1005846 T7257 N:127.0.0.1:33886_k_dv C:collection1 S:shard1 
c:collection1 oasc.ZkController.checkRecovery Core needs to recover:collection1
   [junit4]   2> 1005846 T7257 N:127.0.0.1:33886_k_dv C:collection1 S:shard1 
c:collection1 oasu.DefaultSolrCoreState.doRecovery Running recovery - first 
canceling any ongoing recovery
   [junit4]   2> ASYNC  NEW_CORE C18698 name=collection1 
org.apache.solr.core.SolrCore@1705782 
url=http://127.0.0.1:33886/k_dv/collection1 node=127.0.0.1:33886_k_dv 
C18698_STATE=coll:collection1 core:collection1 props:{core=collection1, 
base_url=http://127.0.0.1:33886/k_dv, node_name=127.0.0.1:33886_k_dv, 
state=down}
   [junit4]   2> 1005847 T7258 N:127.0.0.1:33886_k_dv C:collection1 S:shard1 
c:collection1 C18698 oasc.RecoveryStrategy.run Starting recovery process.  
core=collection1 recoveringAfterStartup=true
   [junit4]   2> 1005848 T7258 N:127.0.0.1:33886_k_dv C:collection1 S:shard1 
c:collection1 C18698 oasc.RecoveryStrategy.doRecovery ###### startupVersions=[]
   [junit4]   2> 1005848 T7258 N:127.0.0.1:33886_k_dv C:collection1 S:shard1 
c:collection1 C18698 oasc.RecoveryStrategy.doRecovery Publishing state of core 
collection1 as recovering, leader is http://127.0.0.1:42762/k_dv/collection1/ 
and I am http://127.0.0.1:33886/k_dv/collection1/
   [junit4]   2> 1005848 T7258 N:127.0.0.1:33886_k_dv C:collection1 S:shard1 
c:collection1 C18698 oasc.ZkController.publish publishing core=collection1 
state=recovering collection=collection1
   [junit4]   2> 1005848 T7258 N:127.0.0.1:33886_k_dv C:collection1 S:shard1 
c:collection1 C18698 oasc.ZkController.publish numShards not found on 
descriptor - reading it from system property
   [junit4]   2> 1005849 T7206 N:127.0.0.1:55770_k_dv 
oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path 
/overseer/queue state SyncConnected
   [junit4]   2> 1005850 T7207 N:127.0.0.1:55770_k_dv 
oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
   [junit4]   2>          "core":"collection1",
   [junit4]   2>          "core_node_name":"core_node2",
   [junit4]   2>          "roles":null,
   [junit4]   2>          "base_url":"http://127.0.0.1:33886/k_dv",
   [junit4]   2>          "node_name":"127.0.0.1:33886_k_dv",
   [junit4]   2>          "numShards":"1",
   [junit4]   2>          "state":"recovering",
   [junit4]   2>          "shard":"shard1",
   [junit4]   2>          "collection":"collection1",
   [junit4]   2>          "operation":"state"} current state version: 7
   [junit4]   2> 1005850 T7258 N:127.0.0.1:33886_k_dv C:collection1 S:shard1 
c:collection1 C18698 oasc.RecoveryStrategy.sendPrepRecoveryCmd Sending prep 
recovery command to http://127.0.0.1:42762/k_dv; WaitForState: 
action=PREPRECOVERY&core=collection1&nodeName=127.0.0.1%3A33886_k_dv&coreNodeName=core_node2&state=recovering&checkLive=true&onlyIfLeader=true&onlyIfLeaderActive=true
   [junit4]   2> 1005851 T7207 N:127.0.0.1:55770_k_dv 
oasco.ReplicaMutator.updateState Update state numShards=1 message={
   [junit4]   2>          "core":"collection1",
   [junit4]   2>          "core_node_name":"core_node2",
   [junit4]   2>          "roles":null,
   [junit4]   2>          "base_url":"http://127.0.0.1:33886/k_dv",
   [junit4]   2>          "node_name":"127.0.0.1:33886_k_dv",
   [junit4]   2>          "numShards":"1",
   [junit4]   2>          "state":"recovering",
   [junit4]   2>          "shard":"shard1",
   [junit4]   2>          "collection":"collection1",
   [junit4]   2>          "operation":"state"}
   [junit4]   2> 1005852 T7225 N:127.0.0.1:42762_k_dv 
oasha.CoreAdminHandler.handleWaitForStateAction Going to wait for coreNodeName: 
core_node2, state: recovering, checkLive: true, onlyIfLeader: true, 
onlyIfLeaderActive: true
   [junit4]   2> 1005854 T7225 N:127.0.0.1:42762_k_dv 
oasha.CoreAdminHandler.handleWaitForStateAction Will wait a max of 183 seconds 
to see collection1 (shard1 of collection1) have state: recovering
   [junit4]   2> 1005854 T7225 N:127.0.0.1:42762_k_dv 
oasha.CoreAdminHandler.handleWaitForStateAction In WaitForState(recovering): 
collection=collection1, shard=shard1, thisCore=collection1, 
leaderDoesNotNeedRecovery=false, isLeader? true, live=true, checkLive=true, 
currentState=down, localState=active, nodeName=127.0.0.1:33886_k_dv, 
coreNodeName=core_node2, onlyIfActiveCheckResult=false, nodeProps: 
core_node2:{"core":"collection1","base_url":"http://127.0.0.1:33886/k_dv","node_name":"127.0.0.1:33886_k_dv","state":"down"}
   [junit4]   2> 1005907 T7180 oas.SolrTestCaseJ4.writeCoreProperties Writing 
core.properties file to 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-3-001/cores/collection1
   [junit4]   2> 1005908 T7180 oasc.AbstractFullDistribZkTestBase.createJettys 
create jetty 3 in directory 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-3-001
   [junit4]   2> 1005909 T7180 oejs.Server.doStart jetty-8.1.10.v20130312
   [junit4]   2> 1005910 T7180 oejs.AbstractConnector.doStart Started 
[email protected]:35557
   [junit4]   2> 1005910 T7180 oascse.JettySolrRunner$1.lifeCycleStarted Jetty 
properties: 
{solr.data.dir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/tempDir-001/jetty3, solrconfig=solrconfig.xml, 
hostContext=/k_dv, hostPort=35557, 
coreRootDirectory=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-3-001/cores}
   [junit4]   2> 1005911 T7180 oass.SolrDispatchFilter.init 
SolrDispatchFilter.init()sun.misc.Launcher$AppClassLoader@1764bce
   [junit4]   2> 1005911 T7180 oasc.SolrResourceLoader.<init> new 
SolrResourceLoader for directory: 
'/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-3-001/'
   [junit4]   2> 1005930 T7180 oasc.SolrXmlConfig.fromFile Loading container 
configuration from 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-3-001/solr.xml
   [junit4]   2> 1005934 T7180 oasc.CorePropertiesLocator.<init> Config-defined 
core root directory: 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-3-001/cores
   [junit4]   2> 1005934 T7180 oasc.CoreContainer.<init> New CoreContainer 
32888650
   [junit4]   2> 1005934 T7180 oasc.CoreContainer.load Loading cores into 
CoreContainer 
[instanceDir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-3-001/]
   [junit4]   2> 1005935 T7180 oasc.CoreContainer.load loading shared library: 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-3-001/lib
   [junit4]   2> 1005935 T7180 oasc.SolrResourceLoader.addToClassLoader WARN 
Can't find (or read) directory to add to classloader: lib (resolved as: 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-3-001/lib).
   [junit4]   2> 1005940 T7180 oashc.HttpShardHandlerFactory.init created with 
socketTimeout : 90000,urlScheme : ,connTimeout : 15000,maxConnectionsPerHost : 
20,maxConnections : 10000,corePoolSize : 0,maximumPoolSize : 
2147483647,maxThreadIdleTime : 5,sizeOfQueue : -1,fairnessPolicy : 
false,useRetries : false,
   [junit4]   2> 1005941 T7180 oasu.UpdateShardHandler.<init> Creating 
UpdateShardHandler HTTP client with params: 
socketTimeout=340000&connTimeout=45000&retry=true
   [junit4]   2> 1005942 T7180 oasl.LogWatcher.createWatcher SLF4J impl is 
org.slf4j.impl.Log4jLoggerFactory
   [junit4]   2> 1005942 T7180 oasl.LogWatcher.newRegisteredLogWatcher 
Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
   [junit4]   2> 1005942 T7180 oasc.CoreContainer.load Node Name: 127.0.0.1
   [junit4]   2> 1005943 T7180 oasc.ZkContainer.initZooKeeper Zookeeper 
client=127.0.0.1:41771/solr
   [junit4]   2> 1005943 T7180 oasc.ZkController.checkChrootPath zkHost 
includes chroot
   [junit4]   2> 1006855 T7225 N:127.0.0.1:42762_k_dv 
oasha.CoreAdminHandler.handleWaitForStateAction In WaitForState(recovering): 
collection=collection1, shard=shard1, thisCore=collection1, 
leaderDoesNotNeedRecovery=false, isLeader? true, live=true, checkLive=true, 
currentState=recovering, localState=active, nodeName=127.0.0.1:33886_k_dv, 
coreNodeName=core_node2, onlyIfActiveCheckResult=false, nodeProps: 
core_node2:{"core":"collection1","base_url":"http://127.0.0.1:33886/k_dv","node_name":"127.0.0.1:33886_k_dv","state":"recovering"}
   [junit4]   2> 1006856 T7225 N:127.0.0.1:42762_k_dv 
oasha.CoreAdminHandler.handleWaitForStateAction Waited coreNodeName: 
core_node2, state: recovering, checkLive: true, onlyIfLeader: true for: 1 
seconds.
   [junit4]   2> 1006856 T7225 N:127.0.0.1:42762_k_dv 
oass.SolrDispatchFilter.handleAdminRequest [admin] webapp=null 
path=/admin/cores 
params={nodeName=127.0.0.1:33886_k_dv&onlyIfLeaderActive=true&core=collection1&coreNodeName=core_node2&action=PREPRECOVERY&checkLive=true&state=recovering&onlyIfLeader=true&wt=javabin&version=2}
 status=0 QTime=1004 
   [junit4]   2> 1006957 T7180 N:127.0.0.1:35557_k_dv 
oasc.ZkController.createEphemeralLiveNode Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:35557_k_dv
   [junit4]   2> 1006959 T7180 N:127.0.0.1:35557_k_dv oasc.Overseer.close 
Overseer (id=null) closing
   [junit4]   2> 1006961 T7180 N:127.0.0.1:35557_k_dv 
oasc.CorePropertiesLocator.discover Looking for core definitions underneath 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-3-001/cores
   [junit4]   2> 1006961 T7180 N:127.0.0.1:35557_k_dv 
oasc.CoreDescriptor.<init> CORE DESCRIPTOR: {name=collection1, 
config=solrconfig.xml, transient=false, schema=schema.xml, loadOnStartup=true, 
instanceDir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-3-001/cores/collection1, collection=collection1, 
absoluteInstDir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-3-001/cores/collection1/, coreNodeName=, 
dataDir=data/, shard=}
   [junit4]   2> 1006962 T7180 N:127.0.0.1:35557_k_dv 
oasc.CorePropertiesLocator.discoverUnder Found core collection1 in 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-3-001/cores/collection1/
   [junit4]   2> 1006962 T7180 N:127.0.0.1:35557_k_dv 
oasc.CorePropertiesLocator.discover Found 1 core definitions
   [junit4]   2> 1006963 T7275 N:127.0.0.1:35557_k_dv C:collection1 
c:collection1 oasc.ZkController.publish publishing core=collection1 state=down 
collection=collection1
   [junit4]   2> 1006963 T7275 N:127.0.0.1:35557_k_dv C:collection1 
c:collection1 oasc.ZkController.publish numShards not found on descriptor - 
reading it from system property
   [junit4]   2> 1006964 T7275 N:127.0.0.1:35557_k_dv 
oasc.ZkController.waitForCoreNodeName look for our core node name
   [junit4]   2> 1006964 T7206 N:127.0.0.1:55770_k_dv 
oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path 
/overseer/queue state SyncConnected
   [junit4]   2> 1006965 T7207 N:127.0.0.1:55770_k_dv 
oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
   [junit4]   2>          "core":"collection1",
   [junit4]   2>          "roles":null,
   [junit4]   2>          "base_url":"http://127.0.0.1:35557/k_dv",
   [junit4]   2>          "node_name":"127.0.0.1:35557_k_dv",
   [junit4]   2>          "numShards":"1",
   [junit4]   2>          "state":"down",
   [junit4]   2>          "shard":null,
   [juni

[...truncated too long message...]

ard1 c:collection1 oasu.DefaultSolrCoreState.closeIndexWriter closing 
IndexWriter with IndexWriterCloser
   [junit4]   2> 1042265 T7180 C:control_collection S:shard1 c:collection1 
oasc.SolrCore.closeSearcher [collection1] Closing main searcher on request.
   [junit4]   2> 1042265 T7317 N:127.0.0.1:35557_k_dv 
oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
   [junit4]   2>          "core":"collection1",
   [junit4]   2>          "core_node_name":"core_node3",
   [junit4]   2>          "roles":null,
   [junit4]   2>          "base_url":"http://127.0.0.1:35557/k_dv",
   [junit4]   2>          "node_name":"127.0.0.1:35557_k_dv",
   [junit4]   2>          "state":"down",
   [junit4]   2>          "shard":"shard1",
   [junit4]   2>          "collection":"collection1",
   [junit4]   2>          "operation":"state"} current state version: 18
   [junit4]   2> 1042265 T7317 N:127.0.0.1:35557_k_dv 
oasco.ReplicaMutator.updateState Update state numShards=null message={
   [junit4]   2>          "core":"collection1",
   [junit4]   2>          "core_node_name":"core_node3",
   [junit4]   2>          "roles":null,
   [junit4]   2>          "base_url":"http://127.0.0.1:35557/k_dv",
   [junit4]   2>          "node_name":"127.0.0.1:35557_k_dv",
   [junit4]   2>          "state":"down",
   [junit4]   2>          "shard":"shard1",
   [junit4]   2>          "collection":"collection1",
   [junit4]   2>          "operation":"state"}
   [junit4]   2> 1042282 T7180 C:control_collection S:shard1 c:collection1 
oasc.CachingDirectoryFactory.close Closing StandardDirectoryFactory - 2 
directories currently being tracked
   [junit4]   2> 1042282 T7180 C:control_collection S:shard1 c:collection1 
oasc.CachingDirectoryFactory.closeCacheValue looking to close 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-3-001/cores/collection1/data/index 
[CachedDir<<refCount=0;path=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-3-001/cores/collection1/data/index;done=false>>]
   [junit4]   2> 1042283 T7180 C:control_collection S:shard1 c:collection1 
oasc.CachingDirectoryFactory.close Closing directory: 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-3-001/cores/collection1/data/index
   [junit4]   2> 1042283 T7180 C:control_collection S:shard1 c:collection1 
oasc.CachingDirectoryFactory.closeCacheValue looking to close 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-3-001/cores/collection1/data 
[CachedDir<<refCount=0;path=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-3-001/cores/collection1/data;done=false>>]
   [junit4]   2> 1042283 T7180 C:control_collection S:shard1 c:collection1 
oasc.CachingDirectoryFactory.close Closing directory: 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 525F488434F9F656-001/shard-3-001/cores/collection1/data
   [junit4]   2> 1042284 T7180 C:control_collection S:shard1 c:collection1 
oasc.Overseer.close Overseer 
(id=93713610006396938-127.0.0.1:35557_k_dv-n_0000000003) closing
   [junit4]   2> 1042284 T7317 N:127.0.0.1:35557_k_dv 
oasc.Overseer$ClusterStateUpdater.run Overseer Loop exiting : 
127.0.0.1:35557_k_dv
   [junit4]   2> 1043806 T7274 N:127.0.0.1:35557_k_dv 
oascc.ZkStateReader$3.process WARN ZooKeeper watch triggered, but Solr cannot 
talk to ZK
   [junit4]   2> 1043838 T7180 oejsh.ContextHandler.doStop stopped 
o.e.j.s.ServletContextHandler{/k_dv,null}
   [junit4]   2> 1044028 T7180 C:control_collection S:shard1 c:collection1 oasc.ZkTestServer.send4LetterWord connecting to 127.0.0.1:41771 41771
   [junit4]   2> 1044134 T7301 oasc.ZkTestServer.send4LetterWord connecting to 127.0.0.1:41771 41771
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=ChaosMonkeyNothingIsSafeTest -Dtests.method=test -Dtests.seed=525F488434F9F656 -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=hr -Dtests.timezone=SystemV/YST9 -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
   [junit4] FAILURE 44.4s J0 | ChaosMonkeyNothingIsSafeTest.test <<<
   [junit4]    > Throwable #1: java.lang.AssertionError: There were too many update fails (31 > 20) - we expect it can happen, but shouldn't easily
   [junit4]    >        at __randomizedtesting.SeedInfo.seed([525F488434F9F656:DA0B775E9A059BAE]:0)
   [junit4]    >        at org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:230)
   [junit4]    >        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
   [junit4]    >        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
   [junit4]    >        at java.lang.Thread.run(Thread.java:745)
   [junit4]   2> 1044141 T7180 C:control_collection S:shard1 c:collection1 oas.SolrTestCaseJ4.deleteCore ###deleteCore
   [junit4]   2> NOTE: leaving temporary files on disk at: /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest 525F488434F9F656-001
   [junit4]   2> 44367 T7179 ccr.ThreadLeakControl.checkThreadLeaks WARNING Will linger awaiting termination of 1 leaked thread(s).
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene50): {rnd_b=PostingsFormat(name=Memory doPackFST= false), _version_=PostingsFormat(name=Memory doPackFST= true), a_t=PostingsFormat(name=LuceneFixedGap), a_i=PostingsFormat(name=Memory doPackFST= false), id=PostingsFormat(name=Memory doPackFST= false)}, docValues:{}, sim=RandomSimilarityProvider(queryNorm=false,coord=yes): {}, locale=hr, timezone=SystemV/YST9
   [junit4]   2> NOTE: Linux 3.13.0-49-generic i386/Oracle Corporation 1.9.0-ea (32-bit)/cpus=12,threads=1,free=349663096,total=518979584
   [junit4]   2> NOTE: All tests run in this JVM: [TestJmxIntegration, 
TestRandomFaceting, BufferStoreTest, TestAnalyzeInfixSuggestions, 
SolrRequestParserTest, CollectionsAPIAsyncDistributedZkTest, 
SimplePostToolTest, TriLevelCompositeIdRoutingTest, TestUpdate, 
DeleteLastCustomShardedReplicaTest, HighlighterMaxOffsetTest, TestCSVLoader, 
TestBadConfig, TestCollationFieldDocValues, 
DistributedQueryComponentCustomSortTest, UpdateRequestProcessorFactoryTest, 
ExternalCollectionsTest, DistributedDebugComponentTest, 
SignatureUpdateProcessorFactoryTest, TestTolerantSearch, PeerSyncTest, 
TestPostingsSolrHighlighter, TestFaceting, TestReRankQParserPlugin, 
TestSolrQueryParserResource, SolrCloudExampleTest, 
UUIDUpdateProcessorFallbackTest, TestShortCircuitedRequests, 
CurrencyFieldXmlFileTest, TestDynamicLoading, BasicZkTest, 
UniqFieldsUpdateProcessorFactoryTest, DocExpirationUpdateProcessorFactoryTest, 
TestOverriddenPrefixQueryForCustomFieldType, 
TestManagedSchemaFieldTypeResource, PreAnalyzedUpdateProcessorTest, 
TestSerializedLuceneMatchVersion, TestSolrConfigHandlerCloud, SolrCoreTest, 
BasicDistributedZkTest, SyncSliceTest, ExternalFileFieldSortTest, 
TestBinaryField, TestCodecSupport, HighlighterConfigTest, TestSolrIndexConfig, 
CSVRequestHandlerTest, TestSolr4Spatial2, TestFunctionQuery, 
TestManagedSchemaDynamicFieldResource, TestDocumentBuilder, ZkNodePropsTest, 
TestCryptoKeys, TestJoin, SuggesterFSTTest, TestRecoveryHdfs, ClusterStateTest, 
DistributedIntervalFacetingTest, TermsComponentTest, TestLazyCores, TestConfig, 
TestManagedSchema, RequestHandlersTest, BlockDirectoryTest, TestSchemaManager, 
TestUniqueKeyFieldResource, TestSolrQueryParserDefaultOperatorResource, 
PrimitiveFieldTypeTest, SpatialRPTFieldTypeTest, 
IgnoreCommitOptimizeUpdateProcessorFactoryTest, TestRandomMergePolicy, 
TestQuerySenderListener, TestDistributedSearch, TestRemoteStreaming, 
TimeZoneUtilsTest, TestWordDelimiterFilterFactory, 
TestTrackingShardHandlerFactory, TestReplicationHandler, TestRealTimeGet, 
DistributedTermsComponentTest, TestRangeQuery, TestCoreContainer, 
TestSolr4Spatial, SolrCmdDistributorTest, QueryElevationComponentTest, 
TestFiltering, TestIndexingPerformance, TestArbitraryIndexDir, 
RegexBoostProcessorTest, JSONWriterTest, QueryParsingTest, 
BinaryUpdateRequestHandlerTest, TestComponentsName, TestLFUCache, 
UpdateParamsTest, BadComponentTest, TestSolrDeletionPolicy2, MinimalSchemaTest, 
TestSolrCoreProperties, TestPhraseSuggestions, TestBM25SimilarityFactory, 
ScriptEngineTest, ChaosMonkeyNothingIsSafeTest]
   [junit4] Completed [379/484] on J0 in 44.89s, 1 test, 1 failure <<< FAILURES!

[...truncated 339 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:536: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:484: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:61: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/extra-targets.xml:39: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build.xml:229: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/common-build.xml:511: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:1434: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:991: There were test failures: 484 suites, 1946 tests, 1 failure, 54 ignored (25 assumptions)

Total time: 52 minutes 13 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
