Hmmm ... 20 beast rounds and no reproduction:

-beast:
  [beaster] Beast round: 1
  [beaster] Beast round: 2
  [beaster] Beast round: 3
  [beaster] Beast round: 4
  [beaster] Beast round: 5
  [beaster] Beast round: 6
  [beaster] Beast round: 7
  [beaster] Beast round: 8
  [beaster] Beast round: 9
  [beaster] Beast round: 10
  [beaster] Beast round: 11
  [beaster] Beast round: 12
  [beaster] Beast round: 13
  [beaster] Beast round: 14
  [beaster] Beast round: 15
  [beaster] Beast round: 16
  [beaster] Beast round: 17
  [beaster] Beast round: 18
  [beaster] Beast round: 19
  [beaster] Beast round: 20
  [beaster] Beasting finished.
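For anyone who wants to retry this locally: "beasting" just means re-running the flaky test many times and stopping at the first failure. A rough shell equivalent of the run above is sketched below — the actual ant invocation and its flags are an assumption on my part, so they are left commented out and the loop itself is self-contained.

```shell
#!/bin/sh
# Minimal sketch of a beast run: repeat the suspect test ROUNDS times,
# bail out on the first failure. The ant line is a guess at the usual
# invocation and is commented out; uncomment and adjust for a real run.
ROUNDS=20
i=1
while [ "$i" -le "$ROUNDS" ]; do
  echo "[beaster] Beast round: $i"
  # ant -f solr/build.xml test -Dtestcase=ChaosMonkeyNothingIsSafeTest \
  #   || { echo "[beaster] Failed on round $i"; exit 1; }
  i=$((i + 1))
done
echo "[beaster] Beasting finished."
```

Note the seed is re-randomized each round here; pinning -Dtests.seed to the Jenkins seed instead would only re-test that one ordering.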

On Thu, May 21, 2015 at 9:14 AM, Timothy Potter <thelabd...@gmail.com> wrote:
> I'm going to run the beast on this for a bit to see if I can reproduce ...
>
> On Thu, May 21, 2015 at 4:12 AM, Policeman Jenkins Server
> <jenk...@thetaphi.de> wrote:
>> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12765/
>> Java: 32bit/jdk1.8.0_60-ea-b12 -server -XX:+UseSerialGC
>>
>> 1 tests failed.
>> FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test
>>
>> Error Message:
>> document count mismatch.  control=358 sum(shards)=357 cloudClient=357
>>
>> Stack Trace:
>> java.lang.AssertionError: document count mismatch.  control=358 
>> sum(shards)=357 cloudClient=357
>>         at 
>> __randomizedtesting.SeedInfo.seed([C3A1DDED6178C6E2:4BF5E237CF84AB1A]:0)
>>         at org.junit.Assert.fail(Assert.java:93)
>>         at 
>> org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1345)
>>         at 
>> org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:240)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at 
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>         at 
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>         at java.lang.reflect.Method.invoke(Method.java:497)
>>         at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
>>         at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
>>         at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
>>         at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
>>         at 
>> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
>>         at 
>> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
>>         at 
>> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
>>         at 
>> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
>>         at 
>> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
>>         at 
>> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
>>         at 
>> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
>>         at 
>> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
>>         at 
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>>         at 
>> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
>>         at 
>> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
>>         at 
>> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
>>         at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
>>         at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
>>         at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
>>         at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
>>         at 
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>>         at 
>> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
>>         at 
>> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
>>         at 
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>>         at 
>> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
>>         at 
>> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
>>         at 
>> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
>>         at 
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>>         at 
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>>         at 
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>>         at 
>> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
>>         at 
>> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
>>         at 
>> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
>>         at 
>> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
>>         at 
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>>         at 
>> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
>>         at java.lang.Thread.run(Thread.java:745)
>>
>>
>>
>>
>> Build Log:
>> [...truncated 10497 lines...]
>>    [junit4] Suite: org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest
>>    [junit4]   2> Creating dataDir: 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/init-core-data-001
>>    [junit4]   2> 882886 T5824 
>> oas.BaseDistributedSearchTestCase.initHostContext Setting hostContext system 
>> property: /mv/ls
>>    [junit4]   2> 882888 T5824 oasc.ZkTestServer.run STARTING ZK TEST SERVER
>>    [junit4]   2> 882888 T5825 oasc.ZkTestServer$2$1.setClientPort client 
>> port:0.0.0.0/0.0.0.0:0
>>    [junit4]   2> 882889 T5825 oasc.ZkTestServer$ZKServerMain.runFromConfig 
>> Starting server
>>    [junit4]   2> 882988 T5824 oasc.ZkTestServer.run start zk server on 
>> port:53715
>>    [junit4]   2> 882989 T5824 
>> oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
>> ZkCredentialsProvider
>>    [junit4]   2> 882990 T5824 oascc.ConnectionManager.waitForConnected 
>> Waiting for client to connect to ZooKeeper
>>    [junit4]   2> 882991 T5832 oascc.ConnectionManager.process Watcher 
>> org.apache.solr.common.cloud.ConnectionManager@fed808 
>> name:ZooKeeperConnection Watcher:127.0.0.1:53715 got event WatchedEvent 
>> state:SyncConnected type:None path:null path:null type:None
>>    [junit4]   2> 882991 T5824 oascc.ConnectionManager.waitForConnected 
>> Client is connected to ZooKeeper
>>    [junit4]   2> 882992 T5824 oascc.SolrZkClient.createZkACLProvider Using 
>> default ZkACLProvider
>>    [junit4]   2> 882992 T5824 oascc.SolrZkClient.makePath makePath: /solr
>>    [junit4]   2> 882993 T5824 
>> oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
>> ZkCredentialsProvider
>>    [junit4]   2> 882994 T5824 oascc.ConnectionManager.waitForConnected 
>> Waiting for client to connect to ZooKeeper
>>    [junit4]   2> 882994 T5835 oascc.ConnectionManager.process Watcher 
>> org.apache.solr.common.cloud.ConnectionManager@15330e 
>> name:ZooKeeperConnection Watcher:127.0.0.1:53715/solr got event WatchedEvent 
>> state:SyncConnected type:None path:null path:null type:None
>>    [junit4]   2> 882995 T5824 oascc.ConnectionManager.waitForConnected 
>> Client is connected to ZooKeeper
>>    [junit4]   2> 882995 T5824 oascc.SolrZkClient.createZkACLProvider Using 
>> default ZkACLProvider
>>    [junit4]   2> 882995 T5824 oascc.SolrZkClient.makePath makePath: 
>> /collections/collection1
>>    [junit4]   2> 882996 T5824 oascc.SolrZkClient.makePath makePath: 
>> /collections/collection1/shards
>>    [junit4]   2> 882997 T5824 oascc.SolrZkClient.makePath makePath: 
>> /collections/control_collection
>>    [junit4]   2> 882997 T5824 oascc.SolrZkClient.makePath makePath: 
>> /collections/control_collection/shards
>>    [junit4]   2> 882998 T5824 oasc.AbstractZkTestCase.putConfig put 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml
>>  to /configs/conf1/solrconfig.xml
>>    [junit4]   2> 882998 T5824 oascc.SolrZkClient.makePath makePath: 
>> /configs/conf1/solrconfig.xml
>>    [junit4]   2> 882999 T5824 oasc.AbstractZkTestCase.putConfig put 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files/solr/collection1/conf/schema15.xml
>>  to /configs/conf1/schema.xml
>>    [junit4]   2> 883000 T5824 oascc.SolrZkClient.makePath makePath: 
>> /configs/conf1/schema.xml
>>    [junit4]   2> 883001 T5824 oasc.AbstractZkTestCase.putConfig put 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml
>>  to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
>>    [junit4]   2> 883001 T5824 oascc.SolrZkClient.makePath makePath: 
>> /configs/conf1/solrconfig.snippet.randomindexconfig.xml
>>    [junit4]   2> 883002 T5824 oasc.AbstractZkTestCase.putConfig put 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files/solr/collection1/conf/stopwords.txt
>>  to /configs/conf1/stopwords.txt
>>    [junit4]   2> 883002 T5824 oascc.SolrZkClient.makePath makePath: 
>> /configs/conf1/stopwords.txt
>>    [junit4]   2> 883003 T5824 oasc.AbstractZkTestCase.putConfig put 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files/solr/collection1/conf/protwords.txt
>>  to /configs/conf1/protwords.txt
>>    [junit4]   2> 883003 T5824 oascc.SolrZkClient.makePath makePath: 
>> /configs/conf1/protwords.txt
>>    [junit4]   2> 883004 T5824 oasc.AbstractZkTestCase.putConfig put 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files/solr/collection1/conf/currency.xml
>>  to /configs/conf1/currency.xml
>>    [junit4]   2> 883005 T5824 oascc.SolrZkClient.makePath makePath: 
>> /configs/conf1/currency.xml
>>    [junit4]   2> 883006 T5824 oasc.AbstractZkTestCase.putConfig put 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files/solr/collection1/conf/enumsConfig.xml
>>  to /configs/conf1/enumsConfig.xml
>>    [junit4]   2> 883006 T5824 oascc.SolrZkClient.makePath makePath: 
>> /configs/conf1/enumsConfig.xml
>>    [junit4]   2> 883007 T5824 oasc.AbstractZkTestCase.putConfig put 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files/solr/collection1/conf/open-exchange-rates.json
>>  to /configs/conf1/open-exchange-rates.json
>>    [junit4]   2> 883007 T5824 oascc.SolrZkClient.makePath makePath: 
>> /configs/conf1/open-exchange-rates.json
>>    [junit4]   2> 883008 T5824 oasc.AbstractZkTestCase.putConfig put 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files/solr/collection1/conf/mapping-ISOLatin1Accent.txt
>>  to /configs/conf1/mapping-ISOLatin1Accent.txt
>>    [junit4]   2> 883008 T5824 oascc.SolrZkClient.makePath makePath: 
>> /configs/conf1/mapping-ISOLatin1Accent.txt
>>    [junit4]   2> 883009 T5824 oasc.AbstractZkTestCase.putConfig put 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files/solr/collection1/conf/old_synonyms.txt
>>  to /configs/conf1/old_synonyms.txt
>>    [junit4]   2> 883010 T5824 oascc.SolrZkClient.makePath makePath: 
>> /configs/conf1/old_synonyms.txt
>>    [junit4]   2> 883010 T5824 oasc.AbstractZkTestCase.putConfig put 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files/solr/collection1/conf/synonyms.txt
>>  to /configs/conf1/synonyms.txt
>>    [junit4]   2> 883011 T5824 oascc.SolrZkClient.makePath makePath: 
>> /configs/conf1/synonyms.txt
>>    [junit4]   2> 883063 T5824 oas.SolrTestCaseJ4.writeCoreProperties Writing 
>> core.properties file to 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/control-001/cores/collection1
>>    [junit4]   2> 883065 T5824 oejs.Server.doStart jetty-9.2.10.v20150310
>>    [junit4]   2> 883066 T5824 oejsh.ContextHandler.doStart Started 
>> o.e.j.s.ServletContextHandler@67956f{/mv/ls,null,AVAILABLE}
>>    [junit4]   2> 883066 T5824 oejs.AbstractConnector.doStart Started 
>> ServerConnector@1b82ff9{HTTP/1.1}{127.0.0.1:36633}
>>    [junit4]   2> 883066 T5824 oejs.Server.doStart Started @884025ms
>>    [junit4]   2> 883067 T5824 oascse.JettySolrRunner$1.lifeCycleStarted 
>> Jetty properties: 
>> {solr.data.dir=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/tempDir-001/control/data, hostContext=/mv/ls, 
>> hostPort=36633, 
>> coreRootDirectory=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/control-001/cores}
>>    [junit4]   2> 883067 T5824 oass.SolrDispatchFilter.init 
>> SolrDispatchFilter.init()sun.misc.Launcher$AppClassLoader@e2f2a
>>    [junit4]   2> 883067 T5824 oasc.SolrResourceLoader.<init> new 
>> SolrResourceLoader for directory: 
>> '/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/control-001/'
>>    [junit4]   2> 883076 T5824 oasc.SolrXmlConfig.fromFile Loading container 
>> configuration from 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/control-001/solr.xml
>>    [junit4]   2> 883080 T5824 oasc.CorePropertiesLocator.<init> 
>> Config-defined core root directory: 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/control-001/cores
>>    [junit4]   2> 883080 T5824 oasc.CoreContainer.<init> New CoreContainer 
>> 11143081
>>    [junit4]   2> 883081 T5824 oasc.CoreContainer.load Loading cores into 
>> CoreContainer 
>> [instanceDir=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/control-001/]
>>    [junit4]   2> 883081 T5824 oasc.CoreContainer.load loading shared 
>> library: 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/control-001/lib
>>    [junit4]   2> 883081 T5824 oasc.SolrResourceLoader.addToClassLoader WARN 
>> Can't find (or read) directory to add to classloader: lib (resolved as: 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/control-001/lib).
>>    [junit4]   2> 883086 T5824 oashc.HttpShardHandlerFactory.init created 
>> with socketTimeout : 90000,urlScheme : ,connTimeout : 
>> 15000,maxConnectionsPerHost : 20,maxConnections : 10000,corePoolSize : 
>> 0,maximumPoolSize : 2147483647,maxThreadIdleTime : 5,sizeOfQueue : 
>> -1,fairnessPolicy : false,useRetries : false,
>>    [junit4]   2> 883087 T5824 oasu.UpdateShardHandler.<init> Creating 
>> UpdateShardHandler HTTP client with params: 
>> socketTimeout=340000&connTimeout=45000&retry=true
>>    [junit4]   2> 883088 T5824 oasl.LogWatcher.createWatcher SLF4J impl is 
>> org.slf4j.impl.Log4jLoggerFactory
>>    [junit4]   2> 883088 T5824 oasl.LogWatcher.newRegisteredLogWatcher 
>> Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
>>    [junit4]   2> 883088 T5824 oasc.CoreContainer.load Node Name: 127.0.0.1
>>    [junit4]   2> 883088 T5824 oasc.ZkContainer.initZooKeeper Zookeeper 
>> client=127.0.0.1:53715/solr
>>    [junit4]   2> 883089 T5824 oasc.ZkController.checkChrootPath zkHost 
>> includes chroot
>>    [junit4]   2> 883089 T5824 
>> oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
>> ZkCredentialsProvider
>>    [junit4]   2> 883090 T5824 oascc.ConnectionManager.waitForConnected 
>> Waiting for client to connect to ZooKeeper
>>    [junit4]   2> 883091 T5849 oascc.ConnectionManager.process Watcher 
>> org.apache.solr.common.cloud.ConnectionManager@1e519ca 
>> name:ZooKeeperConnection Watcher:127.0.0.1:53715 got event WatchedEvent 
>> state:SyncConnected type:None path:null path:null type:None
>>    [junit4]   2> 883091 T5824 oascc.ConnectionManager.waitForConnected 
>> Client is connected to ZooKeeper
>>    [junit4]   2> 883091 T5824 oascc.SolrZkClient.createZkACLProvider Using 
>> default ZkACLProvider
>>    [junit4]   2> 883092 T5824 n:127.0.0.1:36633_mv%2Fls 
>> oascc.ConnectionManager.waitForConnected Waiting for client to connect to 
>> ZooKeeper
>>    [junit4]   2> 883093 T5852 n:127.0.0.1:36633_mv%2Fls 
>> oascc.ConnectionManager.process Watcher 
>> org.apache.solr.common.cloud.ConnectionManager@c92640 
>> name:ZooKeeperConnection Watcher:127.0.0.1:53715/solr got event WatchedEvent 
>> state:SyncConnected type:None path:null path:null type:None
>>    [junit4]   2> 883093 T5824 n:127.0.0.1:36633_mv%2Fls 
>> oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper
>>    [junit4]   2> 883094 T5824 n:127.0.0.1:36633_mv%2Fls 
>> oascc.SolrZkClient.makePath makePath: /overseer/queue
>>    [junit4]   2> 883095 T5824 n:127.0.0.1:36633_mv%2Fls 
>> oascc.SolrZkClient.makePath makePath: /overseer/collection-queue-work
>>    [junit4]   2> 883096 T5824 n:127.0.0.1:36633_mv%2Fls 
>> oascc.SolrZkClient.makePath makePath: /overseer/collection-map-running
>>    [junit4]   2> 883097 T5824 n:127.0.0.1:36633_mv%2Fls 
>> oascc.SolrZkClient.makePath makePath: /overseer/collection-map-completed
>>    [junit4]   2> 883098 T5824 n:127.0.0.1:36633_mv%2Fls 
>> oascc.SolrZkClient.makePath makePath: /overseer/collection-map-failure
>>    [junit4]   2> 883100 T5824 n:127.0.0.1:36633_mv%2Fls 
>> oascc.SolrZkClient.makePath makePath: /live_nodes
>>    [junit4]   2> 883101 T5824 n:127.0.0.1:36633_mv%2Fls 
>> oascc.SolrZkClient.makePath makePath: /aliases.json
>>    [junit4]   2> 883102 T5824 n:127.0.0.1:36633_mv%2Fls 
>> oascc.SolrZkClient.makePath makePath: /clusterstate.json
>>    [junit4]   2> 883103 T5824 n:127.0.0.1:36633_mv%2Fls 
>> oasc.ZkController.createEphemeralLiveNode Register node as live in 
>> ZooKeeper:/live_nodes/127.0.0.1:36633_mv%2Fls
>>    [junit4]   2> 883103 T5824 n:127.0.0.1:36633_mv%2Fls 
>> oascc.SolrZkClient.makePath makePath: /live_nodes/127.0.0.1:36633_mv%2Fls
>>    [junit4]   2> 883104 T5824 n:127.0.0.1:36633_mv%2Fls 
>> oascc.SolrZkClient.makePath makePath: /overseer_elect
>>    [junit4]   2> 883105 T5824 n:127.0.0.1:36633_mv%2Fls 
>> oascc.SolrZkClient.makePath makePath: /overseer_elect/election
>>    [junit4]   2> 883106 T5824 n:127.0.0.1:36633_mv%2Fls oasc.Overseer.close 
>> Overseer (id=null) closing
>>    [junit4]   2> 883108 T5824 n:127.0.0.1:36633_mv%2Fls 
>> oasc.OverseerElectionContext.runLeaderProcess I am going to be the leader 
>> 127.0.0.1:36633_mv%2Fls
>>    [junit4]   2> 883108 T5824 n:127.0.0.1:36633_mv%2Fls 
>> oascc.SolrZkClient.makePath makePath: /overseer_elect/leader
>>    [junit4]   2> 883109 T5824 n:127.0.0.1:36633_mv%2Fls oasc.Overseer.start 
>> Overseer (id=93860829923311619-127.0.0.1:36633_mv%2Fls-n_0000000000) starting
>>    [junit4]   2> 883110 T5824 n:127.0.0.1:36633_mv%2Fls 
>> oascc.SolrZkClient.makePath makePath: /overseer/queue-work
>>    [junit4]   2> 883114 T5824 n:127.0.0.1:36633_mv%2Fls 
>> oasc.OverseerAutoReplicaFailoverThread.<init> Starting 
>> OverseerAutoReplicaFailoverThread autoReplicaFailoverWorkLoopDelay=10000 
>> autoReplicaFailoverWaitAfterExpiration=30000 
>> autoReplicaFailoverBadNodeExpiration=60000
>>    [junit4]   2> 883115 T5854 n:127.0.0.1:36633_mv%2Fls 
>> oasc.OverseerCollectionProcessor.run Process current queue of collection 
>> creations
>>    [junit4]   2> 883115 T5853 n:127.0.0.1:36633_mv%2Fls 
>> oasc.Overseer$ClusterStateUpdater.run Starting to work on the main queue
>>    [junit4]   2> 883115 T5824 n:127.0.0.1:36633_mv%2Fls 
>> oascc.ZkStateReader.createClusterStateWatchersAndUpdate Updating cluster 
>> state from ZooKeeper...
>>    [junit4]   2> 883117 T5824 n:127.0.0.1:36633_mv%2Fls 
>> oasc.CoreContainer.initializeAuthenticationPlugin No authentication plugin 
>> used.
>>    [junit4]   2> 883118 T5824 n:127.0.0.1:36633_mv%2Fls 
>> oasc.CoreContainer.intializeAuthorizationPlugin Security conf doesn't exist. 
>> Skipping setup for authorization module.
>>    [junit4]   2> 883119 T5824 n:127.0.0.1:36633_mv%2Fls 
>> oasc.CorePropertiesLocator.discover Looking for core definitions underneath 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/control-001/cores
>>    [junit4]   2> 883120 T5824 n:127.0.0.1:36633_mv%2Fls 
>> oasc.CoreDescriptor.<init> CORE DESCRIPTOR: {name=collection1, 
>> config=solrconfig.xml, transient=false, schema=schema.xml, 
>> loadOnStartup=true, 
>> instanceDir=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/control-001/cores/collection1, 
>> collection=control_collection, 
>> absoluteInstDir=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/control-001/cores/collection1/, coreNodeName=, 
>> dataDir=data/, shard=}
>>    [junit4]   2> 883120 T5824 n:127.0.0.1:36633_mv%2Fls 
>> oasc.CorePropertiesLocator.discoverUnder Found core collection1 in 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/control-001/cores/collection1/
>>    [junit4]   2> 883120 T5824 n:127.0.0.1:36633_mv%2Fls 
>> oasc.CorePropertiesLocator.discover Found 1 core definitions
>>    [junit4]   2> 883121 T5856 n:127.0.0.1:36633_mv%2Fls c:control_collection 
>> x:collection1 oasc.ZkController.publish publishing core=collection1 
>> state=down collection=control_collection
>>    [junit4]   2> 883122 T5856 n:127.0.0.1:36633_mv%2Fls c:control_collection 
>> x:collection1 oasc.ZkController.publish numShards not found on descriptor - 
>> reading it from system property
>>    [junit4]   2> 883122 T5852 n:127.0.0.1:36633_mv%2Fls 
>> oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path 
>> /overseer/queue state SyncConnected
>>    [junit4]   2> 883123 T5856 n:127.0.0.1:36633_mv%2Fls 
>> oasc.ZkController.waitForCoreNodeName look for our core node name
>>    [junit4]   2> 883124 T5853 n:127.0.0.1:36633_mv%2Fls 
>> oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message 
>> = {
>>    [junit4]   2>          "core":"collection1",
>>    [junit4]   2>          "roles":null,
>>    [junit4]   2>          "base_url":"http://127.0.0.1:36633/mv/ls",
>>    [junit4]   2>          "node_name":"127.0.0.1:36633_mv%2Fls",
>>    [junit4]   2>          "numShards":"1",
>>    [junit4]   2>          "state":"down",
>>    [junit4]   2>          "shard":null,
>>    [junit4]   2>          "collection":"control_collection",
>>    [junit4]   2>          "operation":"state"} current state version: 0
>>    [junit4]   2> 883125 T5853 n:127.0.0.1:36633_mv%2Fls 
>> oasco.ReplicaMutator.updateState Update state numShards=1 message={
>>    [junit4]   2>          "core":"collection1",
>>    [junit4]   2>          "roles":null,
>>    [junit4]   2>          "base_url":"http://127.0.0.1:36633/mv/ls",
>>    [junit4]   2>          "node_name":"127.0.0.1:36633_mv%2Fls",
>>    [junit4]   2>          "numShards":"1",
>>    [junit4]   2>          "state":"down",
>>    [junit4]   2>          "shard":null,
>>    [junit4]   2>          "collection":"control_collection",
>>    [junit4]   2>          "operation":"state"}
>>    [junit4]   2> 883125 T5853 n:127.0.0.1:36633_mv%2Fls 
>> oasco.ClusterStateMutator.createCollection building a new cName: 
>> control_collection
>>    [junit4]   2> 883126 T5853 n:127.0.0.1:36633_mv%2Fls 
>> oasco.ReplicaMutator.updateState Assigning new node to shard shard=shard1
>>    [junit4]   2> 883126 T5852 n:127.0.0.1:36633_mv%2Fls 
>> oascc.ZkStateReader$2.process A cluster state change: WatchedEvent 
>> state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has 
>> occurred - updating... (live nodes size: 1)
>>    [junit4]   2> 884123 T5856 n:127.0.0.1:36633_mv%2Fls 
>> oasc.ZkController.waitForShardId waiting to find shard id in clusterstate 
>> for collection1
>>    [junit4]   2> 884123 T5856 n:127.0.0.1:36633_mv%2Fls 
>> oasc.ZkController.createCollectionZkNode Check for collection 
>> zkNode:control_collection
>>    [junit4]   2> 884124 T5856 n:127.0.0.1:36633_mv%2Fls 
>> oasc.ZkController.createCollectionZkNode Collection zkNode exists
>>    [junit4]   2> 884124 T5856 n:127.0.0.1:36633_mv%2Fls 
>> oascc.ZkStateReader.readConfigName Load collection config 
>> from:/collections/control_collection
>>    [junit4]   2> 884125 T5856 n:127.0.0.1:36633_mv%2Fls 
>> oascc.ZkStateReader.readConfigName path=/collections/control_collection 
>> configName=conf1 specified config exists in ZooKeeper
>>    [junit4]   2> 884125 T5856 n:127.0.0.1:36633_mv%2Fls 
>> oasc.SolrResourceLoader.<init> new SolrResourceLoader for directory: 
>> '/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/control-001/cores/collection1/'
>>    [junit4]   2> 884135 T5856 n:127.0.0.1:36633_mv%2Fls oasc.Config.<init> 
>> loaded config solrconfig.xml with version 0
>>    [junit4]   2> 884139 T5856 n:127.0.0.1:36633_mv%2Fls 
>> oasc.SolrConfig.refreshRequestParams current version of requestparams : -1
>>    [junit4]   2> 884143 T5856 n:127.0.0.1:36633_mv%2Fls 
>> oasc.SolrConfig.<init> Using Lucene MatchVersion: 6.0.0
>>    [junit4]   2> 884152 T5856 n:127.0.0.1:36633_mv%2Fls 
>> oasc.SolrConfig.<init> Loaded SolrConfig: solrconfig.xml
>>    [junit4]   2> 884153 T5856 n:127.0.0.1:36633_mv%2Fls 
>> oass.IndexSchema.readSchema Reading Solr Schema from 
>> /configs/conf1/schema.xml
>>    [junit4]   2> 884158 T5856 n:127.0.0.1:36633_mv%2Fls 
>> oass.IndexSchema.readSchema [collection1] Schema name=test
>>    [junit4]   2> 884262 T5856 n:127.0.0.1:36633_mv%2Fls 
>> oass.IndexSchema.readSchema default search field in schema is text
>>    [junit4]   2> 884264 T5856 n:127.0.0.1:36633_mv%2Fls 
>> oass.IndexSchema.readSchema unique key field: id
>>    [junit4]   2> 884265 T5856 n:127.0.0.1:36633_mv%2Fls 
>> oass.FileExchangeRateProvider.reload Reloading exchange rates from file 
>> currency.xml
>>    [junit4]   2> 884267 T5856 n:127.0.0.1:36633_mv%2Fls 
>> oass.FileExchangeRateProvider.reload Reloading exchange rates from file 
>> currency.xml
>>    [junit4]   2> 884277 T5856 n:127.0.0.1:36633_mv%2Fls 
>> oasc.CoreContainer.create Creating SolrCore 'collection1' using 
>> configuration from collection control_collection
>>    [junit4]   2> 884278 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasc.SolrCore.initDirectoryFactory solr.StandardDirectoryFactory
>>    [junit4]   2> 884278 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasc.SolrCore.<init> [[collection1] ] Opening new SolrCore at 
>> [/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/control-001/cores/collection1/], dataDir=[null]
>>    [junit4]   2> 884278 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasc.JmxMonitoredMap.<init> JMX monitoring is enabled. Adding Solr mbeans to 
>> JMX Server: com.sun.jmx.mbeanserver.JmxMBeanServer@4ee1e2
>>    [junit4]   2> 884279 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasc.CachingDirectoryFactory.get return new directory for 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/control-001/cores/collection1/data
>>    [junit4]   2> 884279 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasc.SolrCore.getNewIndexDir New index directory detected: old=null 
>> new=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/control-001/cores/collection1/data/index/
>>    [junit4]   2> 884280 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasc.SolrCore.initIndex WARN [collection1] Solr index directory 
>> '/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/control-001/cores/collection1/data/index' doesn't 
>> exist. Creating new index...
>>    [junit4]   2> 884280 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasc.CachingDirectoryFactory.get return new directory for 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/control-001/cores/collection1/data/index
>>    [junit4]   2> 884281 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasu.RandomMergePolicy.<init> RandomMergePolicy wrapping class 
>> org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
>> maxMergeAtOnce=50, maxMergeAtOnceExplicit=35, 
>> maxMergedSegmentMB=15.5849609375, floorSegmentMB=1.583984375, 
>> forceMergeDeletesPctAllowed=11.317874839400897, segmentsPerTier=13.0, 
>> maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=1.0
>>    [junit4]   2> 884303 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasc.SolrDeletionPolicy.onCommit SolrDeletionPolicy.onCommit: commits: num=1
>>    [junit4]   2>                
>> commit{dir=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  
>> C3A1DDED6178C6E2-001/control-001/cores/collection1/data/index,segFN=segments_1,generation=1}
>>    [junit4]   2> 884303 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasc.SolrDeletionPolicy.updateCommits newest commit generation = 1
>>    [junit4]   2> 884307 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
>> "nodistrib"
>>    [junit4]   2> 884307 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
>> "dedupe"
>>    [junit4]   2> 884307 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasup.UpdateRequestProcessorChain.init inserting 
>> DistributedUpdateProcessorFactory into updateRequestProcessorChain "dedupe"
>>    [junit4]   2> 884308 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
>> "stored_sig"
>>    [junit4]   2> 884308 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasup.UpdateRequestProcessorChain.init inserting 
>> DistributedUpdateProcessorFactory into updateRequestProcessorChain 
>> "stored_sig"
>>    [junit4]   2> 884308 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
>> "distrib-dup-test-chain-explicit"
>>    [junit4]   2> 884308 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
>> "distrib-dup-test-chain-implicit"
>>    [junit4]   2> 884309 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasup.UpdateRequestProcessorChain.init inserting 
>> DistributedUpdateProcessorFactory into updateRequestProcessorChain 
>> "distrib-dup-test-chain-implicit"
>>    [junit4]   2> 884309 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasc.SolrCore.loadUpdateProcessorChains no updateRequestProcessorChain 
>> defined as default, creating implicit default
>>    [junit4]   2> 884311 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
>>    [junit4]   2> 884312 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
>>    [junit4]   2> 884313 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
>>    [junit4]   2> 884314 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
>>    [junit4]   2> 884317 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasc.RequestHandlers.initHandlersFromConfig Registered paths: 
>> /admin/mbeans,standard,/update/csv,/update/json/docs,/admin/luke,/admin/segments,/get,/admin/system,/replication,/admin/properties,/config,/schema,/admin/plugins,/admin/logging,/update/json,/admin/threads,/admin/ping,/update,/admin/file
>>    [junit4]   2> 884317 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasc.SolrCore.initStatsCache Using default statsCache cache: 
>> org.apache.solr.search.stats.LocalStatsCache
>>    [junit4]   2> 884318 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasu.UpdateHandler.<init> Using UpdateLog implementation: 
>> org.apache.solr.update.UpdateLog
>>    [junit4]   2> 884318 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasu.UpdateLog.init Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH 
>> numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=256
>>    [junit4]   2> 884318 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasu.CommitTracker.<init> Hard AutoCommit: disabled
>>    [junit4]   2> 884319 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasu.CommitTracker.<init> Soft AutoCommit: disabled
>>    [junit4]   2> 884319 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasu.RandomMergePolicy.<init> RandomMergePolicy wrapping class 
>> org.apache.lucene.index.AlcoholicMergePolicy: [AlcoholicMergePolicy: 
>> minMergeSize=0, mergeFactor=10, maxMergeSize=23824830, 
>> maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=true, 
>> maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, 
>> noCFSRatio=0.1]
>>    [junit4]   2> 884320 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasc.SolrDeletionPolicy.onInit SolrDeletionPolicy.onInit: commits: num=1
>>    [junit4]   2>                
>> commit{dir=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  
>> C3A1DDED6178C6E2-001/control-001/cores/collection1/data/index,segFN=segments_1,generation=1}
>>    [junit4]   2> 884320 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasc.SolrDeletionPolicy.updateCommits newest commit generation = 1
>>    [junit4]   2> 884321 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oass.SolrIndexSearcher.<init> Opening Searcher@628bff[collection1] main
>>    [junit4]   2> 884321 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasu.UpdateLog.onFirstSearcher On first searcher opened, looking up max 
>> value of version field
>>    [junit4]   2> 884321 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasu.VersionInfo.getMaxVersionFromIndex Refreshing highest value of 
>> _version_ for 256 version buckets from index
>>    [junit4]   2> 884321 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasu.VersionInfo.getMaxVersionFromIndex WARN No terms found for _version_, 
>> cannot seed version bucket highest value from index
>>    [junit4]   2> 884322 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasu.UpdateLog.seedBucketsWithHighestVersion WARN Could not find max version 
>> in index or recent updates, using new clock 1501773280274546688
>>    [junit4]   2> 884322 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasu.UpdateLog.seedBucketsWithHighestVersion Took 1 ms to seed version 
>> buckets with highest version 1501773280274546688
>>    [junit4]   2> 884322 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oascc.ZkStateReader.readConfigName Load collection config 
>> from:/collections/control_collection
>>    [junit4]   2> 884323 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oascc.ZkStateReader.readConfigName path=/collections/control_collection 
>> configName=conf1 specified config exists in ZooKeeper
>>    [junit4]   2> 884323 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasr.ManagedResourceStorage.newStorageIO Setting up ZooKeeper-based storage 
>> for the RestManager with znodeBase: /configs/conf1
>>    [junit4]   2> 884323 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasr.ManagedResourceStorage$ZooKeeperStorageIO.configure Configured 
>> ZooKeeperStorageIO with znodeBase: /configs/conf1
>>    [junit4]   2> 884323 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasr.RestManager.init Initializing RestManager with initArgs: {}
>>    [junit4]   2> 884323 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasr.ManagedResourceStorage.load Reading _rest_managed.json using 
>> ZooKeeperStorageIO:path=/configs/conf1
>>    [junit4]   2> 884324 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasr.ManagedResourceStorage$ZooKeeperStorageIO.openInputStream No data found 
>> for znode /configs/conf1/_rest_managed.json
>>    [junit4]   2> 884324 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasr.ManagedResourceStorage.load Loaded null at path _rest_managed.json 
>> using ZooKeeperStorageIO:path=/configs/conf1
>>    [junit4]   2> 884324 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasr.RestManager.init Initializing 0 registered ManagedResources
>>    [junit4]   2> 884325 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oash.ReplicationHandler.inform Commits will be reserved for  10000
>>    [junit4]   2> 884325 T5857 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasc.SolrCore.registerSearcher [collection1] Registered new searcher 
>> Searcher@628bff[collection1] 
>> main{ExitableDirectoryReader(UninvertingDirectoryReader())}
>>    [junit4]   2> 884326 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasc.ZkController.getConfDirListeners watch zkdir /configs/conf1
>>    [junit4]   2> 884326 T5856 n:127.0.0.1:36633_mv%2Fls x:collection1 
>> oasc.CoreContainer.registerCore registering core: collection1
>>    [junit4]   2> 884327 T5860 n:127.0.0.1:36633_mv%2Fls c:control_collection 
>> s:shard1 x:collection1 oasc.ZkController.register Register replica - 
>> core:collection1 address:http://127.0.0.1:36633/mv/ls 
>> collection:control_collection shard:shard1
>>    [junit4]   2> 884327 T5824 n:127.0.0.1:36633_mv%2Fls 
>> oass.SolrDispatchFilter.init 
>> user.dir=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1
>>    [junit4]   2> 884327 T5824 n:127.0.0.1:36633_mv%2Fls 
>> oass.SolrDispatchFilter.init SolrDispatchFilter.init() done
>>    [junit4]   2> 884328 T5860 n:127.0.0.1:36633_mv%2Fls c:control_collection 
>> s:shard1 x:collection1 oascc.SolrZkClient.makePath makePath: 
>> /collections/control_collection/leader_elect/shard1/election
>>    [junit4]   2> 884329 T5824 
>> oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
>> ZkCredentialsProvider
>>    [junit4]   2> 884329 T5824 oascc.ConnectionManager.waitForConnected 
>> Waiting for client to connect to ZooKeeper
>>    [junit4]   2> 884333 T5863 oascc.ConnectionManager.process Watcher 
>> org.apache.solr.common.cloud.ConnectionManager@afdab3 
>> name:ZooKeeperConnection Watcher:127.0.0.1:53715/solr got event WatchedEvent 
>> state:SyncConnected type:None path:null path:null type:None
>>    [junit4]   2> 884333 T5824 oascc.ConnectionManager.waitForConnected 
>> Client is connected to ZooKeeper
>>    [junit4]   2> 884334 T5860 n:127.0.0.1:36633_mv%2Fls c:control_collection 
>> s:shard1 x:collection1 oasc.ShardLeaderElectionContext.runLeaderProcess 
>> Running the leader process for shard shard1
>>    [junit4]   2> 884334 T5824 oascc.SolrZkClient.createZkACLProvider Using 
>> default ZkACLProvider
>>    [junit4]   2> 884334 T5824 
>> oascc.ZkStateReader.createClusterStateWatchersAndUpdate Updating cluster 
>> state from ZooKeeper...
>>    [junit4]   2> 884335 T5852 n:127.0.0.1:36633_mv%2Fls 
>> oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path 
>> /overseer/queue state SyncConnected
>>    [junit4]   2> 884335 T5860 n:127.0.0.1:36633_mv%2Fls c:control_collection 
>> s:shard1 x:collection1 
>> oasc.ShardLeaderElectionContext.waitForReplicasToComeUp Enough replicas 
>> found to continue.
>>    [junit4]   2> 884335 T5853 n:127.0.0.1:36633_mv%2Fls 
>> oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message 
>> = {
>>    [junit4]   2>          "operation":"leader",
>>    [junit4]   2>          "shard":"shard1",
>>    [junit4]   2>          "collection":"control_collection"} current state 
>> version: 1
>>    [junit4]   2> 884335 T5860 n:127.0.0.1:36633_mv%2Fls c:control_collection 
>> s:shard1 x:collection1 oasc.ShardLeaderElectionContext.runLeaderProcess I 
>> may be the new leader - try and sync
>>    [junit4]   2> ASYNC  NEW_CORE C1660 name=collection1 
>> org.apache.solr.core.SolrCore@919719 
>> url=http://127.0.0.1:36633/mv/ls/collection1 node=127.0.0.1:36633_mv%2Fls 
>> C1660_STATE=coll:control_collection core:collection1 
>> props:{core=collection1, base_url=http://127.0.0.1:36633/mv/ls, 
>> node_name=127.0.0.1:36633_mv%2Fls, state=down}
>>    [junit4]   2> 884336 T5860 n:127.0.0.1:36633_mv%2Fls c:control_collection 
>> s:shard1 x:collection1 C1660 oasc.SyncStrategy.sync Sync replicas to 
>> http://127.0.0.1:36633/mv/ls/collection1/
>>    [junit4]   2> 884336 T5860 n:127.0.0.1:36633_mv%2Fls c:control_collection 
>> s:shard1 x:collection1 C1660 oasc.SyncStrategy.syncReplicas Sync Success - 
>> now sync replicas to me
>>    [junit4]   2> 884336 T5860 n:127.0.0.1:36633_mv%2Fls c:control_collection 
>> s:shard1 x:collection1 C1660 oasc.SyncStrategy.syncToMe 
>> http://127.0.0.1:36633/mv/ls/collection1/ has no replicas
>>    [junit4]   2> 884336 T5860 n:127.0.0.1:36633_mv%2Fls c:control_collection 
>> s:shard1 x:collection1 oasc.ShardLeaderElectionContext.runLeaderProcess I am 
>> the new leader: http://127.0.0.1:36633/mv/ls/collection1/ shard1
>>    [junit4]   2> 884337 T5860 n:127.0.0.1:36633_mv%2Fls c:control_collection 
>> s:shard1 x:collection1 oascc.SolrZkClient.makePath makePath: 
>> /collections/control_collection/leaders/shard1
>>    [junit4]   2> 884337 T5824 oasc.ChaosMonkey.monkeyLog monkey: init - 
>> expire sessions:false cause connection loss:false
>>    [junit4]   2> 884347 T5852 n:127.0.0.1:36633_mv%2Fls 
>> oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path 
>> /overseer/queue state SyncConnected
>>    [junit4]   2> 884352 T5853 n:127.0.0.1:36633_mv%2Fls 
>> oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message 
>> = {
>>    [junit4]   2>          "operation":"leader",
>>    [junit4]   2>          "shard":"shard1",
>>    [junit4]   2>          "collection":"control_collection",
>>    [junit4]   2>          "base_url":"http://127.0.0.1:36633/mv/ls";,
>>    [junit4]   2>          "core":"collection1",
>>    [junit4]   2>          "state":"active"} current state version: 1
>>    [junit4]   2> 884404 T5824 oas.SolrTestCaseJ4.writeCoreProperties Writing 
>> core.properties file to 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-1-001/cores/collection1
>>    [junit4]   2> 884405 T5824 
>> oasc.AbstractFullDistribZkTestBase.createJettys create jetty 1 in directory 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-1-001
>>    [junit4]   2> 884405 T5824 oejs.Server.doStart jetty-9.2.10.v20150310
>>    [junit4]   2> 884406 T5824 oejsh.ContextHandler.doStart Started 
>> o.e.j.s.ServletContextHandler@32c2b0{/mv/ls,null,AVAILABLE}
>>    [junit4]   2> 884407 T5824 oejs.AbstractConnector.doStart Started 
>> ServerConnector@3d92fc{HTTP/1.1}{127.0.0.1:34877}
>>    [junit4]   2> 884407 T5824 oejs.Server.doStart Started @885365ms
>>    [junit4]   2> 884407 T5824 oascse.JettySolrRunner$1.lifeCycleStarted 
>> Jetty properties: 
>> {solr.data.dir=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/tempDir-001/jetty1, solrconfig=solrconfig.xml, 
>> hostContext=/mv/ls, hostPort=34877, 
>> coreRootDirectory=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-1-001/cores}
>>    [junit4]   2> 884408 T5824 oass.SolrDispatchFilter.init 
>> SolrDispatchFilter.init()sun.misc.Launcher$AppClassLoader@e2f2a
>>    [junit4]   2> 884408 T5824 oasc.SolrResourceLoader.<init> new 
>> SolrResourceLoader for directory: 
>> '/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-1-001/'
>>    [junit4]   2> 884419 T5824 oasc.SolrXmlConfig.fromFile Loading container 
>> configuration from 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-1-001/solr.xml
>>    [junit4]   2> 884422 T5824 oasc.CorePropertiesLocator.<init> 
>> Config-defined core root directory: 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-1-001/cores
>>    [junit4]   2> 884423 T5824 oasc.CoreContainer.<init> New CoreContainer 
>> 14834921
>>    [junit4]   2> 884423 T5824 oasc.CoreContainer.load Loading cores into 
>> CoreContainer 
>> [instanceDir=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-1-001/]
>>    [junit4]   2> 884424 T5824 oasc.CoreContainer.load loading shared 
>> library: 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-1-001/lib
>>    [junit4]   2> 884424 T5824 oasc.SolrResourceLoader.addToClassLoader WARN 
>> Can't find (or read) directory to add to classloader: lib (resolved as: 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-1-001/lib).
>>    [junit4]   2> 884428 T5824 oashc.HttpShardHandlerFactory.init created 
>> with socketTimeout : 90000,urlScheme : ,connTimeout : 
>> 15000,maxConnectionsPerHost : 20,maxConnections : 10000,corePoolSize : 
>> 0,maximumPoolSize : 2147483647,maxThreadIdleTime : 5,sizeOfQueue : 
>> -1,fairnessPolicy : false,useRetries : false,
>>    [junit4]   2> 884430 T5824 oasu.UpdateShardHandler.<init> Creating 
>> UpdateShardHandler HTTP client with params: 
>> socketTimeout=340000&connTimeout=45000&retry=true
>>    [junit4]   2> 884430 T5824 oasl.LogWatcher.createWatcher SLF4J impl is 
>> org.slf4j.impl.Log4jLoggerFactory
>>    [junit4]   2> 884430 T5824 oasl.LogWatcher.newRegisteredLogWatcher 
>> Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
>>    [junit4]   2> 884431 T5824 oasc.CoreContainer.load Node Name: 127.0.0.1
>>    [junit4]   2> 884431 T5824 oasc.ZkContainer.initZooKeeper Zookeeper 
>> client=127.0.0.1:53715/solr
>>    [junit4]   2> 884431 T5824 oasc.ZkController.checkChrootPath zkHost 
>> includes chroot
>>    [junit4]   2> 884431 T5824 
>> oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
>> ZkCredentialsProvider
>>    [junit4]   2> 884432 T5824 oascc.ConnectionManager.waitForConnected 
>> Waiting for client to connect to ZooKeeper
>>    [junit4]   2> 884433 T5877 oascc.ConnectionManager.process Watcher 
>> org.apache.solr.common.cloud.ConnectionManager@4b6185 
>> name:ZooKeeperConnection Watcher:127.0.0.1:53715 got event WatchedEvent 
>> state:SyncConnected type:None path:null path:null type:None
>>    [junit4]   2> 884433 T5824 oascc.ConnectionManager.waitForConnected 
>> Client is connected to ZooKeeper
>>    [junit4]   2> 884434 T5824 oascc.SolrZkClient.createZkACLProvider Using 
>> default ZkACLProvider
>>    [junit4]   2> 884435 T5824 n:127.0.0.1:34877_mv%2Fls 
>> oascc.ConnectionManager.waitForConnected Waiting for client to connect to 
>> ZooKeeper
>>    [junit4]   2> 884436 T5880 n:127.0.0.1:34877_mv%2Fls 
>> oascc.ConnectionManager.process Watcher 
>> org.apache.solr.common.cloud.ConnectionManager@10b1970 
>> name:ZooKeeperConnection Watcher:127.0.0.1:53715/solr got event WatchedEvent 
>> state:SyncConnected type:None path:null path:null type:None
>>    [junit4]   2> 884436 T5824 n:127.0.0.1:34877_mv%2Fls 
>> oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper
>>    [junit4]   2> 884439 T5824 n:127.0.0.1:34877_mv%2Fls 
>> oascc.ZkStateReader.createClusterStateWatchersAndUpdate Updating cluster 
>> state from ZooKeeper...
>>    [junit4]   2> 884454 T5852 n:127.0.0.1:36633_mv%2Fls 
>> oascc.ZkStateReader$2.process A cluster state change: WatchedEvent 
>> state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has 
>> occurred - updating... (live nodes size: 1)
>>    [junit4]   2> 884454 T5880 n:127.0.0.1:34877_mv%2Fls 
>> oascc.ZkStateReader$2.process A cluster state change: WatchedEvent 
>> state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has 
>> occurred - updating... (live nodes size: 1)
>>    [junit4]   2> 884460 T5863 oascc.ZkStateReader$2.process A cluster state 
>> change: WatchedEvent state:SyncConnected type:NodeDataChanged 
>> path:/clusterstate.json, has occurred - updating... (live nodes size: 1)
>>    [junit4]   2> 884503 T5860 n:127.0.0.1:36633_mv%2Fls c:control_collection 
>> s:shard1 x:collection1 oasc.ZkController.register We are 
>> http://127.0.0.1:36633/mv/ls/collection1/ and leader is 
>> http://127.0.0.1:36633/mv/ls/collection1/
>>    [junit4]   2> 884503 T5860 n:127.0.0.1:36633_mv%2Fls c:control_collection 
>> s:shard1 x:collection1 oasc.ZkController.register No LogReplay needed for 
>> core=collection1 baseURL=http://127.0.0.1:36633/mv/ls
>>    [junit4]   2> 884503 T5860 n:127.0.0.1:36633_mv%2Fls c:control_collection 
>> s:shard1 x:collection1 oasc.ZkController.checkRecovery I am the leader, no 
>> recovery necessary
>>    [junit4]   2> 884504 T5860 n:127.0.0.1:36633_mv%2Fls c:control_collection 
>> s:shard1 x:collection1 oasc.ZkController.publish publishing core=collection1 
>> state=active collection=control_collection
>>    [junit4]   2> 884504 T5860 n:127.0.0.1:36633_mv%2Fls c:control_collection 
>> s:shard1 x:collection1 oasc.ZkController.publish numShards not found on 
>> descriptor - reading it from system property
>>    [junit4]   2> 884505 T5852 n:127.0.0.1:36633_mv%2Fls 
>> oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path 
>> /overseer/queue state SyncConnected
>>    [junit4]   2> 884509 T5853 n:127.0.0.1:36633_mv%2Fls 
>> oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message 
>> = {
>>    [junit4]   2>          "core":"collection1",
>>    [junit4]   2>          "core_node_name":"core_node1",
>>    [junit4]   2>          "roles":null,
>>    [junit4]   2>          "base_url":"http://127.0.0.1:36633/mv/ls";,
>>    [junit4]   2>          "node_name":"127.0.0.1:36633_mv%2Fls",
>>    [junit4]   2>          "numShards":"2",
>>    [junit4]   2>          "state":"active",
>>    [junit4]   2>          "shard":"shard1",
>>    [junit4]   2>          "collection":"control_collection",
>>    [junit4]   2>          "operation":"state"} current state version: 2
>>    [junit4]   2> 884510 T5853 n:127.0.0.1:36633_mv%2Fls 
>> oasco.ReplicaMutator.updateState Update state numShards=2 message={
>>    [junit4]   2>          "core":"collection1",
>>    [junit4]   2>          "core_node_name":"core_node1",
>>    [junit4]   2>          "roles":null,
>>    [junit4]   2>          "base_url":"http://127.0.0.1:36633/mv/ls";,
>>    [junit4]   2>          "node_name":"127.0.0.1:36633_mv%2Fls",
>>    [junit4]   2>          "numShards":"2",
>>    [junit4]   2>          "state":"active",
>>    [junit4]   2>          "shard":"shard1",
>>    [junit4]   2>          "collection":"control_collection",
>>    [junit4]   2>          "operation":"state"}
>>    [junit4]   2> 884612 T5863 oascc.ZkStateReader$2.process A cluster state 
>> change: WatchedEvent state:SyncConnected type:NodeDataChanged 
>> path:/clusterstate.json, has occurred - updating... (live nodes size: 1)
>>    [junit4]   2> 884612 T5852 n:127.0.0.1:36633_mv%2Fls 
>> oascc.ZkStateReader$2.process A cluster state change: WatchedEvent 
>> state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has 
>> occurred - updating... (live nodes size: 1)
>>    [junit4]   2> 884612 T5880 n:127.0.0.1:34877_mv%2Fls 
>> oascc.ZkStateReader$2.process A cluster state change: WatchedEvent 
>> state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has 
>> occurred - updating... (live nodes size: 1)
>>    [junit4]   2> 885441 T5824 n:127.0.0.1:34877_mv%2Fls 
>> oasc.ZkController.createEphemeralLiveNode Register node as live in 
>> ZooKeeper:/live_nodes/127.0.0.1:34877_mv%2Fls
>>    [junit4]   2> 885442 T5824 n:127.0.0.1:34877_mv%2Fls 
>> oascc.SolrZkClient.makePath makePath: /live_nodes/127.0.0.1:34877_mv%2Fls
>>    [junit4]   2> 885443 T5824 n:127.0.0.1:34877_mv%2Fls oasc.Overseer.close 
>> Overseer (id=null) closing
>>    [junit4]   2> 885444 T5824 n:127.0.0.1:34877_mv%2Fls 
>> oasc.CoreContainer.initializeAuthenticationPlugin No authentication plugin 
>> used.
>>    [junit4]   2> 885445 T5824 n:127.0.0.1:34877_mv%2Fls 
>> oasc.CoreContainer.intializeAuthorizationPlugin Security conf doesn't exist. 
>> Skipping setup for authorization module.
>>    [junit4]   2> 885445 T5824 n:127.0.0.1:34877_mv%2Fls 
>> oasc.CorePropertiesLocator.discover Looking for core definitions underneath 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-1-001/cores
>>    [junit4]   2> 885446 T5824 n:127.0.0.1:34877_mv%2Fls 
>> oasc.CoreDescriptor.<init> CORE DESCRIPTOR: {name=collection1, 
>> config=solrconfig.xml, transient=false, schema=schema.xml, 
>> loadOnStartup=true, 
>> instanceDir=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-1-001/cores/collection1, collection=collection1, 
>> absoluteInstDir=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-1-001/cores/collection1/, coreNodeName=, 
>> dataDir=data/, shard=}
>>    [junit4]   2> 885447 T5824 n:127.0.0.1:34877_mv%2Fls 
>> oasc.CorePropertiesLocator.discoverUnder Found core collection1 in 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-1-001/cores/collection1/
>>    [junit4]   2> 885447 T5824 n:127.0.0.1:34877_mv%2Fls 
>> oasc.CorePropertiesLocator.discover Found 1 core definitions
>>    [junit4]   2> 885448 T5881 n:127.0.0.1:34877_mv%2Fls c:collection1 
>> x:collection1 oasc.ZkController.publish publishing core=collection1 
>> state=down collection=collection1
>>    [junit4]   2> 885448 T5881 n:127.0.0.1:34877_mv%2Fls c:collection1 
>> x:collection1 oasc.ZkController.publish numShards not found on descriptor - 
>> reading it from system property
>>    [junit4]   2> 885448 T5852 n:127.0.0.1:36633_mv%2Fls 
>> oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path 
>> /overseer/queue state SyncConnected
>>    [junit4]   2> 885448 T5881 n:127.0.0.1:34877_mv%2Fls 
>> oasc.ZkController.waitForCoreNodeName look for our core node name
>>    [junit4]   2> 885449 T5853 n:127.0.0.1:36633_mv%2Fls 
>> oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message 
>> = {
>>    [junit4]   2>          "core":"collection1",
>>    [junit4]   2>          "roles":null,
>>    [junit4]   2>          "base_url":"http://127.0.0.1:34877/mv/ls";,
>>    [junit4]   2>          "node_name":"127.0.0.1:34877_mv%2Fls",
>>    [junit4]   2>          "numShards":"2",
>>    [junit4]   2>          "state":"down",
>>    [junit4]   2>          "shard":null,
>>    [junit4]   2>          "collection":"collection1",
>>    [junit4]   2>          "operation":"state"} current state version: 3
>>    [junit4]   2> 885449 T5853 n:127.0.0.1:36633_mv%2Fls 
>> oasco.ReplicaMutator.updateState Update state numShards=2 message={
>>    [junit4]   2>          "core":"collection1",
>>    [junit4]   2>          "roles":null,
>>    [junit4]   2>          "base_url":"http://127.0.0.1:34877/mv/ls";,
>>    [junit4]   2>          "node_name":"127.0.0.1:34877_mv%2Fls",
>>    [junit4]   2>          "numShards":"2",
>>    [junit4]   2>          "state":"down",
>>    [junit4]   2>          "shard":null,
>>    [junit4]   2>          "collection":"collection1",
>>    [junit4]   2>          "operation":"state"}
>>    [junit4]   2> 885449 T5853 n:127.0.0.1:36633_mv%2Fls 
>> oasco.ClusterStateMutator.createCollection building a new cName: collection1
>>    [junit4]   2> 885450 T5853 n:127.0.0.1:36633_mv%2Fls 
>> oasco.ReplicaMutator.updateState Assigning new node to shard shard=shard2
>>    [junit4]   2> 885551 T5863 oascc.ZkStateReader$2.process A cluster state 
>> change: WatchedEvent state:SyncConnected type:NodeDataChanged 
>> path:/clusterstate.json, has occurred - updating... (live nodes size: 2)
>>    [junit4]   2> 885551 T5880 n:127.0.0.1:34877_mv%2Fls 
>> oascc.ZkStateReader$2.process A cluster state change: WatchedEvent 
>> state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has 
>> occurred - updating... (live nodes size: 2)
>>    [junit4]   2> 885551 T5852 n:127.0.0.1:36633_mv%2Fls 
>> oascc.ZkStateReader$2.process A cluster state change: WatchedEvent 
>> state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has 
>> occurred - updating... (live nodes size: 2)
>>    [junit4]   2> 886449 T5881 n:127.0.0.1:34877_mv%2Fls 
>> oasc.ZkController.waitForShardId waiting to find shard id in clusterstate 
>> for collection1
>>    [junit4]   2> 886449 T5881 n:127.0.0.1:34877_mv%2Fls 
>> oasc.ZkController.createCollectionZkNode Check for collection 
>> zkNode:collection1
>>    [junit4]   2> 886450 T5881 n:127.0.0.1:34877_mv%2Fls 
>> oasc.ZkController.createCollectionZkNode Collection zkNode exists
>>    [junit4]   2> 886450 T5881 n:127.0.0.1:34877_mv%2Fls 
>> oascc.ZkStateReader.readConfigName Load collection config 
>> from:/collections/collection1
>>    [junit4]   2> 886450 T5881 n:127.0.0.1:34877_mv%2Fls 
>> oascc.ZkStateReader.readConfigName path=/collections/collection1 
>> configName=conf1 specified config exists in ZooKeeper
>>    [junit4]   2> 886451 T5881 n:127.0.0.1:34877_mv%2Fls 
>> oasc.SolrResourceLoader.<init> new SolrResourceLoader for directory: 
>> '/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-1-001/cores/collection1/'
>>    [junit4]   2> 886461 T5881 n:127.0.0.1:34877_mv%2Fls oasc.Config.<init> 
>> loaded config solrconfig.xml with version 0
>>    [junit4]   2> 886465 T5881 n:127.0.0.1:34877_mv%2Fls 
>> oasc.SolrConfig.refreshRequestParams current version of requestparams : -1
>>    [junit4]   2> 886481 T5881 n:127.0.0.1:34877_mv%2Fls 
>> oasc.SolrConfig.<init> Using Lucene MatchVersion: 6.0.0
>>    [junit4]   2> 886488 T5881 n:127.0.0.1:34877_mv%2Fls 
>> oasc.SolrConfig.<init> Loaded SolrConfig: solrconfig.xml
>>    [junit4]   2> 886489 T5881 n:127.0.0.1:34877_mv%2Fls 
>> oass.IndexSchema.readSchema Reading Solr Schema from 
>> /configs/conf1/schema.xml
>>    [junit4]   2> 886492 T5881 n:127.0.0.1:34877_mv%2Fls 
>> oass.IndexSchema.readSchema [collection1] Schema name=test
>>    [junit4]   2> 886568 T5881 n:127.0.0.1:34877_mv%2Fls 
>> oass.IndexSchema.readSchema default search field in schema is text
>>    [junit4]   2> 886569 T5881 n:127.0.0.1:34877_mv%2Fls 
>> oass.IndexSchema.readSchema unique key field: id
>>    [junit4]   2> 886570 T5881 n:127.0.0.1:34877_mv%2Fls 
>> oass.FileExchangeRateProvider.reload Reloading exchange rates from file 
>> currency.xml
>>    [junit4]   2> 886571 T5881 n:127.0.0.1:34877_mv%2Fls 
>> oass.FileExchangeRateProvider.reload Reloading exchange rates from file 
>> currency.xml
>>    [junit4]   2> 886582 T5881 n:127.0.0.1:34877_mv%2Fls 
>> oasc.CoreContainer.create Creating SolrCore 'collection1' using 
>> configuration from collection collection1
>>    [junit4]   2> 886582 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasc.SolrCore.initDirectoryFactory solr.StandardDirectoryFactory
>>    [junit4]   2> 886582 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasc.SolrCore.<init> [[collection1] ] Opening new SolrCore at 
>> [/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-1-001/cores/collection1/], dataDir=[null]
>>    [junit4]   2> 886583 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasc.JmxMonitoredMap.<init> JMX monitoring is enabled. Adding Solr mbeans to 
>> JMX Server: com.sun.jmx.mbeanserver.JmxMBeanServer@4ee1e2
>>    [junit4]   2> 886583 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasc.CachingDirectoryFactory.get return new directory for 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-1-001/cores/collection1/data
>>    [junit4]   2> 886583 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasc.SolrCore.getNewIndexDir New index directory detected: old=null 
>> new=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-1-001/cores/collection1/data/index/
>>    [junit4]   2> 886583 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasc.SolrCore.initIndex WARN [collection1] Solr index directory 
>> '/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-1-001/cores/collection1/data/index' doesn't 
>> exist. Creating new index...
>>    [junit4]   2> 886584 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasc.CachingDirectoryFactory.get return new directory for 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-1-001/cores/collection1/data/index
>>    [junit4]   2> 886584 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasu.RandomMergePolicy.<init> RandomMergePolicy wrapping class 
>> org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
>> maxMergeAtOnce=50, maxMergeAtOnceExplicit=35, 
>> maxMergedSegmentMB=15.5849609375, floorSegmentMB=1.583984375, 
>> forceMergeDeletesPctAllowed=11.317874839400897, segmentsPerTier=13.0, 
>> maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=1.0
>>    [junit4]   2> 886603 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasc.SolrDeletionPolicy.onCommit SolrDeletionPolicy.onCommit: commits: num=1
>>    [junit4]   2>                
>> commit{dir=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  
>> C3A1DDED6178C6E2-001/shard-1-001/cores/collection1/data/index,segFN=segments_1,generation=1}
>>    [junit4]   2> 886603 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasc.SolrDeletionPolicy.updateCommits newest commit generation = 1
>>    [junit4]   2> 886606 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
>> "nodistrib"
>>    [junit4]   2> 886607 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
>> "dedupe"
>>    [junit4]   2> 886607 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasup.UpdateRequestProcessorChain.init inserting 
>> DistributedUpdateProcessorFactory into updateRequestProcessorChain "dedupe"
>>    [junit4]   2> 886607 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
>> "stored_sig"
>>    [junit4]   2> 886607 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasup.UpdateRequestProcessorChain.init inserting 
>> DistributedUpdateProcessorFactory into updateRequestProcessorChain 
>> "stored_sig"
>>    [junit4]   2> 886607 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
>> "distrib-dup-test-chain-explicit"
>>    [junit4]   2> 886608 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
>> "distrib-dup-test-chain-implicit"
>>    [junit4]   2> 886608 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasup.UpdateRequestProcessorChain.init inserting 
>> DistributedUpdateProcessorFactory into updateRequestProcessorChain 
>> "distrib-dup-test-chain-implicit"
>>    [junit4]   2> 886608 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasc.SolrCore.loadUpdateProcessorChains no updateRequestProcessorChain 
>> defined as default, creating implicit default
>>    [junit4]   2> 886609 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
>>    [junit4]   2> 886610 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
>>    [junit4]   2> 886611 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
>>    [junit4]   2> 886611 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
>>    [junit4]   2> 886614 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasc.RequestHandlers.initHandlersFromConfig Registered paths: 
>> /admin/mbeans,standard,/update/csv,/update/json/docs,/admin/luke,/admin/segments,/get,/admin/system,/replication,/admin/properties,/config,/schema,/admin/plugins,/admin/logging,/update/json,/admin/threads,/admin/ping,/update,/admin/file
>>    [junit4]   2> 886615 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasc.SolrCore.initStatsCache Using default statsCache cache: 
>> org.apache.solr.search.stats.LocalStatsCache
>>    [junit4]   2> 886615 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasu.UpdateHandler.<init> Using UpdateLog implementation: 
>> org.apache.solr.update.UpdateLog
>>    [junit4]   2> 886615 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasu.UpdateLog.init Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH 
>> numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=256
>>    [junit4]   2> 886616 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasu.CommitTracker.<init> Hard AutoCommit: disabled
>>    [junit4]   2> 886616 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasu.CommitTracker.<init> Soft AutoCommit: disabled
>>    [junit4]   2> 886617 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasu.RandomMergePolicy.<init> RandomMergePolicy wrapping class 
>> org.apache.lucene.index.AlcoholicMergePolicy: [AlcoholicMergePolicy: 
>> minMergeSize=0, mergeFactor=10, maxMergeSize=23824830, 
>> maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=true, 
>> maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, 
>> noCFSRatio=0.1]
>>    [junit4]   2> 886618 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasc.SolrDeletionPolicy.onInit SolrDeletionPolicy.onInit: commits: num=1
>>    [junit4]   2>                
>> commit{dir=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  
>> C3A1DDED6178C6E2-001/shard-1-001/cores/collection1/data/index,segFN=segments_1,generation=1}
>>    [junit4]   2> 886618 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasc.SolrDeletionPolicy.updateCommits newest commit generation = 1
>>    [junit4]   2> 886618 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oass.SolrIndexSearcher.<init> Opening Searcher@138e2ed[collection1] main
>>    [junit4]   2> 886618 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasu.UpdateLog.onFirstSearcher On first searcher opened, looking up max 
>> value of version field
>>    [junit4]   2> 886619 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasu.VersionInfo.getMaxVersionFromIndex Refreshing highest value of 
>> _version_ for 256 version buckets from index
>>    [junit4]   2> 886619 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasu.VersionInfo.getMaxVersionFromIndex WARN No terms found for _version_, 
>> cannot seed version bucket highest value from index
>>    [junit4]   2> 886619 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasu.UpdateLog.seedBucketsWithHighestVersion WARN Could not find max version 
>> in index or recent updates, using new clock 1501773282683125760
>>    [junit4]   2> 886619 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasu.UpdateLog.seedBucketsWithHighestVersion Took 0 ms to seed version 
>> buckets with highest version 1501773282683125760
>>    [junit4]   2> 886619 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oascc.ZkStateReader.readConfigName Load collection config 
>> from:/collections/collection1
>>    [junit4]   2> 886620 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oascc.ZkStateReader.readConfigName path=/collections/collection1 
>> configName=conf1 specified config exists in ZooKeeper
>>    [junit4]   2> 886620 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasr.ManagedResourceStorage.newStorageIO Setting up ZooKeeper-based storage 
>> for the RestManager with znodeBase: /configs/conf1
>>    [junit4]   2> 886620 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasr.ManagedResourceStorage$ZooKeeperStorageIO.configure Configured 
>> ZooKeeperStorageIO with znodeBase: /configs/conf1
>>    [junit4]   2> 886620 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasr.RestManager.init Initializing RestManager with initArgs: {}
>>    [junit4]   2> 886621 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasr.ManagedResourceStorage.load Reading _rest_managed.json using 
>> ZooKeeperStorageIO:path=/configs/conf1
>>    [junit4]   2> 886621 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasr.ManagedResourceStorage$ZooKeeperStorageIO.openInputStream No data found 
>> for znode /configs/conf1/_rest_managed.json
>>    [junit4]   2> 886621 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasr.ManagedResourceStorage.load Loaded null at path _rest_managed.json 
>> using ZooKeeperStorageIO:path=/configs/conf1
>>    [junit4]   2> 886622 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasr.RestManager.init Initializing 0 registered ManagedResources
>>    [junit4]   2> 886622 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oash.ReplicationHandler.inform Commits will be reserved for  10000
>>    [junit4]   2> 886622 T5882 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasc.SolrCore.registerSearcher [collection1] Registered new searcher 
>> Searcher@138e2ed[collection1] 
>> main{ExitableDirectoryReader(UninvertingDirectoryReader())}
>>    [junit4]   2> 886623 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasc.ZkController.getConfDirListeners watch zkdir /configs/conf1
>>    [junit4]   2> 886623 T5881 n:127.0.0.1:34877_mv%2Fls x:collection1 
>> oasc.CoreContainer.registerCore registering core: collection1
>>    [junit4]   2> 886623 T5885 n:127.0.0.1:34877_mv%2Fls c:collection1 
>> s:shard2 x:collection1 oasc.ZkController.register Register replica - 
>> core:collection1 address:http://127.0.0.1:34877/mv/ls collection:collection1 
>> shard:shard2
>>    [junit4]   2> 886624 T5824 n:127.0.0.1:34877_mv%2Fls 
>> oass.SolrDispatchFilter.init 
>> user.dir=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1
>>    [junit4]   2> 886624 T5824 n:127.0.0.1:34877_mv%2Fls 
>> oass.SolrDispatchFilter.init SolrDispatchFilter.init() done
>>    [junit4]   2> 886624 T5885 n:127.0.0.1:34877_mv%2Fls c:collection1 
>> s:shard2 x:collection1 oascc.SolrZkClient.makePath makePath: 
>> /collections/collection1/leader_elect/shard2/election
>>    [junit4]   2> 886626 T5885 n:127.0.0.1:34877_mv%2Fls c:collection1 
>> s:shard2 x:collection1 oasc.ShardLeaderElectionContext.runLeaderProcess 
>> Running the leader process for shard shard2
>>    [junit4]   2> 886627 T5852 n:127.0.0.1:36633_mv%2Fls 
>> oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path 
>> /overseer/queue state SyncConnected
>>    [junit4]   2> 886627 T5885 n:127.0.0.1:34877_mv%2Fls c:collection1 
>> s:shard2 x:collection1 
>> oasc.ShardLeaderElectionContext.waitForReplicasToComeUp Enough replicas 
>> found to continue.
>>    [junit4]   2> 886628 T5885 n:127.0.0.1:34877_mv%2Fls c:collection1 
>> s:shard2 x:collection1 oasc.ShardLeaderElectionContext.runLeaderProcess I 
>> may be the new leader - try and sync
>>    [junit4]   2> ASYNC  NEW_CORE C1661 name=collection1 
>> org.apache.solr.core.SolrCore@136cc04 
>> url=http://127.0.0.1:34877/mv/ls/collection1 node=127.0.0.1:34877_mv%2Fls 
>> C1661_STATE=coll:collection1 core:collection1 props:{core=collection1, 
>> base_url=http://127.0.0.1:34877/mv/ls, node_name=127.0.0.1:34877_mv%2Fls, 
>> state=down}
>>    [junit4]   2> 886628 T5885 n:127.0.0.1:34877_mv%2Fls c:collection1 
>> s:shard2 x:collection1 C1661 oasc.SyncStrategy.sync Sync replicas to 
>> http://127.0.0.1:34877/mv/ls/collection1/
>>    [junit4]   2> 886628 T5853 n:127.0.0.1:36633_mv%2Fls 
>> oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message 
>> = {
>>    [junit4]   2>          "operation":"leader",
>>    [junit4]   2>          "shard":"shard2",
>>    [junit4]   2>          "collection":"collection1"} current state version: 
>> 4
>>    [junit4]   2> 886628 T5885 n:127.0.0.1:34877_mv%2Fls c:collection1 
>> s:shard2 x:collection1 C1661 oasc.SyncStrategy.syncReplicas Sync Success - 
>> now sync replicas to me
>>    [junit4]   2> 886628 T5885 n:127.0.0.1:34877_mv%2Fls c:collection1 
>> s:shard2 x:collection1 C1661 oasc.SyncStrategy.syncToMe 
>> http://127.0.0.1:34877/mv/ls/collection1/ has no replicas
>>    [junit4]   2> 886628 T5885 n:127.0.0.1:34877_mv%2Fls c:collection1 
>> s:shard2 x:collection1 oasc.ShardLeaderElectionContext.runLeaderProcess I am 
>> the new leader: http://127.0.0.1:34877/mv/ls/collection1/ shard2
>>    [junit4]   2> 886629 T5885 n:127.0.0.1:34877_mv%2Fls c:collection1 
>> s:shard2 x:collection1 oascc.SolrZkClient.makePath makePath: 
>> /collections/collection1/leaders/shard2
>>    [junit4]   2> 886643 T5852 n:127.0.0.1:36633_mv%2Fls 
>> oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path 
>> /overseer/queue state SyncConnected
>>    [junit4]   2> 886643 T5853 n:127.0.0.1:36633_mv%2Fls 
>> oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message 
>> = {
>>    [junit4]   2>          "operation":"leader",
>>    [junit4]   2>          "shard":"shard2",
>>    [junit4]   2>          "collection":"collection1",
>>    [junit4]   2>          "base_url":"http://127.0.0.1:34877/mv/ls",
>>    [junit4]   2>          "core":"collection1",
>>    [junit4]   2>          "state":"active"} current state version: 4
>>    [junit4]   2> 886715 T5824 oas.SolrTestCaseJ4.writeCoreProperties Writing 
>> core.properties file to 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-2-001/cores/collection1
>>    [junit4]   2> 886716 T5824 
>> oasc.AbstractFullDistribZkTestBase.createJettys create jetty 2 in directory 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-2-001
>>    [junit4]   2> 886717 T5824 oejs.Server.doStart jetty-9.2.10.v20150310
>>    [junit4]   2> 886719 T5824 oejsh.ContextHandler.doStart Started 
>> o.e.j.s.ServletContextHandler@c4d2f9{/mv/ls,null,AVAILABLE}
>>    [junit4]   2> 886719 T5824 oejs.AbstractConnector.doStart Started 
>> ServerConnector@1fba99a{HTTP/1.1}{127.0.0.1:45238}
>>    [junit4]   2> 886720 T5824 oejs.Server.doStart Started @887678ms
>>    [junit4]   2> 886720 T5824 oascse.JettySolrRunner$1.lifeCycleStarted 
>> Jetty properties: 
>> {solr.data.dir=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/tempDir-001/jetty2, solrconfig=solrconfig.xml, 
>> hostContext=/mv/ls, hostPort=45238, 
>> coreRootDirectory=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-2-001/cores}
>>    [junit4]   2> 886721 T5824 oass.SolrDispatchFilter.init 
>> SolrDispatchFilter.init()sun.misc.Launcher$AppClassLoader@e2f2a
>>    [junit4]   2> 886721 T5824 oasc.SolrResourceLoader.<init> new 
>> SolrResourceLoader for directory: 
>> '/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-2-001/'
>>    [junit4]   2> 886735 T5824 oasc.SolrXmlConfig.fromFile Loading container 
>> configuration from 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-2-001/solr.xml
>>    [junit4]   2> 886740 T5824 oasc.CorePropertiesLocator.<init> 
>> Config-defined core root directory: 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-2-001/cores
>>    [junit4]   2> 886741 T5824 oasc.CoreContainer.<init> New CoreContainer 
>> 15064929
>>    [junit4]   2> 886741 T5824 oasc.CoreContainer.load Loading cores into 
>> CoreContainer 
>> [instanceDir=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-2-001/]
>>    [junit4]   2> 886741 T5824 oasc.CoreContainer.load loading shared 
>> library: 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-2-001/lib
>>    [junit4]   2> 886742 T5824 oasc.SolrResourceLoader.addToClassLoader WARN 
>> Can't find (or read) directory to add to classloader: lib (resolved as: 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-2-001/lib).
>>    [junit4]   2> 886745 T5863 oascc.ZkStateReader$2.process A cluster state 
>> change: WatchedEvent state:SyncConnected type:NodeDataChanged 
>> path:/clusterstate.json, has occurred - updating... (live nodes size: 2)
>>    [junit4]   2> 886745 T5880 n:127.0.0.1:34877_mv%2Fls 
>> oascc.ZkStateReader$2.process A cluster state change: WatchedEvent 
>> state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has 
>> occurred - updating... (live nodes size: 2)
>>    [junit4]   2> 886745 T5852 n:127.0.0.1:36633_mv%2Fls 
>> oascc.ZkStateReader$2.process A cluster state change: WatchedEvent 
>> state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has 
>> occurred - updating... (live nodes size: 2)
>>    [junit4]   2> 886748 T5824 oashc.HttpShardHandlerFactory.init created 
>> with socketTimeout : 90000,urlScheme : ,connTimeout : 
>> 15000,maxConnectionsPerHost : 20,maxConnections : 10000,corePoolSize : 
>> 0,maximumPoolSize : 2147483647,maxThreadIdleTime : 5,sizeOfQueue : 
>> -1,fairnessPolicy : false,useRetries : false,
>>    [junit4]   2> 886750 T5824 oasu.UpdateShardHandler.<init> Creating 
>> UpdateShardHandler HTTP client with params: 
>> socketTimeout=340000&connTimeout=45000&retry=true
>>    [junit4]   2> 886751 T5824 oasl.LogWatcher.createWatcher SLF4J impl is 
>> org.slf4j.impl.Log4jLoggerFactory
>>    [junit4]   2> 886751 T5824 oasl.LogWatcher.newRegisteredLogWatcher 
>> Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
>>    [junit4]   2> 886751 T5824 oasc.CoreContainer.load Node Name: 127.0.0.1
>>    [junit4]   2> 886752 T5824 oasc.ZkContainer.initZooKeeper Zookeeper 
>> client=127.0.0.1:53715/solr
>>    [junit4]   2> 886752 T5824 oasc.ZkController.checkChrootPath zkHost 
>> includes chroot
>>    [junit4]   2> 886753 T5824 
>> oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
>> ZkCredentialsProvider
>>    [junit4]   2> 886753 T5824 oascc.ConnectionManager.waitForConnected 
>> Waiting for client to connect to ZooKeeper
>>    [junit4]   2> 886754 T5899 oascc.ConnectionManager.process Watcher 
>> org.apache.solr.common.cloud.ConnectionManager@f76e8 
>> name:ZooKeeperConnection Watcher:127.0.0.1:53715 got event WatchedEvent 
>> state:SyncConnected type:None path:null path:null type:None
>>    [junit4]   2> 886754 T5824 oascc.ConnectionManager.waitForConnected 
>> Client is connected to ZooKeeper
>>    [junit4]   2> 886755 T5824 oascc.SolrZkClient.createZkACLProvider Using 
>> default ZkACLProvider
>>    [junit4]   2> 886756 T5824 n:127.0.0.1:45238_mv%2Fls 
>> oascc.ConnectionManager.waitForConnected Waiting for client to connect to 
>> ZooKeeper
>>    [junit4]   2> 886757 T5902 n:127.0.0.1:45238_mv%2Fls 
>> oascc.ConnectionManager.process Watcher 
>> org.apache.solr.common.cloud.ConnectionManager@5eaeea 
>> name:ZooKeeperConnection Watcher:127.0.0.1:53715/solr got event WatchedEvent 
>> state:SyncConnected type:None path:null path:null type:None
>>    [junit4]   2> 886757 T5824 n:127.0.0.1:45238_mv%2Fls 
>> oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper
>>    [junit4]   2> 886762 T5824 n:127.0.0.1:45238_mv%2Fls 
>> oascc.ZkStateReader.createClusterStateWatchersAndUpdate Updating cluster 
>> state from ZooKeeper...
>>    [junit4]   2> 886793 T5885 n:127.0.0.1:34877_mv%2Fls c:collection1 
>> s:shard2 x:collection1 oasc.ZkController.register We are 
>> http://127.0.0.1:34877/mv/ls/collection1/ and leader is 
>> http://127.0.0.1:34877/mv/ls/collection1/
>>    [junit4]   2> 886794 T5885 n:127.0.0.1:34877_mv%2Fls c:collection1 
>> s:shard2 x:collection1 oasc.ZkController.register No LogReplay needed for 
>> core=collection1 baseURL=http://127.0.0.1:34877/mv/ls
>>    [junit4]   2> 886794 T5885 n:127.0.0.1:34877_mv%2Fls c:collection1 
>> s:shard2 x:collection1 oasc.ZkController.checkRecovery I am the leader, no 
>> recovery necessary
>>    [junit4]   2> 886794 T5885 n:127.0.0.1:34877_mv%2Fls c:collection1 
>> s:shard2 x:collection1 oasc.ZkController.publish publishing core=collection1 
>> state=active collection=collection1
>>    [junit4]   2> 886794 T5885 n:127.0.0.1:34877_mv%2Fls c:collection1 
>> s:shard2 x:collection1 oasc.ZkController.publish numShards not found on 
>> descriptor - reading it from system property
>>    [junit4]   2> 886795 T5852 n:127.0.0.1:36633_mv%2Fls 
>> oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path 
>> /overseer/queue state SyncConnected
>>    [junit4]   2> 886795 T5853 n:127.0.0.1:36633_mv%2Fls 
>> oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message 
>> = {
>>    [junit4]   2>          "core":"collection1",
>>    [junit4]   2>          "core_node_name":"core_node1",
>>    [junit4]   2>          "roles":null,
>>    [junit4]   2>          "base_url":"http://127.0.0.1:34877/mv/ls",
>>    [junit4]   2>          "node_name":"127.0.0.1:34877_mv%2Fls",
>>    [junit4]   2>          "numShards":"2",
>>    [junit4]   2>          "state":"active",
>>    [junit4]   2>          "shard":"shard2",
>>    [junit4]   2>          "collection":"collection1",
>>    [junit4]   2>          "operation":"state"} current state version: 5
>>    [junit4]   2> 886796 T5853 n:127.0.0.1:36633_mv%2Fls 
>> oasco.ReplicaMutator.updateState Update state numShards=2 message={
>>    [junit4]   2>          "core":"collection1",
>>    [junit4]   2>          "core_node_name":"core_node1",
>>    [junit4]   2>          "roles":null,
>>    [junit4]   2>          "base_url":"http://127.0.0.1:34877/mv/ls",
>>    [junit4]   2>          "node_name":"127.0.0.1:34877_mv%2Fls",
>>    [junit4]   2>          "numShards":"2",
>>    [junit4]   2>          "state":"active",
>>    [junit4]   2>          "shard":"shard2",
>>    [junit4]   2>          "collection":"collection1",
>>    [junit4]   2>          "operation":"state"}
>>    [junit4]   2> 886897 T5863 oascc.ZkStateReader$2.process A cluster state 
>> change: WatchedEvent state:SyncConnected type:NodeDataChanged 
>> path:/clusterstate.json, has occurred - updating... (live nodes size: 2)
>>    [junit4]   2> 886897 T5852 n:127.0.0.1:36633_mv%2Fls 
>> oascc.ZkStateReader$2.process A cluster state change: WatchedEvent 
>> state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has 
>> occurred - updating... (live nodes size: 2)
>>    [junit4]   2> 886897 T5902 n:127.0.0.1:45238_mv%2Fls 
>> oascc.ZkStateReader$2.process A cluster state change: WatchedEvent 
>> state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has 
>> occurred - updating... (live nodes size: 2)
>>    [junit4]   2> 886897 T5880 n:127.0.0.1:34877_mv%2Fls 
>> oascc.ZkStateReader$2.process A cluster state change: WatchedEvent 
>> state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has 
>> occurred - updating... (live nodes size: 2)
>>    [junit4]   2> 887764 T5824 n:127.0.0.1:45238_mv%2Fls 
>> oasc.ZkController.createEphemeralLiveNode Register node as live in 
>> ZooKeeper:/live_nodes/127.0.0.1:45238_mv%2Fls
>>    [junit4]   2> 887765 T5824 n:127.0.0.1:45238_mv%2Fls 
>> oascc.SolrZkClient.makePath makePath: /live_nodes/127.0.0.1:45238_mv%2Fls
>>    [junit4]   2> 887767 T5824 n:127.0.0.1:45238_mv%2Fls oasc.Overseer.close 
>> Overseer (id=null) closing
>>    [junit4]   2> 887768 T5824 n:127.0.0.1:45238_mv%2Fls 
>> oasc.CoreContainer.initializeAuthenticationPlugin No authentication plugin 
>> used.
>>    [junit4]   2> 887768 T5824 n:127.0.0.1:45238_mv%2Fls 
>> oasc.CoreContainer.intializeAuthorizationPlugin Security conf doesn't exist. 
>> Skipping setup for authorization module.
>>    [junit4]   2> 887769 T5824 n:127.0.0.1:45238_mv%2Fls 
>> oasc.CorePropertiesLocator.discover Looking for core definitions underneath 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-2-001/cores
>>    [junit4]   2> 887770 T5824 n:127.0.0.1:45238_mv%2Fls 
>> oasc.CoreDescriptor.<init> CORE DESCRIPTOR: {name=collection1, 
>> config=solrconfig.xml, transient=false, schema=schema.xml, 
>> loadOnStartup=true, 
>> instanceDir=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-2-001/cores/collection1, collection=collection1, 
>> absoluteInstDir=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-2-001/cores/collection1/, coreNodeName=, 
>> dataDir=data/, shard=}
>>    [junit4]   2> 887770 T5824 n:127.0.0.1:45238_mv%2Fls 
>> oasc.CorePropertiesLocator.discoverUnder Found core collection1 in 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-2-001/cores/collection1/
>>    [junit4]   2> 887771 T5824 n:127.0.0.1:45238_mv%2Fls 
>> oasc.CorePropertiesLocator.discover Found 1 core definitions
>>    [junit4]   2> 887771 T5903 n:127.0.0.1:45238_mv%2Fls c:collection1 
>> x:collection1 oasc.ZkController.publish publishing core=collection1 
>> state=down collection=collection1
>>    [junit4]   2> 887772 T5903 n:127.0.0.1:45238_mv%2Fls c:collection1 
>> x:collection1 oasc.ZkController.publish numShards not found on descriptor - 
>> reading it from system property
>>    [junit4]   2> 887772 T5903 n:127.0.0.1:45238_mv%2Fls 
>> oasc.ZkController.waitForCoreNodeName look for our core node name
>>    [junit4]   2> 887772 T5852 n:127.0.0.1:36633_mv%2Fls 
>> oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path 
>> /overseer/queue state SyncConnected
>>    [junit4]   2> 887773 T5853 n:127.0.0.1:36633_mv%2Fls 
>> oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message 
>> = {
>>    [junit4]   2>          "core":"collection1",
>>    [junit4]   2>          "roles":null,
>>    [junit4]   2>          "base_url":"http://127.0.0.1:45238/mv/ls",
>>    [junit4]   2>          "node_name":"127.0.0.1:45238_mv%2Fls",
>>    [junit4]   2>          "numShards":"2",
>>    [junit4]   2>          "state":"down",
>>    [junit4]   2>          "shard":null,
>>    [junit4]   2>          "collection":"collection1",
>>    [junit4]   2>          "operation":"state"} current state version: 6
>>    [junit4]   2> 887773 T5853 n:127.0.0.1:36633_mv%2Fls 
>> oasco.ReplicaMutator.updateState Update state numShards=2 message={
>>    [junit4]   2>          "core":"collection1",
>>    [junit4]   2>          "roles":null,
>>    [junit4]   2>          "base_url":"http://127.0.0.1:45238/mv/ls",
>>    [junit4]   2>          "node_name":"127.0.0.1:45238_mv%2Fls",
>>    [junit4]   2>          "numShards":"2",
>>    [junit4]   2>          "state":"down",
>>    [junit4]   2>          "shard":null,
>>    [junit4]   2>          "collection":"collection1",
>>    [junit4]   2>          "operation":"state"}
>>    [junit4]   2> 887773 T5853 n:127.0.0.1:36633_mv%2Fls 
>> oasco.ReplicaMutator.updateState Collection already exists with numShards=2
>>    [junit4]   2> 887773 T5853 n:127.0.0.1:36633_mv%2Fls 
>> oasco.ReplicaMutator.updateState Assigning new node to shard shard=shard1
>>    [junit4]   2> 887875 T5902 n:127.0.0.1:45238_mv%2Fls 
>> oascc.ZkStateReader$2.process A cluster state change: WatchedEvent 
>> state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has 
>> occurred - updating... (live nodes size: 3)
>>    [junit4]   2> 887875 T5852 n:127.0.0.1:36633_mv%2Fls 
>> oascc.ZkStateReader$2.process A cluster state change: WatchedEvent 
>> state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has 
>> occurred - updating... (live nodes size: 3)
>>    [junit4]   2> 887875 T5880 n:127.0.0.1:34877_mv%2Fls 
>> oascc.ZkStateReader$2.process A cluster state change: WatchedEvent 
>> state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has 
>> occurred - updating... (live nodes size: 3)
>>    [junit4]   2> 887875 T5863 oascc.ZkStateReader$2.process A cluster state 
>> change: WatchedEvent state:SyncConnected type:NodeDataChanged 
>> path:/clusterstate.json, has occurred - updating... (live nodes size: 3)
>>    [junit4]   2> 888772 T5903 n:127.0.0.1:45238_mv%2Fls 
>> oasc.ZkController.waitForShardId waiting to find shard id in clusterstate 
>> for collection1
>>    [junit4]   2> 888773 T5903 n:127.0.0.1:45238_mv%2Fls 
>> oasc.ZkController.createCollectionZkNode Check for collection 
>> zkNode:collection1
>>    [junit4]   2> 888774 T5903 n:127.0.0.1:45238_mv%2Fls 
>> oasc.ZkController.createCollectionZkNode Collection zkNode exists
>>    [junit4]   2> 888774 T5903 n:127.0.0.1:45238_mv%2Fls 
>> oascc.ZkStateReader.readConfigName Load collection config 
>> from:/collections/collection1
>>    [junit4]   2> 888774 T5903 n:127.0.0.1:45238_mv%2Fls 
>> oascc.ZkStateReader.readConfigName path=/collections/collection1 
>> configName=conf1 specified config exists in ZooKeeper
>>    [junit4]   2> 888774 T5903 n:127.0.0.1:45238_mv%2Fls 
>> oasc.SolrResourceLoader.<init> new SolrResourceLoader for directory: 
>> '/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-2-001/cores/collection1/'
>>    [junit4]   2> 888787 T5903 n:127.0.0.1:45238_mv%2Fls oasc.Config.<init> 
>> loaded config solrconfig.xml with version 0
>>    [junit4]   2> 888792 T5903 n:127.0.0.1:45238_mv%2Fls 
>> oasc.SolrConfig.refreshRequestParams current version of requestparams : -1
>>    [junit4]   2> 888797 T5903 n:127.0.0.1:45238_mv%2Fls 
>> oasc.SolrConfig.<init> Using Lucene MatchVersion: 6.0.0
>>    [junit4]   2> 888806 T5903 n:127.0.0.1:45238_mv%2Fls 
>> oasc.SolrConfig.<init> Loaded SolrConfig: solrconfig.xml
>>    [junit4]   2> 888807 T5903 n:127.0.0.1:45238_mv%2Fls 
>> oass.IndexSchema.readSchema Reading Solr Schema from 
>> /configs/conf1/schema.xml
>>    [junit4]   2> 888812 T5903 n:127.0.0.1:45238_mv%2Fls 
>> oass.IndexSchema.readSchema [collection1] Schema name=test
>>    [junit4]   2> 888917 T5903 n:127.0.0.1:45238_mv%2Fls 
>> oass.IndexSchema.readSchema default search field in schema is text
>>    [junit4]   2> 888919 T5903 n:127.0.0.1:45238_mv%2Fls 
>> oass.IndexSchema.readSchema unique key field: id
>>    [junit4]   2> 888919 T5903 n:127.0.0.1:45238_mv%2Fls 
>> oass.FileExchangeRateProvider.reload Reloading exchange rates from file 
>> currency.xml
>>    [junit4]
>>
>> [...truncated too long message...]
>>
>> asu.DefaultSolrCoreState.closeIndexWriter closing IndexWriter with 
>> IndexWriterCloser
>>    [junit4]   2> 932016 T5824 c:control_collection s:shard1 x:collection1 
>> oasc.SolrCore.closeSearcher [collection1] Closing main searcher on request.
>>    [junit4]   2> 932029 T5824 c:control_collection s:shard1 x:collection1 
>> oasc.CachingDirectoryFactory.close Closing StandardDirectoryFactory - 2 
>> directories currently being tracked
>>    [junit4]   2> 932030 T5824 c:control_collection s:shard1 x:collection1 
>> oasc.CachingDirectoryFactory.closeCacheValue looking to close 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-4-001/cores/collection1/data/index 
>> [CachedDir<<refCount=0;path=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-4-001/cores/collection1/data/index;done=false>>]
>>    [junit4]   2> 932030 T5824 c:control_collection s:shard1 x:collection1 
>> oasc.CachingDirectoryFactory.close Closing directory: 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-4-001/cores/collection1/data/index
>>    [junit4]   2> 932030 T5824 c:control_collection s:shard1 x:collection1 
>> oasc.CachingDirectoryFactory.closeCacheValue looking to close 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-4-001/cores/collection1/data 
>> [CachedDir<<refCount=0;path=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-4-001/cores/collection1/data;done=false>>]
>>    [junit4]   2> 932031 T5824 c:control_collection s:shard1 x:collection1 
>> oasc.CachingDirectoryFactory.close Closing directory: 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001/shard-4-001/cores/collection1/data
>>    [junit4]   2> 932031 T5824 c:control_collection s:shard1 x:collection1 
>> oasc.Overseer.close Overseer 
>> (id=93860829923311628-127.0.0.1:54025_mv%2Fls-n_0000000004) closing
>>    [junit4]   2> 932032 T6028 n:127.0.0.1:54025_mv%2Fls 
>> oasc.Overseer$ClusterStateUpdater.run Overseer Loop exiting : 
>> 127.0.0.1:54025_mv%2Fls
>>    [junit4]   2> 932041 T5977 n:127.0.0.1:54025_mv%2Fls 
>> oascc.ZkStateReader$3.process WARN ZooKeeper watch triggered, but Solr 
>> cannot talk to ZK
>>    [junit4]   2> 932042 T5824 oejs.AbstractConnector.doStop Stopped 
>> ServerConnector@1ef74a2{HTTP/1.1}{127.0.0.1:0}
>>    [junit4]   2> 932043 T5824 oejsh.ContextHandler.doStop Stopped 
>> o.e.j.s.ServletContextHandler@108dcee{/mv/ls,null,UNAVAILABLE}
>>    [junit4]   2> 932044 T5824 c:control_collection s:shard1 x:collection1 
>> oasc.ZkTestServer.send4LetterWord connecting to 127.0.0.1:53715 53715
>>    [junit4]   2> 932046 T5995 oasc.ZkTestServer.send4LetterWord connecting 
>> to 127.0.0.1:53715 53715
>>    [junit4]   2> 932046 T5995 oasc.ZkTestServer$ZKServerMain.runFromConfig 
>> WARN Watch limit violations:
>>    [junit4]   2>        Maximum concurrent children watches above limit:
>>    [junit4]   2>
>>    [junit4]   2>                2       /solr/overseer/collection-queue-work
>>    [junit4]   2>
>>    [junit4]   2> NOTE: reproduce with: ant test  
>> -Dtestcase=ChaosMonkeyNothingIsSafeTest -Dtests.method=test 
>> -Dtests.seed=C3A1DDED6178C6E2 -Dtests.multiplier=3 -Dtests.slow=true 
>> -Dtests.locale=no_NO -Dtests.timezone=Canada/Central -Dtests.asserts=true 
>> -Dtests.file.encoding=US-ASCII
>>    [junit4] FAILURE 49.2s J1 | ChaosMonkeyNothingIsSafeTest.test <<<
>>    [junit4]    > Throwable #1: java.lang.AssertionError: document count 
>> mismatch.  control=358 sum(shards)=357 cloudClient=357
>>    [junit4]    >        at 
>> __randomizedtesting.SeedInfo.seed([C3A1DDED6178C6E2:4BF5E237CF84AB1A]:0)
>>    [junit4]    >        at 
>> org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1345)
>>    [junit4]    >        at 
>> org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:240)
>>    [junit4]    >        at 
>> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
>>    [junit4]    >        at 
>> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
>>    [junit4]    >        at java.lang.Thread.run(Thread.java:745)
>>    [junit4]   2> 932051 T5824 c:control_collection s:shard1 x:collection1 
>> oas.SolrTestCaseJ4.deleteCore ###deleteCore
>>    [junit4]   2> NOTE: leaving temporary files on disk at: 
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
>>  C3A1DDED6178C6E2-001
>>    [junit4]   2> 49167 T5823 ccr.ThreadLeakControl.checkThreadLeaks WARNING 
>> Will linger awaiting termination of 2 leaked thread(s).
>>    [junit4]   2> NOTE: test params are: codec=Asserting(Lucene50): 
>> {rnd_b=Lucene50(blocksize=128), _version_=PostingsFormat(name=Memory 
>> doPackFST= true), a_t=PostingsFormat(name=Memory doPackFST= true), 
>> a_i=Lucene50(blocksize=128), id=Lucene50(blocksize=128)}, docValues:{}, 
>> sim=DefaultSimilarity, locale=no_NO, timezone=Canada/Central
>>    [junit4]   2> NOTE: Linux 3.13.0-53-generic i386/Oracle Corporation 
>> 1.8.0_60-ea (32-bit)/cpus=12,threads=1,free=125935544,total=269508608
>>    [junit4]   2> NOTE: All tests run in this JVM: [ChangedSchemaMergeTest, 
>> SolrPluginUtilsTest, ZkControllerTest, LukeRequestHandlerTest, 
>> TestRebalanceLeaders, OverseerRolesTest, IndexSchemaRuntimeFieldTest, 
>> RequestLoggingTest, TestCollationFieldDocValues, 
>> CachingDirectoryFactoryTest, TestRecoveryHdfs, TestComponentsName, 
>> HdfsNNFailoverTest, JSONWriterTest, ScriptEngineTest, 
>> SpellPossibilityIteratorTest, RuleEngineTest, TestCharFilters, 
>> TestFuzzyAnalyzedSuggestions, AddSchemaFieldsUpdateProcessorFactoryTest, 
>> SliceStateTest, VersionInfoTest, TestRawResponseWriter, TestMacros, 
>> TestFoldingMultitermQuery, TestLRUCache, TestWriterPerf, TestOmitPositions, 
>> DistributedMLTComponentTest, SpellingQueryConverterTest, TestSearchPerf, 
>> HdfsChaosMonkeySafeLeaderTest, ChaosMonkeySafeLeaderTest, 
>> ResourceLoaderTest, TestCursorMarkWithoutUniqueKey, TestHashPartitioner, 
>> TestDistributedMissingSort, TestComplexPhraseQParserPlugin, 
>> RollingRestartTest, ConvertedLegacyTest, TestRandomMergePolicy, 
>> SpellCheckComponentTest, TestSolrDeletionPolicy2, 
>> TestLMDirichletSimilarityFactory, TestSolrQueryParserResource, 
>> TestDefaultStatsCache, TestBlobHandler, HttpPartitionTest, 
>> TestSolrCoreProperties, TestDistributedGrouping, ShowFileRequestHandlerTest, 
>> DocumentBuilderTest, TestManagedSchemaDynamicFieldResource, 
>> TestFieldCollectionResource, TestExceedMaxTermLength, 
>> BasicFunctionalityTest, TestSchemaManager, TestAtomicUpdateErrorCases, 
>> TestSerializedLuceneMatchVersion, TestElisionMultitermQuery, ZkCLITest, 
>> DistributedTermsComponentTest, CloudExitableDirectoryReaderTest, 
>> AnalysisAfterCoreReloadTest, TestIndexSearcher, CoreAdminHandlerTest, 
>> ClusterStateUpdateTest, TestEmbeddedSolrServerConstructors, 
>> TestPivotHelperCode, TestFastLRUCache, TestReplicaProperties, 
>> ShardRoutingCustomTest, InfoHandlerTest, CollectionsAPIDistributedZkTest, 
>> DistributedDebugComponentTest, FieldAnalysisRequestHandlerTest, 
>> BlockDirectoryTest, TestSolrConfigHandlerConcurrent, TestRTGBase, 
>> HdfsCollectionsAPIDistributedZkTest, OpenCloseCoreStressTest, 
>> ShardRoutingTest, FullSolrCloudDistribCmdsTest, TestZkChroot, 
>> TestRandomDVFaceting, TestFaceting, TestRecovery, TestJoin, 
>> TestCoreContainer, TestSolr4Spatial, SolrCmdDistributorTest, TestFiltering, 
>> CurrencyFieldOpenExchangeTest, SolrIndexSplitterTest, SimplePostToolTest, 
>> TestCoreDiscovery, SignatureUpdateProcessorFactoryTest, 
>> TestExtendedDismaxParser, SuggesterFSTTest, SolrRequestParserTest, 
>> SpatialFilterTest, NoCacheHeaderTest, WordBreakSolrSpellCheckerTest, 
>> TestPseudoReturnFields, TestUpdate, FieldMutatingUpdateProcessorTest, 
>> DirectUpdateHandlerOptimizeTest, DefaultValueUpdateProcessorTest, 
>> SortByFunctionTest, DistanceFunctionTest, XsltUpdateRequestHandlerTest, 
>> IndexBasedSpellCheckerTest, TestQueryTypes, RequestHandlersTest, 
>> RequiredFieldsTest, QueryParsingTest, BinaryUpdateRequestHandlerTest, 
>> SearchHandlerTest, TestQuerySenderListener, TestSolrIndexConfig, 
>> CopyFieldTest, BadComponentTest, TestBinaryField, TestConfig, 
>> OutputWriterTest, DirectSolrConnectionTest, TestCodecSupport, 
>> SynonymTokenizerTest, TestSweetSpotSimilarityFactory, 
>> TestLMJelinekMercerSimilarityFactory, TestFastWriter, 
>> OpenExchangeRatesOrgProviderTest, ChaosMonkeyNothingIsSafeTest]
>>    [junit4] Completed [376/494] on J1 in 50.19s, 1 test, 1 failure <<< 
>> FAILURES!
>>
>> [...truncated 377 lines...]
>> BUILD FAILED
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:526: The following 
>> error occurred while executing this line:
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:474: The following 
>> error occurred while executing this line:
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:61: The following 
>> error occurred while executing this line:
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/extra-targets.xml:39: The 
>> following error occurred while executing this line:
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build.xml:229: The 
>> following error occurred while executing this line:
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/common-build.xml:512: 
>> The following error occurred while executing this line:
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/common-build.xml:1415:
>>  The following error occurred while executing this line:
>> /home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/common-build.xml:973: 
>> There were test failures: 494 suites, 1967 tests, 1 failure, 57 ignored (25 
>> assumptions)
>>
>> Total time: 42 minutes 41 seconds
>> Build step 'Invoke Ant' marked build as failure
>> Archiving artifacts
>> Recording test results
>> Email was triggered for: Failure - Any
>> Sending email for trigger: Failure - Any
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
