[ https://issues.apache.org/jira/browse/MESOS-4756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Artem Harutyunyan updated MESOS-4756:
-------------------------------------
    Sprint: Mesosphere Sprint 30

> DockerContainerizerTest.ROOT_DOCKER_DockerInspectDiscard is flaky on CentOS 6
> -----------------------------------------------------------------------------
>
>                 Key: MESOS-4756
>                 URL: https://issues.apache.org/jira/browse/MESOS-4756
>             Project: Mesos
>          Issue Type: Bug
>          Components: tests
>    Affects Versions: 0.27
>         Environment: CentOS 6 (AWS) + GCC 4.9
>            Reporter: Joseph Wu
>            Assignee: Jan Schlicht
>              Labels: mesosphere, tests
>
> {code}
> [ RUN      ] DockerContainerizerTest.ROOT_DOCKER_DockerInspectDiscard
> I0224 17:50:26.577450 17755 leveldb.cpp:174] Opened db in 6.715352ms
> I0224 17:50:26.579607 17755 leveldb.cpp:181] Compacted db in 2.128954ms
> I0224 17:50:26.579648 17755 leveldb.cpp:196] Created db iterator in 16927ns
> I0224 17:50:26.579661 17755 leveldb.cpp:202] Seeked to beginning of db in 
> 1408ns
> I0224 17:50:26.579669 17755 leveldb.cpp:271] Iterated through 0 keys in the 
> db in 343ns
> I0224 17:50:26.579721 17755 replica.cpp:779] Replica recovered with log 
> positions 0 -> 0 with 1 holes and 0 unlearned
> I0224 17:50:26.580185 17776 recover.cpp:447] Starting replica recovery
> I0224 17:50:26.580382 17776 recover.cpp:473] Replica is in EMPTY status
> I0224 17:50:26.581264 17770 replica.cpp:673] Replica in EMPTY status received 
> a broadcasted recover request from (14098)@172.30.2.121:33050
> I0224 17:50:26.581771 17772 recover.cpp:193] Received a recover response from 
> a replica in EMPTY status
> I0224 17:50:26.582188 17771 recover.cpp:564] Updating replica status to 
> STARTING
> I0224 17:50:26.583030 17772 master.cpp:376] Master 
> 00a3ac12-9e76-48f5-92fa-48770b82035d (ip-172-30-2-121.mesosphere.io) started 
> on 172.30.2.121:33050
> I0224 17:50:26.583051 17772 master.cpp:378] Flags at startup: --acls="" 
> --allocation_interval="1secs" --allocator="HierarchicalDRF" 
> --authenticate="true" --authenticate_http="true" --authenticate_slaves="true" 
> --authenticators="crammd5" --authorizers="local" 
> --credentials="/tmp/jSZ9of/credentials" --framework_sorter="drf" 
> --help="false" --hostname_lookup="true" --http_authenticators="basic" 
> --initialize_driver_logging="true" --log_auto_initialize="true" 
> --logbufsecs="0" --logging_level="INFO" --max_completed_frameworks="50" 
> --max_completed_tasks_per_framework="1000" --max_slave_ping_timeouts="5" 
> --quiet="false" --recovery_slave_removal_limit="100%" 
> --registry="replicated_log" --registry_fetch_timeout="1mins" 
> --registry_store_timeout="100secs" --registry_strict="true" 
> --root_submissions="true" --slave_ping_timeout="15secs" 
> --slave_reregister_timeout="10mins" --user_sorter="drf" --version="false" 
> --webui_dir="/usr/local/share/mesos/webui" --work_dir="/tmp/jSZ9of/master" 
> --zk_session_timeout="10secs"
> I0224 17:50:26.583328 17772 master.cpp:423] Master only allowing 
> authenticated frameworks to register
> I0224 17:50:26.583336 17772 master.cpp:428] Master only allowing 
> authenticated slaves to register
> I0224 17:50:26.583343 17772 credentials.hpp:35] Loading credentials for 
> authentication from '/tmp/jSZ9of/credentials'
> I0224 17:50:26.583901 17772 master.cpp:468] Using default 'crammd5' 
> authenticator
> I0224 17:50:26.584022 17772 master.cpp:537] Using default 'basic' HTTP 
> authenticator
> I0224 17:50:26.584141 17772 master.cpp:571] Authorization enabled
> I0224 17:50:26.584234 17770 leveldb.cpp:304] Persisting metadata (8 bytes) to 
> leveldb took 1.955608ms
> I0224 17:50:26.584264 17770 replica.cpp:320] Persisted replica status to 
> STARTING
> I0224 17:50:26.584285 17771 hierarchical.cpp:144] Initialized hierarchical 
> allocator process
> I0224 17:50:26.584295 17773 whitelist_watcher.cpp:77] No whitelist given
> I0224 17:50:26.584463 17775 recover.cpp:473] Replica is in STARTING status
> I0224 17:50:26.585260 17771 replica.cpp:673] Replica in STARTING status 
> received a broadcasted recover request from (14100)@172.30.2.121:33050
> I0224 17:50:26.585553 17777 recover.cpp:193] Received a recover response from 
> a replica in STARTING status
> I0224 17:50:26.586042 17773 recover.cpp:564] Updating replica status to VOTING
> I0224 17:50:26.586091 17770 master.cpp:1712] The newly elected leader is 
> [email protected]:33050 with id 00a3ac12-9e76-48f5-92fa-48770b82035d
> I0224 17:50:26.586122 17770 master.cpp:1725] Elected as the leading master!
> I0224 17:50:26.586146 17770 master.cpp:1470] Recovering from registrar
> I0224 17:50:26.586294 17773 registrar.cpp:307] Recovering registrar
> I0224 17:50:26.588148 17776 leveldb.cpp:304] Persisting metadata (8 bytes) to 
> leveldb took 1.89126ms
> I0224 17:50:26.588171 17776 replica.cpp:320] Persisted replica status to 
> VOTING
> I0224 17:50:26.588260 17772 recover.cpp:578] Successfully joined the Paxos 
> group
> I0224 17:50:26.588440 17772 recover.cpp:462] Recover process terminated
> I0224 17:50:26.588770 17773 log.cpp:659] Attempting to start the writer
> I0224 17:50:26.589782 17770 replica.cpp:493] Replica received implicit 
> promise request from (14101)@172.30.2.121:33050 with proposal 1
> I0224 17:50:26.591498 17770 leveldb.cpp:304] Persisting metadata (8 bytes) to 
> leveldb took 1.690976ms
> I0224 17:50:26.591519 17770 replica.cpp:342] Persisted promised to 1
> I0224 17:50:26.592025 17773 coordinator.cpp:238] Coordinator attempting to 
> fill missing positions
> I0224 17:50:26.593091 17776 replica.cpp:388] Replica received explicit 
> promise request from (14102)@172.30.2.121:33050 for position 0 with proposal 2
> I0224 17:50:26.594964 17776 leveldb.cpp:341] Persisting action (8 bytes) to 
> leveldb took 1.842643ms
> I0224 17:50:26.594988 17776 replica.cpp:712] Persisted action at 0
> I0224 17:50:26.595870 17775 replica.cpp:537] Replica received write request 
> for position 0 from (14103)@172.30.2.121:33050
> I0224 17:50:26.595914 17775 leveldb.cpp:436] Reading position from leveldb 
> took 21421ns
> I0224 17:50:26.597745 17775 leveldb.cpp:341] Persisting action (14 bytes) to 
> leveldb took 1.799795ms
> I0224 17:50:26.597767 17775 replica.cpp:712] Persisted action at 0
> I0224 17:50:26.598217 17772 replica.cpp:691] Replica received learned notice 
> for position 0 from @0.0.0.0:0
> I0224 17:50:26.600085 17772 leveldb.cpp:341] Persisting action (16 bytes) to 
> leveldb took 1.835672ms
> I0224 17:50:26.600106 17772 replica.cpp:712] Persisted action at 0
> I0224 17:50:26.600121 17772 replica.cpp:697] Replica learned NOP action at 
> position 0
> I0224 17:50:26.600606 17773 log.cpp:675] Writer started with ending position 0
> I0224 17:50:26.601505 17774 leveldb.cpp:436] Reading position from leveldb 
> took 25926ns
> I0224 17:50:26.602330 17774 registrar.cpp:340] Successfully fetched the 
> registry (0B) in 15.977984ms
> I0224 17:50:26.602429 17774 registrar.cpp:439] Applied 1 operations in 
> 22557ns; attempting to update the 'registry'
> I0224 17:50:26.602996 17772 log.cpp:683] Attempting to append 210 bytes to 
> the log
> I0224 17:50:26.603103 17777 coordinator.cpp:348] Coordinator attempting to 
> write APPEND action at position 1
> I0224 17:50:26.603703 17771 replica.cpp:537] Replica received write request 
> for position 1 from (14104)@172.30.2.121:33050
> I0224 17:50:26.605535 17771 leveldb.cpp:341] Persisting action (229 bytes) to 
> leveldb took 1.798462ms
> I0224 17:50:26.605557 17771 replica.cpp:712] Persisted action at 1
> I0224 17:50:26.606109 17770 replica.cpp:691] Replica received learned notice 
> for position 1 from @0.0.0.0:0
> I0224 17:50:26.607885 17770 leveldb.cpp:341] Persisting action (231 bytes) to 
> leveldb took 1.746759ms
> I0224 17:50:26.607908 17770 replica.cpp:712] Persisted action at 1
> I0224 17:50:26.607921 17770 replica.cpp:697] Replica learned APPEND action at 
> position 1
> I0224 17:50:26.608687 17772 registrar.cpp:484] Successfully updated the 
> 'registry' in 6.201856ms
> I0224 17:50:26.608826 17772 registrar.cpp:370] Successfully recovered 
> registrar
> I0224 17:50:26.608857 17775 log.cpp:702] Attempting to truncate the log to 1
> I0224 17:50:26.608968 17773 coordinator.cpp:348] Coordinator attempting to 
> write TRUNCATE action at position 2
> I0224 17:50:26.609362 17776 master.cpp:1522] Recovered 0 slaves from the 
> Registry (171B) ; allowing 10mins for slaves to re-register
> I0224 17:50:26.609390 17773 hierarchical.cpp:171] Skipping recovery of 
> hierarchical allocator: nothing to recover
> I0224 17:50:26.609741 17776 replica.cpp:537] Replica received write request 
> for position 2 from (14105)@172.30.2.121:33050
> I0224 17:50:26.611588 17776 leveldb.cpp:341] Persisting action (16 bytes) to 
> leveldb took 1.819377ms
> I0224 17:50:26.611610 17776 replica.cpp:712] Persisted action at 2
> I0224 17:50:26.612218 17777 replica.cpp:691] Replica received learned notice 
> for position 2 from @0.0.0.0:0
> I0224 17:50:26.614109 17777 leveldb.cpp:341] Persisting action (18 bytes) to 
> leveldb took 1.866511ms
> I0224 17:50:26.614161 17777 leveldb.cpp:399] Deleting ~1 keys from leveldb 
> took 28883ns
> I0224 17:50:26.614176 17777 replica.cpp:712] Persisted action at 2
> I0224 17:50:26.614190 17777 replica.cpp:697] Replica learned TRUNCATE action 
> at position 2
> I0224 17:50:26.623827 17770 slave.cpp:193] Slave started on 
> 414)@172.30.2.121:33050
> I0224 17:50:26.623848 17770 slave.cpp:194] Flags at startup: 
> --appc_simple_discovery_uri_prefix="http://" 
> --appc_store_dir="/tmp/mesos/store/appc" --authenticatee="crammd5" 
> --cgroups_cpu_enable_pids_and_tids_count="false" --cgroups_enable_cfs="false" 
> --cgroups_hierarchy="/sys/fs/cgroup" --cgroups_limit_swap="false" 
> --cgroups_root="mesos" --container_disk_watch_interval="15secs" 
> --containerizers="mesos" 
> --credential="/tmp/DockerContainerizerTest_ROOT_DOCKER_DockerInspectDiscard_nZvyhv/credential"
>  --default_role="*" --disk_watch_interval="1mins" --docker="docker" 
> --docker_kill_orphans="true" --docker_registry="https://registry-1.docker.io" 
> --docker_remove_delay="6hrs" --docker_socket="/var/run/docker.sock" 
> --docker_stop_timeout="0ns" --docker_store_dir="/tmp/mesos/store/docker" 
> --enforce_container_disk_quota="false" 
> --executor_registration_timeout="1mins" 
> --executor_shutdown_grace_period="5secs" 
> --fetcher_cache_dir="/tmp/DockerContainerizerTest_ROOT_DOCKER_DockerInspectDiscard_nZvyhv/fetch"
>  --fetcher_cache_size="2GB" --frameworks_home="" --gc_delay="1weeks" 
> --gc_disk_headroom="0.1" --hadoop_home="" --help="false" 
> --hostname_lookup="true" --image_provisioner_backend="copy" 
> --initialize_driver_logging="true" --isolation="posix/cpu,posix/mem" 
> --launcher_dir="/mnt/teamcity/work/4240ba9ddd0997c3/build/src" 
> --logbufsecs="0" --logging_level="INFO" 
> --oversubscribed_resources_interval="15secs" --perf_duration="10secs" 
> --perf_interval="1mins" --qos_correction_interval_min="0ns" --quiet="false" 
> --recover="reconnect" --recovery_timeout="15mins" 
> --registration_backoff_factor="10ms" 
> --resources="cpus:2;mem:1024;disk:1024;ports:[31000-32000]" 
> --revocable_cpu_low_priority="true" --sandbox_directory="/mnt/mesos/sandbox" 
> --strict="true" --switch_user="true" --systemd_enable_support="true" 
> --systemd_runtime_directory="/run/systemd/system" --version="false" 
> --work_dir="/tmp/DockerContainerizerTest_ROOT_DOCKER_DockerInspectDiscard_nZvyhv"
> I0224 17:50:26.624110 17770 credentials.hpp:83] Loading credential for 
> authentication from 
> '/tmp/DockerContainerizerTest_ROOT_DOCKER_DockerInspectDiscard_nZvyhv/credential'
> I0224 17:50:26.624255 17770 slave.cpp:324] Slave using credential for: 
> test-principal
> I0224 17:50:26.624414 17770 resources.cpp:576] Parsing resources as JSON 
> failed: cpus:2;mem:1024;disk:1024;ports:[31000-32000]
> Trying semicolon-delimited string format instead
> I0224 17:50:26.624729 17770 slave.cpp:464] Slave resources: cpus(*):2; 
> mem(*):1024; disk(*):1024; ports(*):[31000-32000]
> I0224 17:50:26.624783 17770 slave.cpp:472] Slave attributes: [  ]
> I0224 17:50:26.624791 17770 slave.cpp:477] Slave hostname: 
> ip-172-30-2-121.mesosphere.io
> I0224 17:50:26.625807 17775 state.cpp:58] Recovering state from 
> '/tmp/DockerContainerizerTest_ROOT_DOCKER_DockerInspectDiscard_nZvyhv/meta'
> I0224 17:50:26.625898 17755 sched.cpp:222] Version: 0.28.0
> I0224 17:50:26.626061 17770 status_update_manager.cpp:200] Recovering status 
> update manager
> I0224 17:50:26.626246 17773 docker.cpp:726] Recovering Docker containers
> I0224 17:50:26.626441 17774 sched.cpp:326] New master detected at 
> [email protected]:33050
> I0224 17:50:26.626442 17776 slave.cpp:4565] Finished recovery
> I0224 17:50:26.626569 17774 sched.cpp:382] Authenticating with master 
> [email protected]:33050
> I0224 17:50:26.626585 17774 sched.cpp:389] Using default CRAM-MD5 
> authenticatee
> I0224 17:50:26.626734 17776 slave.cpp:4737] Querying resource estimator for 
> oversubscribable resources
> I0224 17:50:26.626807 17777 authenticatee.cpp:121] Creating new client SASL 
> connection
> I0224 17:50:26.626953 17770 status_update_manager.cpp:174] Pausing sending 
> status updates
> I0224 17:50:26.627019 17775 master.cpp:5526] Authenticating 
> [email protected]:33050
> I0224 17:50:26.627018 17772 slave.cpp:796] New master detected at 
> [email protected]:33050
> I0224 17:50:26.627100 17772 slave.cpp:859] Authenticating with master 
> [email protected]:33050
> I0224 17:50:26.627115 17772 slave.cpp:864] Using default CRAM-MD5 
> authenticatee
> I0224 17:50:26.627128 17777 authenticator.cpp:413] Starting authentication 
> session for crammd5_authenticatee(864)@172.30.2.121:33050
> I0224 17:50:26.627259 17775 authenticatee.cpp:121] Creating new client SASL 
> connection
> I0224 17:50:26.627264 17772 slave.cpp:832] Detecting new master
> I0224 17:50:26.627315 17771 authenticator.cpp:98] Creating new server SASL 
> connection
> I0224 17:50:26.627368 17772 slave.cpp:4751] Received oversubscribable 
> resources  from the resource estimator
> I0224 17:50:26.627434 17777 master.cpp:5526] Authenticating 
> slave(414)@172.30.2.121:33050
> I0224 17:50:26.627445 17774 authenticatee.cpp:212] Received SASL 
> authentication mechanisms: CRAM-MD5
> I0224 17:50:26.627465 17774 authenticatee.cpp:238] Attempting to authenticate 
> with mechanism 'CRAM-MD5'
> I0224 17:50:26.627538 17770 authenticator.cpp:413] Starting authentication 
> session for crammd5_authenticatee(865)@172.30.2.121:33050
> I0224 17:50:26.627662 17776 authenticator.cpp:203] Received SASL 
> authentication start
> I0224 17:50:26.627717 17775 authenticator.cpp:98] Creating new server SASL 
> connection
> I0224 17:50:26.627730 17776 authenticator.cpp:325] Authentication requires 
> more steps
> I0224 17:50:26.627851 17774 authenticatee.cpp:212] Received SASL 
> authentication mechanisms: CRAM-MD5
> I0224 17:50:26.627853 17776 authenticatee.cpp:258] Received SASL 
> authentication step
> I0224 17:50:26.627876 17774 authenticatee.cpp:238] Attempting to authenticate 
> with mechanism 'CRAM-MD5'
> I0224 17:50:26.627986 17774 authenticator.cpp:203] Received SASL 
> authentication start
> I0224 17:50:26.628008 17773 authenticator.cpp:231] Received SASL 
> authentication step
> I0224 17:50:26.628033 17774 authenticator.cpp:325] Authentication requires 
> more steps
> I0224 17:50:26.628039 17773 auxprop.cpp:107] Request to lookup properties for 
> user: 'test-principal' realm: 'ip-172-30-2-121' server FQDN: 
> 'ip-172-30-2-121' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false 
> I0224 17:50:26.628052 17773 auxprop.cpp:179] Looking up auxiliary property 
> '*userPassword'
> I0224 17:50:26.628080 17773 auxprop.cpp:179] Looking up auxiliary property 
> '*cmusaslsecretCRAM-MD5'
> I0224 17:50:26.628103 17773 auxprop.cpp:107] Request to lookup properties for 
> user: 'test-principal' realm: 'ip-172-30-2-121' server FQDN: 
> 'ip-172-30-2-121' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true 
> I0224 17:50:26.628118 17773 auxprop.cpp:129] Skipping auxiliary property 
> '*userPassword' since SASL_AUXPROP_AUTHZID == true
> I0224 17:50:26.628129 17773 auxprop.cpp:129] Skipping auxiliary property 
> '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true
> I0224 17:50:26.628128 17774 authenticatee.cpp:258] Received SASL 
> authentication step
> I0224 17:50:26.628144 17773 authenticator.cpp:317] Authentication success
> I0224 17:50:26.628227 17770 authenticatee.cpp:298] Authentication success
> I0224 17:50:26.628288 17774 master.cpp:5556] Successfully authenticated 
> principal 'test-principal' at 
> [email protected]:33050
> I0224 17:50:26.628319 17776 authenticator.cpp:231] Received SASL 
> authentication step
> I0224 17:50:26.628348 17776 auxprop.cpp:107] Request to lookup properties for 
> user: 'test-principal' realm: 'ip-172-30-2-121' server FQDN: 
> 'ip-172-30-2-121' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false 
> I0224 17:50:26.628347 17770 authenticator.cpp:431] Authentication session 
> cleanup for crammd5_authenticatee(864)@172.30.2.121:33050
> I0224 17:50:26.628368 17776 auxprop.cpp:179] Looking up auxiliary property 
> '*userPassword'
> I0224 17:50:26.628545 17775 sched.cpp:471] Successfully authenticated with 
> master [email protected]:33050
> I0224 17:50:26.628561 17775 sched.cpp:776] Sending SUBSCRIBE call to 
> [email protected]:33050
> I0224 17:50:26.628559 17776 auxprop.cpp:179] Looking up auxiliary property 
> '*cmusaslsecretCRAM-MD5'
> I0224 17:50:26.628589 17776 auxprop.cpp:107] Request to lookup properties for 
> user: 'test-principal' realm: 'ip-172-30-2-121' server FQDN: 
> 'ip-172-30-2-121' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true 
> I0224 17:50:26.628598 17776 auxprop.cpp:129] Skipping auxiliary property 
> '*userPassword' since SASL_AUXPROP_AUTHZID == true
> I0224 17:50:26.628603 17776 auxprop.cpp:129] Skipping auxiliary property 
> '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true
> I0224 17:50:26.628614 17776 authenticator.cpp:317] Authentication success
> I0224 17:50:26.628679 17773 authenticatee.cpp:298] Authentication success
> I0224 17:50:26.628698 17775 sched.cpp:809] Will retry registration in 
> 1.192267517secs if necessary
> I0224 17:50:26.628798 17770 master.cpp:2280] Received SUBSCRIBE call for 
> framework 'default' at 
> [email protected]:33050
> I0224 17:50:26.628798 17776 authenticator.cpp:431] Authentication session 
> cleanup for crammd5_authenticatee(865)@172.30.2.121:33050
> I0224 17:50:26.628859 17770 master.cpp:1751] Authorizing framework principal 
> 'test-principal' to receive offers for role '*'
> I0224 17:50:26.629012 17770 master.cpp:5556] Successfully authenticated 
> principal 'test-principal' at slave(414)@172.30.2.121:33050
> I0224 17:50:26.629134 17771 slave.cpp:927] Successfully authenticated with 
> master [email protected]:33050
> I0224 17:50:26.629161 17777 master.cpp:2351] Subscribing framework default 
> with checkpointing disabled and capabilities [  ]
> I0224 17:50:26.629262 17771 slave.cpp:1321] Will retry registration in 
> 15.916328ms if necessary
> I0224 17:50:26.629547 17771 hierarchical.cpp:265] Added framework 
> 00a3ac12-9e76-48f5-92fa-48770b82035d-0000
> I0224 17:50:26.629621 17771 hierarchical.cpp:1434] No resources available to 
> allocate!
> I0224 17:50:26.629642 17775 sched.cpp:703] Framework registered with 
> 00a3ac12-9e76-48f5-92fa-48770b82035d-0000
> I0224 17:50:26.629658 17771 hierarchical.cpp:1529] No inverse offers to send 
> out!
> I0224 17:50:26.629686 17771 hierarchical.cpp:1127] Performed allocation for 0 
> slaves in 115429ns
> I0224 17:50:26.629695 17777 master.cpp:4240] Registering slave at 
> slave(414)@172.30.2.121:33050 (ip-172-30-2-121.mesosphere.io) with id 
> 00a3ac12-9e76-48f5-92fa-48770b82035d-S0
> I0224 17:50:26.629711 17775 sched.cpp:717] Scheduler::registered took 48960ns
> I0224 17:50:26.630197 17770 registrar.cpp:439] Applied 1 operations in 
> 57113ns; attempting to update the 'registry'
> I0224 17:50:26.630928 17772 log.cpp:683] Attempting to append 396 bytes to 
> the log
> I0224 17:50:26.631023 17777 coordinator.cpp:348] Coordinator attempting to 
> write APPEND action at position 3
> I0224 17:50:26.631654 17774 replica.cpp:537] Replica received write request 
> for position 3 from (14112)@172.30.2.121:33050
> I0224 17:50:26.634012 17774 leveldb.cpp:341] Persisting action (415 bytes) to 
> leveldb took 2.329586ms
> I0224 17:50:26.634033 17774 replica.cpp:712] Persisted action at 3
> I0224 17:50:26.634613 17775 replica.cpp:691] Replica received learned notice 
> for position 3 from @0.0.0.0:0
> I0224 17:50:26.636519 17775 leveldb.cpp:341] Persisting action (417 bytes) to 
> leveldb took 1.868667ms
> I0224 17:50:26.636541 17775 replica.cpp:712] Persisted action at 3
> I0224 17:50:26.636555 17775 replica.cpp:697] Replica learned APPEND action at 
> position 3
> I0224 17:50:26.637675 17771 registrar.cpp:484] Successfully updated the 
> 'registry' in 7.406848ms
> I0224 17:50:26.637861 17772 log.cpp:702] Attempting to truncate the log to 3
> I0224 17:50:26.637969 17770 coordinator.cpp:348] Coordinator attempting to 
> write TRUNCATE action at position 4
> I0224 17:50:26.638437 17770 slave.cpp:3482] Received ping from 
> slave-observer(379)@172.30.2.121:33050
> I0224 17:50:26.638533 17775 master.cpp:4308] Registered slave 
> 00a3ac12-9e76-48f5-92fa-48770b82035d-S0 at slave(414)@172.30.2.121:33050 
> (ip-172-30-2-121.mesosphere.io) with cpus(*):2; mem(*):1024; disk(*):1024; 
> ports(*):[31000-32000]
> I0224 17:50:26.638612 17770 slave.cpp:971] Registered with master 
> [email protected]:33050; given slave ID 
> 00a3ac12-9e76-48f5-92fa-48770b82035d-S0
> I0224 17:50:26.638635 17770 fetcher.cpp:81] Clearing fetcher cache
> I0224 17:50:26.638658 17777 replica.cpp:537] Replica received write request 
> for position 4 from (14113)@172.30.2.121:33050
> I0224 17:50:26.638705 17776 hierarchical.cpp:473] Added slave 
> 00a3ac12-9e76-48f5-92fa-48770b82035d-S0 (ip-172-30-2-121.mesosphere.io) with 
> cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: )
> I0224 17:50:26.638756 17772 status_update_manager.cpp:181] Resuming sending 
> status updates
> I0224 17:50:26.638934 17770 slave.cpp:994] Checkpointing SlaveInfo to 
> '/tmp/DockerContainerizerTest_ROOT_DOCKER_DockerInspectDiscard_nZvyhv/meta/slaves/00a3ac12-9e76-48f5-92fa-48770b82035d-S0/slave.info'
> I0224 17:50:26.639209 17770 slave.cpp:1030] Forwarding total oversubscribed 
> resources 
> I0224 17:50:26.639331 17776 hierarchical.cpp:1529] No inverse offers to send 
> out!
> I0224 17:50:26.639363 17776 hierarchical.cpp:1147] Performed allocation for 
> slave 00a3ac12-9e76-48f5-92fa-48770b82035d-S0 in 621054ns
> I0224 17:50:26.639367 17770 master.cpp:4649] Received update of slave 
> 00a3ac12-9e76-48f5-92fa-48770b82035d-S0 at slave(414)@172.30.2.121:33050 
> (ip-172-30-2-121.mesosphere.io) with total oversubscribed resources 
> I0224 17:50:26.639735 17770 master.cpp:5355] Sending 1 offers to framework 
> 00a3ac12-9e76-48f5-92fa-48770b82035d-0000 (default) at 
> [email protected]:33050
> I0224 17:50:26.639755 17771 hierarchical.cpp:531] Slave 
> 00a3ac12-9e76-48f5-92fa-48770b82035d-S0 (ip-172-30-2-121.mesosphere.io) 
> updated with oversubscribed resources  (total: cpus(*):2; mem(*):1024; 
> disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):2; mem(*):1024; 
> disk(*):1024; ports(*):[31000-32000])
> I0224 17:50:26.639873 17771 hierarchical.cpp:1434] No resources available to 
> allocate!
> I0224 17:50:26.639907 17771 hierarchical.cpp:1529] No inverse offers to send 
> out!
> I0224 17:50:26.639925 17771 hierarchical.cpp:1147] Performed allocation for 
> slave 00a3ac12-9e76-48f5-92fa-48770b82035d-S0 in 139782ns
> I0224 17:50:26.640146 17775 sched.cpp:873] Scheduler::resourceOffers took 
> 95935ns
> I0224 17:50:26.640867 17777 leveldb.cpp:341] Persisting action (16 bytes) to 
> leveldb took 2.180791ms
> I0224 17:50:26.640898 17777 replica.cpp:712] Persisted action at 4
> I0224 17:50:26.641491 17770 replica.cpp:691] Replica received learned notice 
> for position 4 from @0.0.0.0:0
> I0224 17:50:26.641758 17772 master.cpp:3138] Processing ACCEPT call for 
> offers: [ 00a3ac12-9e76-48f5-92fa-48770b82035d-O0 ] on slave 
> 00a3ac12-9e76-48f5-92fa-48770b82035d-S0 at slave(414)@172.30.2.121:33050 
> (ip-172-30-2-121.mesosphere.io) for framework 
> 00a3ac12-9e76-48f5-92fa-48770b82035d-0000 (default) at 
> [email protected]:33050
> I0224 17:50:26.641793 17772 master.cpp:2825] Authorizing framework principal 
> 'test-principal' to launch task 1 as user 'root'
> W0224 17:50:26.643143 17773 validation.cpp:404] Executor e1 for task 1 uses 
> less CPUs (None) than the minimum required (0.01). Please update your 
> executor, as this will be mandatory in future releases.
> W0224 17:50:26.643172 17773 validation.cpp:416] Executor e1 for task 1 uses 
> less memory (None) than the minimum required (32MB). Please update your 
> executor, as this will be mandatory in future releases.
> I0224 17:50:26.643299 17770 leveldb.cpp:341] Persisting action (18 bytes) to 
> leveldb took 1.777824ms
> I0224 17:50:26.643353 17770 leveldb.cpp:399] Deleting ~2 keys from leveldb 
> took 31165ns
> I0224 17:50:26.643368 17770 replica.cpp:712] Persisted action at 4
> I0224 17:50:26.643383 17770 replica.cpp:697] Replica learned TRUNCATE action 
> at position 4
> I0224 17:50:26.643484 17773 master.hpp:176] Adding task 1 with resources 
> cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 
> 00a3ac12-9e76-48f5-92fa-48770b82035d-S0 (ip-172-30-2-121.mesosphere.io)
> I0224 17:50:26.643645 17773 master.cpp:3623] Launching task 1 of framework 
> 00a3ac12-9e76-48f5-92fa-48770b82035d-0000 (default) at 
> [email protected]:33050 with 
> resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on 
> slave 00a3ac12-9e76-48f5-92fa-48770b82035d-S0 at 
> slave(414)@172.30.2.121:33050 (ip-172-30-2-121.mesosphere.io)
> I0224 17:50:26.643995 17772 slave.cpp:1361] Got assigned task 1 for framework 
> 00a3ac12-9e76-48f5-92fa-48770b82035d-0000
> I0224 17:50:26.644445 17772 slave.cpp:1480] Launching task 1 for framework 
> 00a3ac12-9e76-48f5-92fa-48770b82035d-0000
> I0224 17:50:26.644870 17772 paths.cpp:474] Trying to chown 
> '/tmp/DockerContainerizerTest_ROOT_DOCKER_DockerInspectDiscard_nZvyhv/slaves/00a3ac12-9e76-48f5-92fa-48770b82035d-S0/frameworks/00a3ac12-9e76-48f5-92fa-48770b82035d-0000/executors/e1/runs/fe24672f-728c-4771-9b82-9f9270edfbff'
>  to user 'root'
> I0224 17:50:26.650173 17772 slave.cpp:5367] Launching executor e1 of 
> framework 00a3ac12-9e76-48f5-92fa-48770b82035d-0000 with resources  in work 
> directory 
> '/tmp/DockerContainerizerTest_ROOT_DOCKER_DockerInspectDiscard_nZvyhv/slaves/00a3ac12-9e76-48f5-92fa-48770b82035d-S0/frameworks/00a3ac12-9e76-48f5-92fa-48770b82035d-0000/executors/e1/runs/fe24672f-728c-4771-9b82-9f9270edfbff'
> I0224 17:50:26.650780 17772 slave.cpp:1698] Queuing task '1' for executor 
> 'e1' of framework 00a3ac12-9e76-48f5-92fa-48770b82035d-0000
> I0224 17:50:26.652282 17772 slave.cpp:749] Successfully attached file 
> '/tmp/DockerContainerizerTest_ROOT_DOCKER_DockerInspectDiscard_nZvyhv/slaves/00a3ac12-9e76-48f5-92fa-48770b82035d-S0/frameworks/00a3ac12-9e76-48f5-92fa-48770b82035d-0000/executors/e1/runs/fe24672f-728c-4771-9b82-9f9270edfbff'
> I0224 17:50:26.654928 17776 docker.cpp:1028] Starting container 
> 'fe24672f-728c-4771-9b82-9f9270edfbff' for executor 'e1' and framework 
> '00a3ac12-9e76-48f5-92fa-48770b82035d-0000'
> I0224 17:50:26.655567 17776 docker.cpp:1053] Running docker -H 
> unix:///var/run/docker.sock inspect tnachen/test-executor:latest
> I0224 17:50:26.771446 17770 docker.cpp:1124] Running docker -H 
> unix:///var/run/docker.sock pull tnachen/test-executor:latest
> I0224 17:50:27.585420 17774 hierarchical.cpp:1434] No resources available to 
> allocate!
> I0224 17:50:27.585474 17774 hierarchical.cpp:1529] No inverse offers to send 
> out!
> I0224 17:50:27.585505 17774 hierarchical.cpp:1127] Performed allocation for 1 
> slaves in 272928ns
> I0224 17:50:28.585777 17772 hierarchical.cpp:1434] No resources available to 
> allocate!
> I0224 17:50:28.585814 17772 hierarchical.cpp:1529] No inverse offers to send 
> out!
> I0224 17:50:28.585839 17772 hierarchical.cpp:1127] Performed allocation for 1 
> slaves in 166918ns
> I0224 17:50:29.586848 17777 hierarchical.cpp:1434] No resources available to 
> allocate!
> I0224 17:50:29.586910 17777 hierarchical.cpp:1529] No inverse offers to send 
> out!
> I0224 17:50:29.586940 17777 hierarchical.cpp:1127] Performed allocation for 1 
> slaves in 287445ns
> I0224 17:50:30.587625 17775 hierarchical.cpp:1434] No resources available to 
> allocate!
> I0224 17:50:30.587679 17775 hierarchical.cpp:1529] No inverse offers to send 
> out!
> I0224 17:50:30.587707 17775 hierarchical.cpp:1127] Performed allocation for 1 
> slaves in 207639ns
> I0224 17:50:31.588436 17775 hierarchical.cpp:1434] No resources available to 
> allocate!
> I0224 17:50:31.588474 17775 hierarchical.cpp:1529] No inverse offers to send 
> out!
> I0224 17:50:31.588495 17775 hierarchical.cpp:1127] Performed allocation for 1 
> slaves in 160471ns
> I0224 17:50:32.589864 17770 hierarchical.cpp:1434] No resources available to 
> allocate!
> I0224 17:50:32.589932 17770 hierarchical.cpp:1529] No inverse offers to send 
> out!
> I0224 17:50:32.589963 17770 hierarchical.cpp:1127] Performed allocation for 1 
> slaves in 331646ns
> I0224 17:50:33.591181 17775 hierarchical.cpp:1434] No resources available to 
> allocate!
> I0224 17:50:33.591239 17775 hierarchical.cpp:1529] No inverse offers to send 
> out!
> I0224 17:50:33.591266 17775 hierarchical.cpp:1127] Performed allocation for 1 
> slaves in 199094ns
> I0224 17:50:34.592267 17774 hierarchical.cpp:1434] No resources available to 
> allocate!
> I0224 17:50:34.592463 17774 hierarchical.cpp:1529] No inverse offers to send 
> out!
> I0224 17:50:34.592488 17774 hierarchical.cpp:1127] Performed allocation for 1 
> slaves in 327690ns
> I0224 17:50:35.593335 17771 hierarchical.cpp:1434] No resources available to 
> allocate!
> I0224 17:50:35.593374 17771 hierarchical.cpp:1529] No inverse offers to send 
> out!
> I0224 17:50:35.593397 17771 hierarchical.cpp:1127] Performed allocation for 1 
> slaves in 166519ns
> I0224 17:50:36.594398 17770 hierarchical.cpp:1434] No resources available to 
> allocate!
> I0224 17:50:36.594434 17770 hierarchical.cpp:1529] No inverse offers to send 
> out!
> I0224 17:50:36.594456 17770 hierarchical.cpp:1127] Performed allocation for 1 
> slaves in 154878ns
> I0224 17:50:37.595175 17776 hierarchical.cpp:1434] No resources available to 
> allocate!
> I0224 17:50:37.595211 17776 hierarchical.cpp:1529] No inverse offers to send 
> out!
> I0224 17:50:37.595265 17776 hierarchical.cpp:1127] Performed allocation for 1 
> slaves in 194236ns
> I0224 17:50:38.596570 17775 hierarchical.cpp:1434] No resources available to 
> allocate!
> I0224 17:50:38.596637 17775 hierarchical.cpp:1529] No inverse offers to send 
> out!
> I0224 17:50:38.596668 17775 hierarchical.cpp:1127] Performed allocation for 1 
> slaves in 294733ns
> I0224 17:50:39.596954 17771 hierarchical.cpp:1434] No resources available to 
> allocate!
> I0224 17:50:39.596999 17771 hierarchical.cpp:1529] No inverse offers to send 
> out!
> I0224 17:50:39.597024 17771 hierarchical.cpp:1127] Performed allocation for 1 
> slaves in 180045ns
> I0224 17:50:40.597641 17772 hierarchical.cpp:1434] No resources available to 
> allocate!
> I0224 17:50:40.597705 17772 hierarchical.cpp:1529] No inverse offers to send 
> out!
> I0224 17:50:40.597735 17772 hierarchical.cpp:1127] Performed allocation for 1 
> slaves in 284738ns
> I0224 17:50:41.598578 17774 hierarchical.cpp:1434] No resources available to 
> allocate!
> I0224 17:50:41.598615 17774 hierarchical.cpp:1529] No inverse offers to send 
> out!
> I0224 17:50:41.598639 17774 hierarchical.cpp:1127] Performed allocation for 1 
> slaves in 165034ns
> I0224 17:50:41.627979 17775 slave.cpp:4737] Querying resource estimator for 
> oversubscribable resources
> I0224 17:50:41.628226 17771 slave.cpp:4751] Received oversubscribable 
> resources  from the resource estimator
> I0224 17:50:41.639770 17773 slave.cpp:3482] Received ping from 
> slave-observer(379)@172.30.2.121:33050
> ../../src/tests/containerizer/docker_containerizer_tests.cpp:3589: Failure
> Failed to wait 15secs for statusFailed
> ../../src/tests/containerizer/docker_containerizer_tests.cpp:3576: Failure
> I0224 17:50:41.653586 17770 master.cpp:1213] Framework 
> 00a3ac12-9e76-48f5-92fa-48770b82035d-0000 (default) at 
> [email protected]:33050 disconnected
> Actual function call count doesn't match EXPECT_CALL(sched, 
> statusUpdate(&driver, _))...
>          Expected: to be called once
> I0224 17:50:41.653626 17770 master.cpp:2576] Disconnecting framework 
> 00a3ac12-9e76-48f5-92fa-48770b82035d-0000 (default) at 
> [email protected]:33050
>            Actual: never called - unsatisfied and active
> I0224 17:50:41.653662 17770 master.cpp:2600] Deactivating framework 
> 00a3ac12-9e76-48f5-92fa-48770b82035d-0000 (default) at 
> [email protected]:33050
> I0224 17:50:41.653772 17770 master.cpp:1237] Giving framework 
> 00a3ac12-9e76-48f5-92fa-48770b82035d-0000 (default) at 
> [email protected]:33050 0ns to 
> failover
> I0224 17:50:41.653803 17776 hierarchical.cpp:375] Deactivated framework 
> 00a3ac12-9e76-48f5-92fa-48770b82035d-0000
> ../../src/tests/containerizer/docker_containerizer_tests.cpp:3515: Failure
> Actual function call count doesn't match EXPECT_CALL(*mockDocker, inspect(_, 
> _))...
>          Expected: to be called once
>            Actual: never called - unsatisfied and active
> ../../src/tests/containerizer/docker_containerizer_tests.cpp:3520: Failure
> I0224 17:50:41.655174 17755 docker.cpp:885] Running docker -H 
> unix:///var/run/docker.sock ps -a
> Actual function call count doesn't match EXPECT_CALL(*mockDocker, run(_, _, 
> _, _, _, _, _, _, _))...
>          Expected: to be called once
> I0224 17:50:41.655172 17772 master.cpp:5207] Framework failover timeout, 
> removing framework 00a3ac12-9e76-48f5-92fa-48770b82035d-0000 (default) at 
> [email protected]:33050
>            Actual: never called - unsatisfied and active
> I0224 17:50:41.655208 17772 master.cpp:5938] Removing framework 
> 00a3ac12-9e76-48f5-92fa-48770b82035d-0000 (default) at 
> [email protected]:33050
> I0224 17:50:41.655330 17774 slave.cpp:2079] Asked to shut down framework 
> 00a3ac12-9e76-48f5-92fa-48770b82035d-0000 by [email protected]:33050
> I0224 17:50:41.657733 17774 slave.cpp:2104] Shutting down framework 
> 00a3ac12-9e76-48f5-92fa-48770b82035d-0000
> I0224 17:50:41.657977 17774 slave.cpp:4198] Shutting down executor 'e1' of 
> framework 00a3ac12-9e76-48f5-92fa-48770b82035d-0000
> W0224 17:50:41.658028 17774 slave.hpp:665] Unable to send event to executor 
> 'e1' of framework 00a3ac12-9e76-48f5-92fa-48770b82035d-0000: unknown 
> connection type
> I0224 17:50:41.658149 17772 master.cpp:6450] Updating the state of task 1 of 
> framework 00a3ac12-9e76-48f5-92fa-48770b82035d-0000 (latest state: 
> TASK_KILLED, status update state: TASK_KILLED)
> I0224 17:50:41.659183 17771 hierarchical.cpp:892] Recovered cpus(*):2; 
> mem(*):1024; disk(*):1024; ports(*):[31000-32000] (total: cpus(*):2; 
> mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: ) on slave 
> 00a3ac12-9e76-48f5-92fa-48770b82035d-S0 from framework 
> 00a3ac12-9e76-48f5-92fa-48770b82035d-0000
> I0224 17:50:41.659298 17772 master.cpp:6516] Removing task 1 with resources 
> cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] of framework 
> 00a3ac12-9e76-48f5-92fa-48770b82035d-0000 on slave 
> 00a3ac12-9e76-48f5-92fa-48770b82035d-S0 at slave(414)@172.30.2.121:33050 
> (ip-172-30-2-121.mesosphere.io)
> I0224 17:50:41.659651 17772 master.cpp:6545] Removing executor 'e1' with 
> resources  of framework 00a3ac12-9e76-48f5-92fa-48770b82035d-0000 on slave 
> 00a3ac12-9e76-48f5-92fa-48770b82035d-S0 at slave(414)@172.30.2.121:33050 
> (ip-172-30-2-121.mesosphere.io)
> I0224 17:50:41.660135 17775 hierarchical.cpp:326] Removed framework 
> 00a3ac12-9e76-48f5-92fa-48770b82035d-0000
> F0224 17:50:41.660135 17755 logging.cpp:64] RAW: Pure virtual method called
>     @     0x7fa35bb62318  google::LogMessage::Fail()
>     @     0x7fa35bb678ae  google::RawLog__()
>     @     0x7fa35acc04ed  __cxa_pure_virtual
>     @           0xa91676  mesos::internal::tests::Cluster::Slaves::shutdown()
>     @           0xa914e2  mesos::internal::tests::Cluster::Slaves::~Slaves()
>     @           0x9df8a2  mesos::internal::tests::Cluster::~Cluster()
>     @           0x9df925  mesos::internal::tests::MesosTest::~MesosTest()
>     @          0x15552f1  
> mesos::internal::tests::DockerContainerizerTest::~DockerContainerizerTest()
>     @          0x155fa83  
> mesos::internal::tests::DockerContainerizerTest_ROOT_DOCKER_DockerInspectDiscard_Test::~DockerContainerizerTest_ROOT_DOCKER_DockerInspectDiscard_Test()
>     @          0x155fab2  
> mesos::internal::tests::DockerContainerizerTest_ROOT_DOCKER_DockerInspectDiscard_Test::~DockerContainerizerTest_ROOT_DOCKER_DockerInspectDiscard_Test()
>     @          0x16eb6d6  testing::Test::DeleteSelf_()
>     @          0x16f5686  
> testing::internal::HandleSehExceptionsInMethodIfSupported<>()
>     @          0x16f06ea  
> testing::internal::HandleExceptionsInMethodIfSupported<>()
>     @          0x16d2210  testing::TestInfo::Run()
>     @          0x16d27fe  testing::TestCase::Run()
>     @          0x16d914d  testing::internal::UnitTestImpl::RunAllTests()
>     @          0x16f6315  
> testing::internal::HandleSehExceptionsInMethodIfSupported<>()
>     @          0x16f1240  
> testing::internal::HandleExceptionsInMethodIfSupported<>()
>     @          0x16d7e7d  testing::UnitTest::Run()
>     @           0xe3a04e  RUN_ALL_TESTS()
>     @           0xe39c64  main
>     @     0x7fa3562bed5d  __libc_start_main
>     @           0x9bf259  (unknown)
> {code}
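
For anyone skimming the log above: the "Expected: to be called once / Actual: never called - unsatisfied and active" lines are standard Google Mock diagnostics for an EXPECT_CALL that was never satisfied before the mock was destroyed, and "Failed to wait 15secs for statusFailed" appears to be the bounded wait on the future that the first of those expectations was supposed to fulfill. A minimal, self-contained sketch of that general pattern follows; the MockScheduler class, the int-typed statusUpdate signature, and the test name are illustrative stand-ins, not the actual code in docker_containerizer_tests.cpp.

{code}
// Sketch of the "expectation fulfills a future, test waits on it with a
// timeout" pattern whose failure output appears in the log above.
// Link against gmock_main (or provide your own main calling RUN_ALL_TESTS).
#include <chrono>
#include <future>

#include <gmock/gmock.h>
#include <gtest/gtest.h>

class MockScheduler
{
public:
  MOCK_METHOD(void, statusUpdate, (int status));
};

TEST(ExpectationTimeoutExample, UnsatisfiedExpectation)
{
  MockScheduler sched;

  std::promise<int> received;
  std::future<int> status = received.get_future();

  // Expect exactly one status update; its action fulfills the promise.
  EXPECT_CALL(sched, statusUpdate(::testing::_))
    .WillOnce([&](int s) { received.set_value(s); });

  // The code under test would normally end up calling sched.statusUpdate().
  // If it never does, this bounded wait fails (compare "Failed to wait
  // 15secs for statusFailed"), and when `sched` is destroyed Google Mock
  // additionally reports the expectation above as
  // "never called - unsatisfied and active".
  ASSERT_TRUE(status.wait_for(std::chrono::seconds(15)) ==
              std::future_status::ready)
      << "Failed to wait 15secs for the status update";
}
{code}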



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
