[ https://issues.apache.org/jira/browse/MESOS-7217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16706047#comment-16706047 ]

Till Toenshoff commented on MESOS-7217:
---------------------------------------

And again on CentOS 7:
{noformat}
17:12:36  Expected: (0.30) >= (cpuTime), actual: 0.3 vs 0.31
{noformat}

> CgroupsIsolatorTest.ROOT_CGROUPS_CFS_EnableCfs is flaky.
> --------------------------------------------------------
>
>                 Key: MESOS-7217
>                 URL: https://issues.apache.org/jira/browse/MESOS-7217
>             Project: Mesos
>          Issue Type: Bug
>    Affects Versions: 1.1.1, 1.8.0
>         Environment: ubuntu-14.04, centos-7
>            Reporter: Till Toenshoff
>            Priority: Major
>              Labels: containerizer, flaky, flaky-test, mesosphere, test
>
> The test CgroupsIsolatorTest.ROOT_CGROUPS_CFS_EnableCfs appears to be flaky 
> on Ubuntu 14.04.
> When failing, the test shows the following:
> {noformat}
> 14:05:48  [ RUN      ] CgroupsIsolatorTest.ROOT_CGROUPS_CFS_EnableCfs
> 14:05:48  I0306 14:05:48.704794 27340 cluster.cpp:158] Creating default 
> 'local' authorizer
> 14:05:48  I0306 14:05:48.716588 27340 leveldb.cpp:174] Opened db in 
> 11.681905ms
> 14:05:48  I0306 14:05:48.718921 27340 leveldb.cpp:181] Compacted db in 
> 2.309404ms
> 14:05:48  I0306 14:05:48.718945 27340 leveldb.cpp:196] Created db iterator in 
> 3075ns
> 14:05:48  I0306 14:05:48.718951 27340 leveldb.cpp:202] Seeked to beginning of 
> db in 558ns
> 14:05:48  I0306 14:05:48.718955 27340 leveldb.cpp:271] Iterated through 0 
> keys in the db in 257ns
> 14:05:48  I0306 14:05:48.718966 27340 replica.cpp:776] Replica recovered with 
> log positions 0 -> 0 with 1 holes and 0 unlearned
> 14:05:48  I0306 14:05:48.719113 27361 recover.cpp:451] Starting replica 
> recovery
> 14:05:48  I0306 14:05:48.719172 27361 recover.cpp:477] Replica is in EMPTY 
> status
> 14:05:48  I0306 14:05:48.719460 27361 replica.cpp:673] Replica in EMPTY 
> status received a broadcasted recover request from 
> __req_res__(6807)@10.179.217.143:53643
> 14:05:48  I0306 14:05:48.719537 27363 recover.cpp:197] Received a recover 
> response from a replica in EMPTY status
> 14:05:48  I0306 14:05:48.719625 27365 recover.cpp:568] Updating replica 
> status to STARTING
> 14:05:48  I0306 14:05:48.720384 27361 master.cpp:380] Master 
> cb9586dc-a080-41eb-b5b8-88274f84a20a (ip-10-179-217-143.ec2.internal) started 
> on 10.179.217.143:53643
> 14:05:48  I0306 14:05:48.720404 27361 master.cpp:382] Flags at startup: 
> --acls="" --agent_ping_timeout="15secs" --agent_reregister_timeout="10mins" 
> --allocation_interval="1secs" --allocator="HierarchicalDRF" 
> --authenticate_agents="true" --authenticate_frameworks="true" 
> --authenticate_http_frameworks="true" --authenticate_http_readonly="true" 
> --authenticate_http_readwrite="true" --authenticators="crammd5" 
> --authorizers="local" --credentials="/tmp/tzyTvK/credentials" 
> --framework_sorter="drf" --help="false" --hostname_lookup="true" 
> --http_authenticators="basic" --http_framework_authenticators="basic" 
> --initialize_driver_logging="true" --log_auto_initialize="true" 
> --logbufsecs="0" --logging_level="INFO" --max_agent_ping_timeouts="5" 
> --max_completed_frameworks="50" --max_completed_tasks_per_framework="1000" 
> --quiet="false" --recovery_agent_removal_limit="100%" 
> --registry="replicated_log" --registry_fetch_timeout="1mins" 
> --registry_gc_interval="15mins" --registry_max_agent_age="2weeks" 
> --registry_max_agent_count="102400" --registry_store_timeout="100secs" 
> --registry_strict="false" --root_submissions="true" --user_sorter="drf" 
> --version="false" --webui_dir="/usr/local/share/mesos/webui" 
> --work_dir="/tmp/tzyTvK/master" --zk_session_timeout="10secs"
> 14:05:48  I0306 14:05:48.720553 27361 master.cpp:432] Master only allowing 
> authenticated frameworks to register
> 14:05:48  I0306 14:05:48.720559 27361 master.cpp:446] Master only allowing 
> authenticated agents to register
> 14:05:48  I0306 14:05:48.720562 27361 master.cpp:459] Master only allowing 
> authenticated HTTP frameworks to register
> 14:05:48  I0306 14:05:48.720566 27361 credentials.hpp:37] Loading credentials 
> for authentication from '/tmp/tzyTvK/credentials'
> 14:05:48  I0306 14:05:48.720655 27361 master.cpp:504] Using default 'crammd5' 
> authenticator
> 14:05:48  I0306 14:05:48.720700 27361 http.cpp:887] Using default 'basic' 
> HTTP authenticator for realm 'mesos-master-readonly'
> 14:05:48  I0306 14:05:48.720767 27361 http.cpp:887] Using default 'basic' 
> HTTP authenticator for realm 'mesos-master-readwrite'
> 14:05:48  I0306 14:05:48.720808 27361 http.cpp:887] Using default 'basic' 
> HTTP authenticator for realm 'mesos-master-scheduler'
> 14:05:48  I0306 14:05:48.720875 27361 master.cpp:584] Authorization enabled
> 14:05:48  I0306 14:05:48.720995 27360 whitelist_watcher.cpp:77] No whitelist 
> given
> 14:05:48  I0306 14:05:48.721005 27364 hierarchical.cpp:149] Initialized 
> hierarchical allocator process
> 14:05:48  I0306 14:05:48.721505 27361 master.cpp:2017] Elected as the leading 
> master!
> 14:05:48  I0306 14:05:48.721521 27361 master.cpp:1560] Recovering from 
> registrar
> 14:05:48  I0306 14:05:48.721565 27362 registrar.cpp:329] Recovering registrar
> 14:05:48  I0306 14:05:48.722306 27365 leveldb.cpp:304] Persisting metadata (8 
> bytes) to leveldb took 2.591886ms
> 14:05:48  I0306 14:05:48.722326 27365 replica.cpp:320] Persisted replica 
> status to STARTING
> 14:05:48  I0306 14:05:48.722370 27365 recover.cpp:477] Replica is in STARTING 
> status
> 14:05:48  I0306 14:05:48.722689 27365 replica.cpp:673] Replica in STARTING 
> status received a broadcasted recover request from 
> __req_res__(6808)@10.179.217.143:53643
> 14:05:48  I0306 14:05:48.722760 27363 recover.cpp:197] Received a recover 
> response from a replica in STARTING status
> 14:05:48  I0306 14:05:48.722872 27365 recover.cpp:568] Updating replica 
> status to VOTING
> 14:05:48  I0306 14:05:48.724920 27364 leveldb.cpp:304] Persisting metadata (8 
> bytes) to leveldb took 1.914679ms
> 14:05:48  I0306 14:05:48.724942 27364 replica.cpp:320] Persisted replica 
> status to VOTING
> 14:05:48  I0306 14:05:48.724972 27364 recover.cpp:582] Successfully joined 
> the Paxos group
> 14:05:48  I0306 14:05:48.725011 27364 recover.cpp:466] Recover process 
> terminated
> 14:05:48  I0306 14:05:48.725160 27359 log.cpp:553] Attempting to start the 
> writer
> 14:05:48  I0306 14:05:48.725522 27361 replica.cpp:493] Replica received 
> implicit promise request from __req_res__(6809)@10.179.217.143:53643 with 
> proposal 1
> 14:05:48  I0306 14:05:48.727613 27361 leveldb.cpp:304] Persisting metadata (8 
> bytes) to leveldb took 2.069982ms
> 14:05:48  I0306 14:05:48.727633 27361 replica.cpp:342] Persisted promised to 1
> 14:05:48  I0306 14:05:48.727924 27359 coordinator.cpp:238] Coordinator 
> attempting to fill missing positions
> 14:05:48  I0306 14:05:48.728325 27362 replica.cpp:388] Replica received 
> explicit promise request from __req_res__(6810)@10.179.217.143:53643 for 
> position 0 with proposal 2
> 14:05:48  I0306 14:05:48.730612 27362 leveldb.cpp:341] Persisting action (8 
> bytes) to leveldb took 2.262763ms
> 14:05:48  I0306 14:05:48.730634 27362 replica.cpp:708] Persisted action NOP 
> at position 0
> 14:05:48  I0306 14:05:48.731021 27363 replica.cpp:537] Replica received write 
> request for position 0 from __req_res__(6811)@10.179.217.143:53643
> 14:05:48  I0306 14:05:48.731055 27363 leveldb.cpp:436] Reading position from 
> leveldb took 11523ns
> 14:05:48  I0306 14:05:48.731722 27363 leveldb.cpp:341] Persisting action (14 
> bytes) to leveldb took 651237ns
> 14:05:48  I0306 14:05:48.731740 27363 replica.cpp:708] Persisted action NOP 
> at position 0
> 14:05:48  I0306 14:05:48.731984 27363 replica.cpp:691] Replica received 
> learned notice for position 0 from @0.0.0.0:0
> 14:05:48  I0306 14:05:48.732815 27363 leveldb.cpp:341] Persisting action (16 
> bytes) to leveldb took 805568ns
> 14:05:48  I0306 14:05:48.732832 27363 replica.cpp:708] Persisted action NOP 
> at position 0
> 14:05:48  I0306 14:05:48.733022 27361 log.cpp:569] Writer started with ending 
> position 0
> 14:05:48  I0306 14:05:48.733278 27366 leveldb.cpp:436] Reading position from 
> leveldb took 8543ns
> 14:05:48  I0306 14:05:48.733482 27360 registrar.cpp:362] Successfully fetched 
> the registry (0B) in 11.895808ms
> 14:05:48  I0306 14:05:48.733516 27360 registrar.cpp:461] Applied 1 operations 
> in 2454ns; attempting to update the registry
> 14:05:48  I0306 14:05:48.733669 27365 log.cpp:577] Attempting to append 215 
> bytes to the log
> 14:05:48  I0306 14:05:48.733732 27359 coordinator.cpp:348] Coordinator 
> attempting to write APPEND action at position 1
> 14:05:48  I0306 14:05:48.733971 27366 replica.cpp:537] Replica received write 
> request for position 1 from __req_res__(6812)@10.179.217.143:53643
> 14:05:48  I0306 14:05:48.736279 27366 leveldb.cpp:341] Persisting action (234 
> bytes) to leveldb took 2.278147ms
> 14:05:48  I0306 14:05:48.736300 27366 replica.cpp:708] Persisted action 
> APPEND at position 1
> 14:05:48  I0306 14:05:48.736515 27366 replica.cpp:691] Replica received 
> learned notice for position 1 from @0.0.0.0:0
> 14:05:48  I0306 14:05:48.738512 27366 leveldb.cpp:341] Persisting action (236 
> bytes) to leveldb took 1.972601ms
> 14:05:48  I0306 14:05:48.738533 27366 replica.cpp:708] Persisted action 
> APPEND at position 1
> 14:05:48  I0306 14:05:48.738761 27362 registrar.cpp:506] Successfully updated 
> the registry in 5.23008ms
> 14:05:48  I0306 14:05:48.738827 27362 registrar.cpp:392] Successfully 
> recovered registrar
> 14:05:48  I0306 14:05:48.738864 27363 log.cpp:596] Attempting to truncate the 
> log to 1
> 14:05:48  I0306 14:05:48.738960 27365 coordinator.cpp:348] Coordinator 
> attempting to write TRUNCATE action at position 2
> 14:05:48  I0306 14:05:48.739085 27366 master.cpp:1676] Recovered 0 agents 
> from the registry (176B); allowing 10mins for agents to re-register
> 14:05:48  I0306 14:05:48.739116 27359 hierarchical.cpp:176] Skipping recovery 
> of hierarchical allocator: nothing to recover
> 14:05:48  I0306 14:05:48.739374 27361 replica.cpp:537] Replica received write 
> request for position 2 from __req_res__(6813)@10.179.217.143:53643
> 14:05:48  I0306 14:05:48.740123 27361 leveldb.cpp:341] Persisting action (16 
> bytes) to leveldb took 723242ns
> 14:05:48  I0306 14:05:48.740141 27361 replica.cpp:708] Persisted action 
> TRUNCATE at position 2
> 14:05:48  I0306 14:05:48.740501 27359 replica.cpp:691] Replica received 
> learned notice for position 2 from @0.0.0.0:0
> 14:05:48  I0306 14:05:48.742621 27359 leveldb.cpp:341] Persisting action (18 
> bytes) to leveldb took 2.098055ms
> 14:05:48  I0306 14:05:48.742652 27359 leveldb.cpp:399] Deleting ~1 keys from 
> leveldb took 10238ns
> 14:05:48  I0306 14:05:48.742660 27359 replica.cpp:708] Persisted action 
> TRUNCATE at position 2
> 14:05:48  I0306 14:05:48.760959 27340 containerizer.cpp:200] Using isolation: 
> cgroups/cpu,filesystem/posix,network/cni
> 14:05:48  I0306 14:05:48.764344 27340 linux_launcher.cpp:150] Using 
> /sys/fs/cgroup/freezer as the freezer hierarchy for the Linux launcher
> 14:05:48  I0306 14:05:48.771309 27340 cluster.cpp:435] Creating default 
> 'local' authorizer
> 14:05:48  I0306 14:05:48.771692 27361 slave.cpp:208] Mesos agent started on 
> (529)@10.179.217.143:53643
> 14:05:48  I0306 14:05:48.771708 27361 slave.cpp:209] Flags at startup: 
> --acls="" --agent_subsystems="memory,cpuacct" 
> --appc_simple_discovery_uri_prefix="http://" 
> --appc_store_dir="/tmp/mesos/store/appc" --authenticate_http_readonly="true" 
> --authenticate_http_readwrite="true" --authenticatee="crammd5" 
> --authentication_backoff_factor="1secs" --authorizer="local" 
> --cgroups_cpu_enable_pids_and_tids_count="false" --cgroups_enable_cfs="false" 
> --cgroups_hierarchy="/sys/fs/cgroup" --cgroups_limit_swap="false" 
> --cgroups_root="mesos_test_e2d11f9a-8bf4-444b-a726-e3f0422ac828" 
> --container_disk_watch_interval="15secs" --containerizers="mesos" 
> --credential="/tmp/CgroupsIsolatorTest_ROOT_CGROUPS_CFS_EnableCfs_OeD88C/credential"
>  --default_role="*" --disk_watch_interval="1mins" --docker="docker" 
> --docker_kill_orphans="true" --docker_registry="https://registry-1.docker.io" 
> --docker_remove_delay="6hrs" --docker_socket="/var/run/docker.sock" 
> --docker_stop_timeout="0ns" --docker_store_dir="/tmp/mesos/store/docker" 
> --docker_volume_checkpoint_dir="/var/run/mesos/isolators/docker/volume" 
> --enforce_container_disk_quota="false" 
> --executor_registration_timeout="1mins" 
> --executor_shutdown_grace_period="5secs" 
> --fetcher_cache_dir="/tmp/CgroupsIsolatorTest_ROOT_CGROUPS_CFS_EnableCfs_OeD88C/fetch"
>  --fetcher_cache_size="2GB" --frameworks_home="" --gc_delay="1weeks" 
> --gc_disk_headroom="0.1" --hadoop_home="" --help="false" 
> --hostname_lookup="true" --http_authenticators="basic" 
> --http_command_executor="false" 
> --http_credentials="/tmp/CgroupsIsolatorTest_ROOT_CGROUPS_CFS_EnableCfs_OeD88C/http_credentials"
>  --image_provisioner_backend="copy" --initialize_driver_logging="true" 
> --isolation="cgroups/cpu,cgroups/mem" --launcher="linux" 
> --launcher_dir="/home/ubuntu/workspace/mesos/Mesos_CI-build/FLAG/Plain/label/mesos-ec2-ubuntu-14.04/mesos/build/src"
>  --logbufsecs="0" --logging_level="INFO" 
> --max_completed_executors_per_framework="150" 
> --oversubscribed_resources_interval="15secs" --perf_duration="10secs" 
> --perf_interval="1mins" --qos_correction_interval_min="0ns" --quiet="false" 
> --recover="reconnect" --recovery_timeout="15mins" 
> --registration_backoff_factor="10ms" 
> --resources="cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]" 
> --revocable_cpu_low_priority="true" 
> --runtime_dir="/tmp/CgroupsIsolatorTest_ROOT_CGROUPS_CFS_EnableCfs_OeD88C" 
> --sandbox_directory="/mnt/mesos/sandbox" --strict="true" --switch_user="true" 
> --systemd_enable_support="true" 
> --systemd_runtime_directory="/run/systemd/system" --version="false" 
> --work_dir="/tmp/CgroupsIsolatorTest_ROOT_CGROUPS_CFS_EnableCfs_22knY9"
> 14:05:48  I0306 14:05:48.771934 27361 slave.cpp:229] Moving agent process 
> into its own cgroup for subsystem: memory
> 14:05:48  I0306 14:05:48.772549 27340 sched.cpp:226] Version: 1.1.1
> 14:05:48  I0306 14:05:48.772698 27365 sched.cpp:330] New master detected at 
> master@10.179.217.143:53643
> 14:05:48  I0306 14:05:48.772733 27365 sched.cpp:396] Authenticating with 
> master master@10.179.217.143:53643
> 14:05:48  I0306 14:05:48.772744 27365 sched.cpp:403] Using default CRAM-MD5 
> authenticatee
> 14:05:48  I0306 14:05:48.772835 27360 authenticatee.cpp:121] Creating new 
> client SASL connection
> 14:05:48  I0306 14:05:48.773182 27366 master.cpp:6690] Authenticating 
> scheduler-05d2b6b3-3a4c-449d-a85c-2580da6bf6a8@10.179.217.143:53643
> 14:05:48  I0306 14:05:48.773252 27366 authenticator.cpp:414] Starting 
> authentication session for crammd5-authenticatee(1090)@10.179.217.143:53643
> 14:05:48  I0306 14:05:48.773360 27365 authenticator.cpp:98] Creating new 
> server SASL connection
> 14:05:48  I0306 14:05:48.773644 27365 authenticatee.cpp:213] Received SASL 
> authentication mechanisms: CRAM-MD5
> 14:05:48  I0306 14:05:48.773666 27365 authenticatee.cpp:239] Attempting to 
> authenticate with mechanism 'CRAM-MD5'
> 14:05:48  I0306 14:05:48.773741 27363 authenticator.cpp:204] Received SASL 
> authentication start
> 14:05:48  I0306 14:05:48.773782 27363 authenticator.cpp:326] Authentication 
> requires more steps
> 14:05:48  I0306 14:05:48.773838 27363 authenticatee.cpp:259] Received SASL 
> authentication step
> 14:05:48  I0306 14:05:48.773910 27363 authenticator.cpp:232] Received SASL 
> authentication step
> 14:05:48  I0306 14:05:48.773937 27363 auxprop.cpp:109] Request to lookup 
> properties for user: 'test-principal' realm: 'ip-10-179-217-143.ec2.internal' 
> server FQDN: 'ip-10-179-217-143.ec2.internal' 
> SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false 
> SASL_AUXPROP_AUTHZID: false 
> 14:05:48  I0306 14:05:48.773946 27363 auxprop.cpp:181] Looking up auxiliary 
> property '*userPassword'
> 14:05:48  I0306 14:05:48.773954 27363 auxprop.cpp:181] Looking up auxiliary 
> property '*cmusaslsecretCRAM-MD5'
> 14:05:48  I0306 14:05:48.773967 27363 auxprop.cpp:109] Request to lookup 
> properties for user: 'test-principal' realm: 'ip-10-179-217-143.ec2.internal' 
> server FQDN: 'ip-10-179-217-143.ec2.internal' 
> SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false 
> SASL_AUXPROP_AUTHZID: true 
> 14:05:48  I0306 14:05:48.773990 27363 auxprop.cpp:131] Skipping auxiliary 
> property '*userPassword' since SASL_AUXPROP_AUTHZID == true
> 14:05:48  I0306 14:05:48.773999 27363 auxprop.cpp:131] Skipping auxiliary 
> property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true
> 14:05:48  I0306 14:05:48.774013 27363 authenticator.cpp:318] Authentication 
> success
> 14:05:48  I0306 14:05:48.774075 27360 authenticatee.cpp:299] Authentication 
> success
> 14:05:48  I0306 14:05:48.774096 27363 authenticator.cpp:432] Authentication 
> session cleanup for crammd5-authenticatee(1090)@10.179.217.143:53643
> 14:05:48  I0306 14:05:48.774123 27362 master.cpp:6720] Successfully 
> authenticated principal 'test-principal' at 
> scheduler-05d2b6b3-3a4c-449d-a85c-2580da6bf6a8@10.179.217.143:53643
> 14:05:48  I0306 14:05:48.774257 27360 sched.cpp:502] Successfully 
> authenticated with master master@10.179.217.143:53643
> 14:05:48  I0306 14:05:48.774276 27360 sched.cpp:820] Sending SUBSCRIBE call 
> to master@10.179.217.143:53643
> 14:05:48  I0306 14:05:48.774322 27360 sched.cpp:853] Will retry registration 
> in 1.670188134secs if necessary
> 14:05:48  I0306 14:05:48.774370 27363 master.cpp:2596] Received SUBSCRIBE 
> call for framework 'default' at 
> scheduler-05d2b6b3-3a4c-449d-a85c-2580da6bf6a8@10.179.217.143:53643
> 14:05:48  I0306 14:05:48.774399 27363 master.cpp:2053] Authorizing framework 
> principal 'test-principal' to receive offers for role '*'
> 14:05:48  I0306 14:05:48.774510 27363 master.cpp:2672] Subscribing framework 
> default with checkpointing disabled and capabilities [  ]
> 14:05:48  I0306 14:05:48.774621 27362 hierarchical.cpp:275] Added framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000
> 14:05:48  I0306 14:05:48.774654 27362 hierarchical.cpp:1694] No allocations 
> performed
> 14:05:48  I0306 14:05:48.774664 27362 hierarchical.cpp:1789] No inverse 
> offers to send out!
> 14:05:48  I0306 14:05:48.774673 27362 hierarchical.cpp:1286] Performed 
> allocation for 0 agents in 32114ns
> 14:05:48  I0306 14:05:48.774704 27363 sched.cpp:743] Framework registered 
> with cb9586dc-a080-41eb-b5b8-88274f84a20a-0000
> 14:05:48  I0306 14:05:48.774740 27363 sched.cpp:757] Scheduler::registered 
> took 12841ns
> 14:05:48  I0306 14:05:48.777748 27361 slave.cpp:229] Moving agent process 
> into its own cgroup for subsystem: cpuacct
> 14:05:48  I0306 14:05:48.781378 27361 credentials.hpp:86] Loading credential 
> for authentication from 
> '/tmp/CgroupsIsolatorTest_ROOT_CGROUPS_CFS_EnableCfs_OeD88C/credential'
> 14:05:48  I0306 14:05:48.781430 27361 slave.cpp:346] Agent using credential 
> for: test-principal
> 14:05:48  I0306 14:05:48.781446 27361 credentials.hpp:37] Loading credentials 
> for authentication from 
> '/tmp/CgroupsIsolatorTest_ROOT_CGROUPS_CFS_EnableCfs_OeD88C/http_credentials'
> 14:05:48  I0306 14:05:48.781498 27361 http.cpp:887] Using default 'basic' 
> HTTP authenticator for realm 'mesos-agent-readonly'
> 14:05:48  I0306 14:05:48.781536 27361 http.cpp:887] Using default 'basic' 
> HTTP authenticator for realm 'mesos-agent-readwrite'
> 14:05:48  I0306 14:05:48.781944 27361 slave.cpp:533] Agent resources: 
> cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]
> 14:05:48  I0306 14:05:48.781976 27361 slave.cpp:541] Agent attributes: [  ]
> 14:05:48  I0306 14:05:48.781980 27361 slave.cpp:546] Agent hostname: 
> ip-10-179-217-143.ec2.internal
> 14:05:48  I0306 14:05:48.782248 27361 state.cpp:57] Recovering state from 
> '/tmp/CgroupsIsolatorTest_ROOT_CGROUPS_CFS_EnableCfs_22knY9/meta'
> 14:05:48  I0306 14:05:48.782367 27359 status_update_manager.cpp:203] 
> Recovering status update manager
> 14:05:48  I0306 14:05:48.782466 27359 containerizer.cpp:555] Recovering 
> containerizer
> 14:05:48  I0306 14:05:48.783879 27363 provisioner.cpp:253] Provisioner 
> recovery complete
> 14:05:48  I0306 14:05:48.783998 27366 slave.cpp:5281] Finished recovery
> 14:05:48  I0306 14:05:48.784171 27366 slave.cpp:5455] Querying resource 
> estimator for oversubscribable resources
> 14:05:48  I0306 14:05:48.784260 27366 slave.cpp:915] New master detected at 
> master@10.179.217.143:53643
> 14:05:48  I0306 14:05:48.784276 27365 status_update_manager.cpp:177] Pausing 
> sending status updates
> 14:05:48  I0306 14:05:48.784276 27366 slave.cpp:974] Authenticating with 
> master master@10.179.217.143:53643
> 14:05:48  I0306 14:05:48.784310 27366 slave.cpp:985] Using default CRAM-MD5 
> authenticatee
> 14:05:48  I0306 14:05:48.784356 27366 slave.cpp:947] Detecting new master
> 14:05:48  I0306 14:05:48.784373 27361 authenticatee.cpp:121] Creating new 
> client SASL connection
> 14:05:48  I0306 14:05:48.784402 27366 slave.cpp:5469] Received 
> oversubscribable resources {} from the resource estimator
> 14:05:48  I0306 14:05:48.784680 27361 master.cpp:6690] Authenticating 
> slave(529)@10.179.217.143:53643
> 14:05:48  I0306 14:05:48.784729 27366 authenticator.cpp:414] Starting 
> authentication session for crammd5-authenticatee(1091)@10.179.217.143:53643
> 14:05:48  I0306 14:05:48.784790 27366 authenticator.cpp:98] Creating new 
> server SASL connection
> 14:05:48  I0306 14:05:48.785076 27366 authenticatee.cpp:213] Received SASL 
> authentication mechanisms: CRAM-MD5
> 14:05:48  I0306 14:05:48.785099 27366 authenticatee.cpp:239] Attempting to 
> authenticate with mechanism 'CRAM-MD5'
> 14:05:48  I0306 14:05:48.785142 27366 authenticator.cpp:204] Received SASL 
> authentication start
> 14:05:48  I0306 14:05:48.785173 27366 authenticator.cpp:326] Authentication 
> requires more steps
> 14:05:48  I0306 14:05:48.785213 27366 authenticatee.cpp:259] Received SASL 
> authentication step
> 14:05:48  I0306 14:05:48.785275 27362 authenticator.cpp:232] Received SASL 
> authentication step
> 14:05:48  I0306 14:05:48.785302 27362 auxprop.cpp:109] Request to lookup 
> properties for user: 'test-principal' realm: 'ip-10-179-217-143.ec2.internal' 
> server FQDN: 'ip-10-179-217-143.ec2.internal' 
> SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false 
> SASL_AUXPROP_AUTHZID: false 
> 14:05:48  I0306 14:05:48.785310 27362 auxprop.cpp:181] Looking up auxiliary 
> property '*userPassword'
> 14:05:48  I0306 14:05:48.785316 27362 auxprop.cpp:181] Looking up auxiliary 
> property '*cmusaslsecretCRAM-MD5'
> 14:05:48  I0306 14:05:48.785321 27362 auxprop.cpp:109] Request to lookup 
> properties for user: 'test-principal' realm: 'ip-10-179-217-143.ec2.internal' 
> server FQDN: 'ip-10-179-217-143.ec2.internal' 
> SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false 
> SASL_AUXPROP_AUTHZID: true 
> 14:05:48  I0306 14:05:48.785326 27362 auxprop.cpp:131] Skipping auxiliary 
> property '*userPassword' since SASL_AUXPROP_AUTHZID == true
> 14:05:48  I0306 14:05:48.785329 27362 auxprop.cpp:131] Skipping auxiliary 
> property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true
> 14:05:48  I0306 14:05:48.785336 27362 authenticator.cpp:318] Authentication 
> success
> 14:05:48  I0306 14:05:48.785372 27362 authenticatee.cpp:299] Authentication 
> success
> 14:05:48  I0306 14:05:48.785393 27366 master.cpp:6720] Successfully 
> authenticated principal 'test-principal' at slave(529)@10.179.217.143:53643
> 14:05:48  I0306 14:05:48.785429 27364 authenticator.cpp:432] Authentication 
> session cleanup for crammd5-authenticatee(1091)@10.179.217.143:53643
> 14:05:48  I0306 14:05:48.785457 27362 slave.cpp:1069] Successfully 
> authenticated with master master@10.179.217.143:53643
> 14:05:48  I0306 14:05:48.785509 27362 slave.cpp:1475] Will retry registration 
> in 1.486346ms if necessary
> 14:05:48  I0306 14:05:48.785580 27359 master.cpp:5113] Registering agent at 
> slave(529)@10.179.217.143:53643 (ip-10-179-217-143.ec2.internal) with id 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-S0
> 14:05:48  I0306 14:05:48.785681 27359 registrar.cpp:461] Applied 1 operations 
> in 10583ns; attempting to update the registry
> 14:05:48  I0306 14:05:48.785867 27364 log.cpp:577] Attempting to append 402 
> bytes to the log
> 14:05:48  I0306 14:05:48.786018 27360 coordinator.cpp:348] Coordinator 
> attempting to write APPEND action at position 3
> 14:05:48  I0306 14:05:48.786316 27363 replica.cpp:537] Replica received write 
> request for position 3 from __req_res__(6814)@10.179.217.143:53643
> 14:05:48  I0306 14:05:48.786973 27363 leveldb.cpp:341] Persisting action (421 
> bytes) to leveldb took 634044ns
> 14:05:48  I0306 14:05:48.786991 27363 replica.cpp:708] Persisted action 
> APPEND at position 3
> 14:05:48  I0306 14:05:48.787196 27363 replica.cpp:691] Replica received 
> learned notice for position 3 from @0.0.0.0:0
> 14:05:48  I0306 14:05:48.787875 27361 slave.cpp:1475] Will retry registration 
> in 28.634219ms if necessary
> 14:05:48  I0306 14:05:48.787912 27359 master.cpp:5101] Ignoring register 
> agent message from slave(529)@10.179.217.143:53643 
> (ip-10-179-217-143.ec2.internal) as admission is already in progress
> 14:05:48  I0306 14:05:48.789623 27363 leveldb.cpp:341] Persisting action (423 
> bytes) to leveldb took 2.405076ms
> 14:05:48  I0306 14:05:48.789644 27363 replica.cpp:708] Persisted action 
> APPEND at position 3
> 14:05:48  I0306 14:05:48.789880 27361 registrar.cpp:506] Successfully updated 
> the registry in 4.174848ms
> 14:05:48  I0306 14:05:48.789963 27365 log.cpp:596] Attempting to truncate the 
> log to 3
> 14:05:48  I0306 14:05:48.790004 27365 coordinator.cpp:348] Coordinator 
> attempting to write TRUNCATE action at position 4
> 14:05:48  I0306 14:05:48.790079 27363 slave.cpp:4133] Received ping from 
> slave-observer(501)@10.179.217.143:53643
> 14:05:48  I0306 14:05:48.790103 27362 master.cpp:5184] Registered agent 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-S0 at slave(529)@10.179.217.143:53643 
> (ip-10-179-217-143.ec2.internal) with cpus(*):2; mem(*):1024; disk(*):1024; 
> ports(*):[31000-32000]
> 14:05:48  I0306 14:05:48.790155 27363 slave.cpp:1115] Registered with master 
> master@10.179.217.143:53643; given agent ID 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-S0
> 14:05:48  I0306 14:05:48.790171 27363 fetcher.cpp:90] Clearing fetcher cache
> 14:05:48  I0306 14:05:48.790175 27361 hierarchical.cpp:485] Added agent 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-S0 (ip-10-179-217-143.ec2.internal) with 
> cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: {})
> 14:05:48  I0306 14:05:48.790240 27359 status_update_manager.cpp:184] Resuming 
> sending status updates
> 14:05:48  I0306 14:05:48.790323 27366 replica.cpp:537] Replica received write 
> request for position 4 from __req_res__(6815)@10.179.217.143:53643
> 14:05:48  I0306 14:05:48.790338 27363 slave.cpp:1138] Checkpointing SlaveInfo 
> to 
> '/tmp/CgroupsIsolatorTest_ROOT_CGROUPS_CFS_EnableCfs_22knY9/meta/slaves/cb9586dc-a080-41eb-b5b8-88274f84a20a-S0/slave.info'
> 14:05:48  I0306 14:05:48.790350 27361 hierarchical.cpp:1789] No inverse 
> offers to send out!
> 14:05:48  I0306 14:05:48.790365 27361 hierarchical.cpp:1309] Performed 
> allocation for agent cb9586dc-a080-41eb-b5b8-88274f84a20a-S0 in 167885ns
> 14:05:48  I0306 14:05:48.790473 27363 slave.cpp:1175] Forwarding total 
> oversubscribed resources {}
> 14:05:48  I0306 14:05:48.790477 27365 master.cpp:6519] Sending 1 offers to 
> framework cb9586dc-a080-41eb-b5b8-88274f84a20a-0000 (default) at 
> scheduler-05d2b6b3-3a4c-449d-a85c-2580da6bf6a8@10.179.217.143:53643
> 14:05:48  I0306 14:05:48.790561 27365 master.cpp:5581] Received update of 
> agent cb9586dc-a080-41eb-b5b8-88274f84a20a-S0 at 
> slave(529)@10.179.217.143:53643 (ip-10-179-217-143.ec2.internal) with total 
> oversubscribed resources {}
> 14:05:48  I0306 14:05:48.790627 27361 sched.cpp:917] 
> Scheduler::resourceOffers took 24859ns
> 14:05:48  I0306 14:05:48.790668 27359 hierarchical.cpp:555] Agent 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-S0 (ip-10-179-217-143.ec2.internal) 
> updated with oversubscribed resources {} (total: cpus(*):2; mem(*):1024; 
> disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):2; mem(*):1024; 
> disk(*):1024; ports(*):[31000-32000])
> 14:05:48  I0306 14:05:48.790725 27359 hierarchical.cpp:1694] No allocations 
> performed
> 14:05:48  I0306 14:05:48.790740 27359 hierarchical.cpp:1789] No inverse 
> offers to send out!
> 14:05:48  I0306 14:05:48.790748 27359 hierarchical.cpp:1309] Performed 
> allocation for agent cb9586dc-a080-41eb-b5b8-88274f84a20a-S0 in 44409ns
> 14:05:48  I0306 14:05:48.791046 27364 master.cpp:3554] Processing ACCEPT call 
> for offers: [ cb9586dc-a080-41eb-b5b8-88274f84a20a-O0 ] on agent 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-S0 at slave(529)@10.179.217.143:53643 
> (ip-10-179-217-143.ec2.internal) for framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000 (default) at 
> scheduler-05d2b6b3-3a4c-449d-a85c-2580da6bf6a8@10.179.217.143:53643
> 14:05:48  I0306 14:05:48.791075 27364 master.cpp:3159] Authorizing framework 
> principal 'test-principal' to launch task 44672e61-7a65-4d52-a2cc-07716edc99e1
> 14:05:48  I0306 14:05:48.791399 27359 master.cpp:8272] Adding task 
> 44672e61-7a65-4d52-a2cc-07716edc99e1 with resources cpus(*):0.5 on agent 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-S0 (ip-10-179-217-143.ec2.internal)
> 14:05:48  I0306 14:05:48.791431 27359 master.cpp:4192] Launching task 
> 44672e61-7a65-4d52-a2cc-07716edc99e1 of framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000 (default) at 
> scheduler-05d2b6b3-3a4c-449d-a85c-2580da6bf6a8@10.179.217.143:53643 with 
> resources cpus(*):0.5 on agent cb9586dc-a080-41eb-b5b8-88274f84a20a-S0 at 
> slave(529)@10.179.217.143:53643 (ip-10-179-217-143.ec2.internal)
> 14:05:48  I0306 14:05:48.791615 27361 slave.cpp:1539] Got assigned task 
> '44672e61-7a65-4d52-a2cc-07716edc99e1' for framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000
> 14:05:48  I0306 14:05:48.791683 27363 hierarchical.cpp:1018] Recovered 
> cpus(*):1.5; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (total: 
> cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: 
> cpus(*):0.5) on agent cb9586dc-a080-41eb-b5b8-88274f84a20a-S0 from framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000
> 14:05:48  I0306 14:05:48.791724 27363 hierarchical.cpp:1055] Framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000 filtered agent 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-S0 for 5secs
> 14:05:48  I0306 14:05:48.791806 27361 slave.cpp:1701] Launching task 
> '44672e61-7a65-4d52-a2cc-07716edc99e1' for framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000
> 14:05:48  I0306 14:05:48.792166 27361 paths.cpp:536] Trying to chown 
> '/tmp/CgroupsIsolatorTest_ROOT_CGROUPS_CFS_EnableCfs_22knY9/slaves/cb9586dc-a080-41eb-b5b8-88274f84a20a-S0/frameworks/cb9586dc-a080-41eb-b5b8-88274f84a20a-0000/executors/44672e61-7a65-4d52-a2cc-07716edc99e1/runs/bd7cebb0-9176-4842-b43a-764bec752f08'
>  to user 'root'
> 14:05:48  I0306 14:05:48.793823 27366 leveldb.cpp:341] Persisting action (16 
> bytes) to leveldb took 3.353898ms
> 14:05:48  I0306 14:05:48.793866 27366 replica.cpp:708] Persisted action 
> TRUNCATE at position 4
> 14:05:48  I0306 14:05:48.794509 27364 replica.cpp:691] Replica received 
> learned notice for position 4 from @0.0.0.0:0
> 14:05:48  I0306 14:05:48.796682 27361 slave.cpp:6179] Launching executor 
> '44672e61-7a65-4d52-a2cc-07716edc99e1' of framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000 with resources cpus(*):0.1; 
> mem(*):32 in work directory 
> '/tmp/CgroupsIsolatorTest_ROOT_CGROUPS_CFS_EnableCfs_22knY9/slaves/cb9586dc-a080-41eb-b5b8-88274f84a20a-S0/frameworks/cb9586dc-a080-41eb-b5b8-88274f84a20a-0000/executors/44672e61-7a65-4d52-a2cc-07716edc99e1/runs/bd7cebb0-9176-4842-b43a-764bec752f08'
> 14:05:48  I0306 14:05:48.796839 27366 containerizer.cpp:938] Starting 
> container bd7cebb0-9176-4842-b43a-764bec752f08 for executor 
> '44672e61-7a65-4d52-a2cc-07716edc99e1' of framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000
> 14:05:48  I0306 14:05:48.796856 27361 slave.cpp:1987] Queued task 
> '44672e61-7a65-4d52-a2cc-07716edc99e1' for executor 
> '44672e61-7a65-4d52-a2cc-07716edc99e1' of framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000
> 14:05:48  I0306 14:05:48.796882 27361 slave.cpp:868] Successfully attached 
> file 
> '/tmp/CgroupsIsolatorTest_ROOT_CGROUPS_CFS_EnableCfs_22knY9/slaves/cb9586dc-a080-41eb-b5b8-88274f84a20a-S0/frameworks/cb9586dc-a080-41eb-b5b8-88274f84a20a-0000/executors/44672e61-7a65-4d52-a2cc-07716edc99e1/runs/bd7cebb0-9176-4842-b43a-764bec752f08'
> 14:05:48  I0306 14:05:48.797106 27364 leveldb.cpp:341] Persisting action (18 
> bytes) to leveldb took 2.576545ms
> 14:05:48  I0306 14:05:48.797159 27364 leveldb.cpp:399] Deleting ~2 keys from 
> leveldb took 20966ns
> 14:05:48  I0306 14:05:48.797181 27364 replica.cpp:708] Persisted action 
> TRUNCATE at position 4
> 14:05:48  I0306 14:05:48.797240 27366 cgroups.cpp:405] Creating cgroup at 
> '/sys/fs/cgroup/cpu/mesos_test_ab04bf04-20c2-45fc-8326-3f5a4618a16d/bd7cebb0-9176-4842-b43a-764bec752f08'
>  for container bd7cebb0-9176-4842-b43a-764bec752f08
> 14:05:48  I0306 14:05:48.798014 27366 cgroups.cpp:468] Chown the cgroup at 
> '/sys/fs/cgroup/cpu/mesos_test_ab04bf04-20c2-45fc-8326-3f5a4618a16d/bd7cebb0-9176-4842-b43a-764bec752f08'
>  to user 'root' for container bd7cebb0-9176-4842-b43a-764bec752f08
> 14:05:48  I0306 14:05:48.798056 27366 cgroups.cpp:405] Creating cgroup at 
> '/sys/fs/cgroup/cpuacct/mesos_test_ab04bf04-20c2-45fc-8326-3f5a4618a16d/bd7cebb0-9176-4842-b43a-764bec752f08'
>  for container bd7cebb0-9176-4842-b43a-764bec752f08
> 14:05:48  I0306 14:05:48.798708 27366 cgroups.cpp:468] Chown the cgroup at 
> '/sys/fs/cgroup/cpuacct/mesos_test_ab04bf04-20c2-45fc-8326-3f5a4618a16d/bd7cebb0-9176-4842-b43a-764bec752f08'
>  to user 'root' for container bd7cebb0-9176-4842-b43a-764bec752f08
> 14:05:48  I0306 14:05:48.799271 27362 cpu.cpp:101] Updated 'cpu.shares' to 
> 614 (cpus 0.6) for container bd7cebb0-9176-4842-b43a-764bec752f08
> 14:05:48  I0306 14:05:48.801187 27362 cpu.cpp:121] Updated 
> 'cpu.cfs_period_us' to 100ms and 'cpu.cfs_quota_us' to 60ms (cpus 0.6) for 
> container bd7cebb0-9176-4842-b43a-764bec752f08
> 14:05:48  I0306 14:05:48.801741 27365 containerizer.cpp:1463] Launching 
> 'mesos-containerizer' with flags 
> '--command="{"arguments":["mesos-executor","--launcher_dir=\/home\/ubuntu\/workspace\/mesos\/Mesos_CI-build\/FLAG\/Plain\/label\/mesos-ec2-ubuntu-14.04\/mesos\/build\/src"],"shell":false,"value":"\/home\/ubuntu\/workspace\/mesos\/Mesos_CI-build\/FLAG\/Plain\/label\/mesos-ec2-ubuntu-14.04\/mesos\/build\/src\/mesos-executor"}"
>  --help="false" --pipe_read="32" --pipe_write="35" --pre_exec_commands="[]" 
> --runtime_directory="/tmp/CgroupsIsolatorTest_ROOT_CGROUPS_CFS_EnableCfs_lZtCNG/containers/bd7cebb0-9176-4842-b43a-764bec752f08"
>  --unshare_namespace_mnt="false" --user="root" 
> --working_directory="/tmp/CgroupsIsolatorTest_ROOT_CGROUPS_CFS_EnableCfs_22knY9/slaves/cb9586dc-a080-41eb-b5b8-88274f84a20a-S0/frameworks/cb9586dc-a080-41eb-b5b8-88274f84a20a-0000/executors/44672e61-7a65-4d52-a2cc-07716edc99e1/runs/bd7cebb0-9176-4842-b43a-764bec752f08"'
> 14:05:48  I0306 14:05:48.801892 27359 linux_launcher.cpp:421] Launching 
> container bd7cebb0-9176-4842-b43a-764bec752f08 and cloning with namespaces 
> 14:05:48  I0306 14:05:48.867166 14470 exec.cpp:162] Version: 1.1.1
> 14:05:48  I0306 14:05:48.867635 27360 slave.cpp:3231] Got registration for 
> executor '44672e61-7a65-4d52-a2cc-07716edc99e1' of framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000 from 
> executor(1)@10.179.217.143:57400
> 14:05:48  I0306 14:05:48.868098 14476 exec.cpp:237] Executor registered on 
> agent cb9586dc-a080-41eb-b5b8-88274f84a20a-S0
> 14:05:48  I0306 14:05:48.868496 27366 cpu.cpp:101] Updated 'cpu.shares' to 
> 614 (cpus 0.6) for container bd7cebb0-9176-4842-b43a-764bec752f08
> 14:05:48  I0306 14:05:48.869145 27366 cpu.cpp:121] Updated 
> 'cpu.cfs_period_us' to 100ms and 'cpu.cfs_quota_us' to 60ms (cpus 0.6) for 
> container bd7cebb0-9176-4842-b43a-764bec752f08
> 14:05:48  Received SUBSCRIBED event
> 14:05:48  Subscribed executor on ip-10-179-217-143.ec2.internal
> 14:05:48  I0306 14:05:48.869428 27366 slave.cpp:2191] Sending queued task 
> '44672e61-7a65-4d52-a2cc-07716edc99e1' to executor 
> '44672e61-7a65-4d52-a2cc-07716edc99e1' of framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000 at executor(1)@10.179.217.143:57400
> 14:05:48  Received LAUNCH event
> 14:05:48  Starting task 44672e61-7a65-4d52-a2cc-07716edc99e1
> 14:05:48  
> /home/ubuntu/workspace/mesos/Mesos_CI-build/FLAG/Plain/label/mesos-ec2-ubuntu-14.04/mesos/build/src/mesos-containerizer
>  launch --command="{"shell":true,"value":"cat \/dev\/urandom > \/dev\/null & 
> export MESOS_TEST_PID=$! && sleep 0.5 && kill $MESOS_TEST_PID"}" 
> --help="false" --unshare_namespace_mnt="false"
> 14:05:48  Forked command at 14479
> 14:05:48  I0306 14:05:48.872326 27360 slave.cpp:3634] Handling status update 
> TASK_RUNNING (UUID: 0d6c2530-3b2d-4c65-8187-f641e78edd45) for task 
> 44672e61-7a65-4d52-a2cc-07716edc99e1 of framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000 from 
> executor(1)@10.179.217.143:57400
> 14:05:48  I0306 14:05:48.872912 27363 status_update_manager.cpp:323] Received 
> status update TASK_RUNNING (UUID: 0d6c2530-3b2d-4c65-8187-f641e78edd45) for 
> task 44672e61-7a65-4d52-a2cc-07716edc99e1 of framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000
> 14:05:48  I0306 14:05:48.872939 27363 status_update_manager.cpp:500] Creating 
> StatusUpdate stream for task 44672e61-7a65-4d52-a2cc-07716edc99e1 of 
> framework cb9586dc-a080-41eb-b5b8-88274f84a20a-0000
> 14:05:48  I0306 14:05:48.873100 27363 status_update_manager.cpp:377] 
> Forwarding update TASK_RUNNING (UUID: 0d6c2530-3b2d-4c65-8187-f641e78edd45) 
> for task 44672e61-7a65-4d52-a2cc-07716edc99e1 of framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000 to the agent
> 14:05:48  I0306 14:05:48.873208 27360 slave.cpp:4051] Forwarding the update 
> TASK_RUNNING (UUID: 0d6c2530-3b2d-4c65-8187-f641e78edd45) for task 
> 44672e61-7a65-4d52-a2cc-07716edc99e1 of framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000 to master@10.179.217.143:53643
> 14:05:48  I0306 14:05:48.873282 27360 slave.cpp:3945] Status update manager 
> successfully handled status update TASK_RUNNING (UUID: 
> 0d6c2530-3b2d-4c65-8187-f641e78edd45) for task 
> 44672e61-7a65-4d52-a2cc-07716edc99e1 of framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000
> 14:05:48  I0306 14:05:48.873307 27360 slave.cpp:3961] Sending acknowledgement 
> for status update TASK_RUNNING (UUID: 0d6c2530-3b2d-4c65-8187-f641e78edd45) 
> for task 44672e61-7a65-4d52-a2cc-07716edc99e1 of framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000 to executor(1)@10.179.217.143:57400
> 14:05:48  I0306 14:05:48.873348 27361 master.cpp:5717] Status update 
> TASK_RUNNING (UUID: 0d6c2530-3b2d-4c65-8187-f641e78edd45) for task 
> 44672e61-7a65-4d52-a2cc-07716edc99e1 of framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000 from agent 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-S0 at slave(529)@10.179.217.143:53643 
> (ip-10-179-217-143.ec2.internal)
> 14:05:48  I0306 14:05:48.873384 27361 master.cpp:5779] Forwarding status 
> update TASK_RUNNING (UUID: 0d6c2530-3b2d-4c65-8187-f641e78edd45) for task 
> 44672e61-7a65-4d52-a2cc-07716edc99e1 of framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000
> 14:05:48  I0306 14:05:48.873446 27361 master.cpp:7650] Updating the state of 
> task 44672e61-7a65-4d52-a2cc-07716edc99e1 of framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000 (latest state: TASK_RUNNING, status 
> update state: TASK_RUNNING)
> 14:05:48  I0306 14:05:48.873560 27362 sched.cpp:1025] Scheduler::statusUpdate 
> took 49008ns
> 14:05:48  I0306 14:05:48.873677 27362 master.cpp:4829] Processing ACKNOWLEDGE 
> call 0d6c2530-3b2d-4c65-8187-f641e78edd45 for task 
> 44672e61-7a65-4d52-a2cc-07716edc99e1 of framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000 (default) at 
> scheduler-05d2b6b3-3a4c-449d-a85c-2580da6bf6a8@10.179.217.143:53643 on agent 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-S0
> 14:05:48  I0306 14:05:48.873826 27365 status_update_manager.cpp:395] Received 
> status update acknowledgement (UUID: 0d6c2530-3b2d-4c65-8187-f641e78edd45) 
> for task 44672e61-7a65-4d52-a2cc-07716edc99e1 of framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000
> 14:05:48  I0306 14:05:48.873893 27365 slave.cpp:2958] Status update manager 
> successfully handled status update acknowledgement (UUID: 
> 0d6c2530-3b2d-4c65-8187-f641e78edd45) for task 
> 44672e61-7a65-4d52-a2cc-07716edc99e1 of framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000
> 14:05:48  ../../src/tests/containerizer/cgroups_isolator_tests.cpp:438: 
> Failure
> 14:05:48  Expected: (0.05) <= (cpuTime), actual: 0.05 vs 0.04
> 14:05:48  I0306 14:05:48.875196 27340 sched.cpp:1995] Asked to stop the driver
> 14:05:48  I0306 14:05:48.875253 27366 sched.cpp:1187] Stopping framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000
> 14:05:48  I0306 14:05:48.875341 27361 master.cpp:7093] Processing TEARDOWN 
> call for framework cb9586dc-a080-41eb-b5b8-88274f84a20a-0000 (default) at 
> scheduler-05d2b6b3-3a4c-449d-a85c-2580da6bf6a8@10.179.217.143:53643
> 14:05:48  I0306 14:05:48.875366 27361 master.cpp:7105] Removing framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000 (default) at 
> scheduler-05d2b6b3-3a4c-449d-a85c-2580da6bf6a8@10.179.217.143:53643
> 14:05:48  I0306 14:05:48.875432 27361 master.cpp:7650] Updating the state of 
> task 44672e61-7a65-4d52-a2cc-07716edc99e1 of framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000 (latest state: TASK_KILLED, status 
> update state: TASK_KILLED)
> 14:05:48  I0306 14:05:48.875449 27362 slave.cpp:2511] Asked to shut down 
> framework cb9586dc-a080-41eb-b5b8-88274f84a20a-0000 by 
> master@10.179.217.143:53643
> 14:05:48  I0306 14:05:48.875470 27362 slave.cpp:2536] Shutting down framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000
> 14:05:48  I0306 14:05:48.875452 27359 hierarchical.cpp:386] Deactivated 
> framework cb9586dc-a080-41eb-b5b8-88274f84a20a-0000
> 14:05:48  I0306 14:05:48.875484 27362 slave.cpp:4860] Shutting down executor 
> '44672e61-7a65-4d52-a2cc-07716edc99e1' of framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000 at executor(1)@10.179.217.143:57400
> 14:05:48  I0306 14:05:48.875561 27359 hierarchical.cpp:1018] Recovered 
> cpus(*):0.5 (total: cpus(*):2; mem(*):1024; disk(*):1024; 
> ports(*):[31000-32000], allocated: {}) on agent 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-S0 from framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000
> 14:05:48  I0306 14:05:48.875558 27361 master.cpp:7746] Removing task 
> 44672e61-7a65-4d52-a2cc-07716edc99e1 with resources cpus(*):0.5 of framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000 on agent 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-S0 at slave(529)@10.179.217.143:53643 
> (ip-10-179-217-143.ec2.internal)
> 14:05:48  I0306 14:05:48.875671 14474 exec.cpp:414] Executor asked to shutdown
> 14:05:48  Received SHUTDOWN event
> 14:05:48  Shutting down
> 14:05:48  Sending SIGTERM to process tree at pid 14479
> 14:05:48  I0306 14:05:48.875804 27363 containerizer.cpp:1961] Destroying 
> container bd7cebb0-9176-4842-b43a-764bec752f08 in RUNNING state
> 14:05:48  I0306 14:05:48.875821 27366 hierarchical.cpp:337] Removed framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000
> 14:05:48  I0306 14:05:48.876000 27363 linux_launcher.cpp:498] Asked to 
> destroy container bd7cebb0-9176-4842-b43a-764bec752f08
> 14:05:48  I0306 14:05:48.876375 27363 linux_launcher.cpp:541] Using freezer 
> to destroy cgroup 
> mesos_test_ab04bf04-20c2-45fc-8326-3f5a4618a16d/bd7cebb0-9176-4842-b43a-764bec752f08
> 14:05:48  I0306 14:05:48.877136 27363 cgroups.cpp:2705] Freezing cgroup 
> /sys/fs/cgroup/freezer/mesos_test_ab04bf04-20c2-45fc-8326-3f5a4618a16d/bd7cebb0-9176-4842-b43a-764bec752f08
> 14:05:48  I0306 14:05:48.878432 27361 cgroups.cpp:1439] Successfully froze 
> cgroup 
> /sys/fs/cgroup/freezer/mesos_test_ab04bf04-20c2-45fc-8326-3f5a4618a16d/bd7cebb0-9176-4842-b43a-764bec752f08
>  after 1.268736ms
> 14:05:48  I0306 14:05:48.879509 27359 cgroups.cpp:2723] Thawing cgroup 
> /sys/fs/cgroup/freezer/mesos_test_ab04bf04-20c2-45fc-8326-3f5a4618a16d/bd7cebb0-9176-4842-b43a-764bec752f08
> 14:05:48  I0306 14:05:48.880514 27363 cgroups.cpp:1468] Successfully thawed 
> cgroup 
> /sys/fs/cgroup/freezer/mesos_test_ab04bf04-20c2-45fc-8326-3f5a4618a16d/bd7cebb0-9176-4842-b43a-764bec752f08
>  after 980992ns
> 14:05:48  I0306 14:05:48.881316 27361 slave.cpp:4179] Got exited event for 
> executor(1)@10.179.217.143:57400
> 14:05:49  I0306 14:05:48.974081 27362 containerizer.cpp:2324] Container 
> bd7cebb0-9176-4842-b43a-764bec752f08 has exited
> 14:05:49  I0306 14:05:48.977591 27365 provisioner.cpp:324] Ignoring destroy 
> request for unknown container bd7cebb0-9176-4842-b43a-764bec752f08
> 14:05:49  I0306 14:05:48.977797 27363 slave.cpp:4542] Executor 
> '44672e61-7a65-4d52-a2cc-07716edc99e1' of framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000 terminated with signal Killed
> 14:05:49  I0306 14:05:48.977820 27363 slave.cpp:4646] Cleaning up executor 
> '44672e61-7a65-4d52-a2cc-07716edc99e1' of framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000 at executor(1)@10.179.217.143:57400
> 14:05:49  I0306 14:05:48.977952 27366 gc.cpp:55] Scheduling 
> '/tmp/CgroupsIsolatorTest_ROOT_CGROUPS_CFS_EnableCfs_22knY9/slaves/cb9586dc-a080-41eb-b5b8-88274f84a20a-S0/frameworks/cb9586dc-a080-41eb-b5b8-88274f84a20a-0000/executors/44672e61-7a65-4d52-a2cc-07716edc99e1/runs/bd7cebb0-9176-4842-b43a-764bec752f08'
>  for gc 6.99998868188741days in the future
> 14:05:49  I0306 14:05:48.977972 27363 slave.cpp:4734] Cleaning up framework 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-0000
> 14:05:49  I0306 14:05:48.978001 27366 gc.cpp:55] Scheduling 
> '/tmp/CgroupsIsolatorTest_ROOT_CGROUPS_CFS_EnableCfs_22knY9/slaves/cb9586dc-a080-41eb-b5b8-88274f84a20a-S0/frameworks/cb9586dc-a080-41eb-b5b8-88274f84a20a-0000/executors/44672e61-7a65-4d52-a2cc-07716edc99e1'
>  for gc 6.9999886810163days in the future
> 14:05:49  I0306 14:05:48.978030 27365 status_update_manager.cpp:285] Closing 
> status update streams for framework cb9586dc-a080-41eb-b5b8-88274f84a20a-0000
> 14:05:49  I0306 14:05:48.978080 27365 status_update_manager.cpp:531] Cleaning 
> up status update stream for task 44672e61-7a65-4d52-a2cc-07716edc99e1 of 
> framework cb9586dc-a080-41eb-b5b8-88274f84a20a-0000
> 14:05:49  I0306 14:05:48.978098 27366 gc.cpp:55] Scheduling 
> '/tmp/CgroupsIsolatorTest_ROOT_CGROUPS_CFS_EnableCfs_22knY9/slaves/cb9586dc-a080-41eb-b5b8-88274f84a20a-S0/frameworks/cb9586dc-a080-41eb-b5b8-88274f84a20a-0000'
>  for gc 6.9999886801837days in the future
> 14:05:49  I0306 14:05:48.978276 27363 slave.cpp:787] Agent terminating
> 14:05:49  I0306 14:05:48.978350 27363 master.cpp:1258] Agent 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-S0 at slave(529)@10.179.217.143:53643 
> (ip-10-179-217-143.ec2.internal) disconnected
> 14:05:49  I0306 14:05:48.978369 27363 master.cpp:2963] Disconnecting agent 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-S0 at slave(529)@10.179.217.143:53643 
> (ip-10-179-217-143.ec2.internal)
> 14:05:49  I0306 14:05:48.978411 27363 master.cpp:2982] Deactivating agent 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-S0 at slave(529)@10.179.217.143:53643 
> (ip-10-179-217-143.ec2.internal)
> 14:05:49  I0306 14:05:48.978484 27361 hierarchical.cpp:584] Agent 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-S0 deactivated
> 14:05:49  I0306 14:05:48.981438 27340 master.cpp:1097] Master terminating
> 14:05:49  I0306 14:05:48.981596 27365 hierarchical.cpp:517] Removed agent 
> cb9586dc-a080-41eb-b5b8-88274f84a20a-S0
> 14:05:49  I0306 14:05:48.984863 27361 cgroups.cpp:2705] Freezing cgroup 
> /sys/fs/cgroup/freezer/mesos_test_ab04bf04-20c2-45fc-8326-3f5a4618a16d
> 14:05:49  I0306 14:05:48.985785 27364 cgroups.cpp:1439] Successfully froze 
> cgroup /sys/fs/cgroup/freezer/mesos_test_ab04bf04-20c2-45fc-8326-3f5a4618a16d 
> after 894976ns
> 14:05:49  I0306 14:05:48.986660 27364 cgroups.cpp:2723] Thawing cgroup 
> /sys/fs/cgroup/freezer/mesos_test_ab04bf04-20c2-45fc-8326-3f5a4618a16d
> 14:05:49  I0306 14:05:48.987552 27363 cgroups.cpp:1468] Successfully thawed 
> cgroup /sys/fs/cgroup/freezer/mesos_test_ab04bf04-20c2-45fc-8326-3f5a4618a16d 
> after 867328ns
> 14:05:49  [  FAILED  ] CgroupsIsolatorTest.ROOT_CGROUPS_CFS_EnableCfs (302 ms)
> {noformat}
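For reference, the cgroup values visible in the log (`cpu.shares` 614, `cpu.cfs_quota_us` 60ms for a 0.6-cpu task) follow the usual Linux mapping from requested cpus. A minimal sketch of that arithmetic, with the per-cpu share weight and CFS period assumed from the kernel defaults the log reflects (the constant names here are illustrative, not Mesos symbols):

```python
# Illustrative reconstruction of the cgroup CPU values seen in the log
# for a task requesting 0.6 cpus. Constants are the kernel defaults
# implied by the log output, not taken from the Mesos source.
CPU_SHARES_PER_CPU = 1024   # default cpu.shares weight for one full CPU
CFS_PERIOD_US = 100_000     # 100ms CFS period shown in the log

def cgroup_cpu_values(cpus):
    """Return (cpu.shares, cpu.cfs_quota_us) for a fractional cpu request."""
    shares = int(cpus * CPU_SHARES_PER_CPU)   # 0.6 * 1024 -> 614
    quota_us = int(cpus * CFS_PERIOD_US)      # 0.6 * 100ms -> 60000us
    return shares, quota_us

print(cgroup_cpu_values(0.6))  # (614, 60000), matching the log lines above
```

The failure itself is the lower-bound check at cgroups_isolator_tests.cpp:438, effectively `EXPECT_LE(0.05, cpuTime)`, which observed 0.04 here (and the Centos7 run above tripped the matching upper bound, 0.31 vs 0.30) — i.e. the test's timing tolerances are tight enough that scheduler jitter on CI hosts can land the measured CPU time just outside the window.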



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)