[ https://issues.apache.org/jira/browse/MESOS-9285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16677505#comment-16677505 ]

Joseph Wu commented on MESOS-9285:
----------------------------------

Observed on another internal CI run, albeit on CentOS 7 and on the 1.5.x branch 
(https://github.com/apache/mesos/tree/6008868c715733b7d798279e9b39ae3483f7d955):
{code}
[ RUN      ] 
DockerVolumeIsolatorTest.ROOT_INTERNET_CURL_CommandTaskRootfsWithAbsolutePathVolume
I1106 20:21:29.367887 31384 cluster.cpp:172] Creating default 'local' authorizer
I1106 20:21:29.368988 21440 master.cpp:457] Master 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1 (ip-172-16-10-72.ec2.internal) started on 
172.16.10.72:46670
I1106 20:21:29.369009 21440 master.cpp:459] Flags at startup: --acls="" 
--agent_ping_timeout="15secs" --agent_reregister_timeout="10mins" 
--allocation_interval="1secs" --allocator="hierarchical" 
--authenticate_agents="true" --authenticate_frameworks="true" 
--authenticate_http_frameworks="true" --authenticate_http_readonly="true" 
--authenticate_http_readwrite="true" --authentication_v0_timeout="15secs" 
--authenticators="crammd5" --authorizers="local" 
--credentials="/tmp/zLJ6wA/credentials" --filter_gpu_resources="true" 
--framework_sorter="drf" --help="false" --hostname_lookup="true" 
--http_authenticators="basic" --http_framework_authenticators="basic" 
--initialize_driver_logging="true" --log_auto_initialize="true" 
--logbufsecs="0" --logging_level="INFO" --max_agent_ping_timeouts="5" 
--max_completed_frameworks="50" --max_completed_tasks_per_framework="1000" 
--max_unreachable_tasks_per_framework="1000" 
--min_allocatable_resources="cpus:0.01|mem:32" --port="5050" --quiet="false" 
--recovery_agent_removal_limit="100%" --registry="in_memory" 
--registry_fetch_timeout="1mins" --registry_gc_interval="15mins" 
--registry_max_agent_age="2weeks" --registry_max_agent_count="102400" 
--registry_store_timeout="100secs" --registry_strict="false" 
--require_agent_domain="false" --role_sorter="drf" --root_submissions="true" 
--version="false" --webui_dir="/usr/local/share/mesos/webui" 
--work_dir="/tmp/zLJ6wA/master" --zk_session_timeout="10secs"
I1106 20:21:29.369148 21440 master.cpp:508] Master only allowing authenticated 
frameworks to register
I1106 20:21:29.369156 21440 master.cpp:514] Master only allowing authenticated 
agents to register
I1106 20:21:29.369163 21440 master.cpp:520] Master only allowing authenticated 
HTTP frameworks to register
I1106 20:21:29.369168 21440 credentials.hpp:37] Loading credentials for 
authentication from '/tmp/zLJ6wA/credentials'
I1106 20:21:29.369271 21440 master.cpp:564] Using default 'crammd5' 
authenticator
I1106 20:21:29.369320 21440 http.cpp:1045] Creating default 'basic' HTTP 
authenticator for realm 'mesos-master-readonly'
I1106 20:21:29.369357 21440 http.cpp:1045] Creating default 'basic' HTTP 
authenticator for realm 'mesos-master-readwrite'
I1106 20:21:29.369383 21440 http.cpp:1045] Creating default 'basic' HTTP 
authenticator for realm 'mesos-master-scheduler'
I1106 20:21:29.369403 21440 master.cpp:643] Authorization enabled
I1106 20:21:29.369576 21438 hierarchical.cpp:177] Initialized hierarchical 
allocator process
I1106 20:21:29.369608 21438 whitelist_watcher.cpp:77] No whitelist given
I1106 20:21:29.370110 21440 master.cpp:2247] Elected as the leading master!
I1106 20:21:29.370126 21440 master.cpp:1727] Recovering from registrar
I1106 20:21:29.370163 21440 registrar.cpp:347] Recovering registrar
I1106 20:21:29.370302 21437 registrar.cpp:391] Successfully fetched the 
registry (0B) in 114944ns
I1106 20:21:29.370345 21437 registrar.cpp:495] Applied 1 operations in 6941ns; 
attempting to update the registry
I1106 20:21:29.370510 21442 registrar.cpp:552] Successfully updated the 
registry in 143104ns
I1106 20:21:29.370553 21442 registrar.cpp:424] Successfully recovered registrar
I1106 20:21:29.370631 21442 master.cpp:1840] Recovered 0 agents from the 
registry (172B); allowing 10mins for agents to re-register
I1106 20:21:29.370667 21438 hierarchical.cpp:215] Skipping recovery of 
hierarchical allocator: nothing to recover
I1106 20:21:29.372022 31384 isolator.cpp:136] Initialized the docker volume 
information root directory at '/run/mesos/isolators/docker/volume'
I1106 20:21:29.379148 31384 linux_launcher.cpp:145] Using 
/sys/fs/cgroup/freezer as the freezer hierarchy for the Linux launcher
sh: hadoop: command not found
I1106 20:21:29.462009 31384 fetcher.cpp:69] Skipping URI fetcher plugin 
'hadoop' as it could not be created: Failed to create HDFS client: Hadoop 
client is not available, exit status: 32512
I1106 20:21:29.462098 31384 registry_puller.cpp:129] Creating registry puller 
with docker registry 'https://registry-1.docker.io'
I1106 20:21:29.462872 31384 provisioner.cpp:299] Using default backend 'copy'
W1106 20:21:29.464048 31384 process.cpp:2745] Attempted to spawn already 
running process files@172.16.10.72:46670
I1106 20:21:29.464135 31384 cluster.cpp:460] Creating default 'local' authorizer
I1106 20:21:29.464778 21440 slave.cpp:261] Mesos agent started on 
(722)@172.16.10.72:46670
W1106 20:21:29.464897 31384 process.cpp:2745] Attempted to spawn already 
running process version@172.16.10.72:46670
I1106 20:21:29.464797 21440 slave.cpp:262] Flags at startup: --acls="" 
--appc_simple_discovery_uri_prefix="http://" 
--appc_store_dir="/tmp/DockerVolumeIsolatorTest_ROOT_INTERNET_CURL_CommandTaskRootfsWithAbsolutePathVolume_myahyo/store/appc"
 --authenticate_http_readonly="true" --authenticate_http_readwrite="false" 
--authenticatee="crammd5" --authentication_backoff_factor="1secs" 
--authentication_timeout_max="1mins" --authentication_timeout_min="5secs" 
--authorizer="local" --cgroups_cpu_enable_pids_and_tids_count="false" 
--cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup" 
--cgroups_limit_swap="false" --cgroups_root="mesos" 
--container_disk_watch_interval="15secs" --containerizers="mesos" 
--credential="/tmp/DockerVolumeIsolatorTest_ROOT_INTERNET_CURL_CommandTaskRootfsWithAbsolutePathVolume_myahyo/credential"
 --default_role="*" --disallow_sharing_agent_pid_namespace="false" 
--disk_watch_interval="1mins" --docker="docker" --docker_kill_orphans="true" 
--docker_registry="https://registry-1.docker.io" --docker_remove_delay="6hrs" 
--docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns" 
--docker_store_dir="/tmp/DockerVolumeIsolatorTest_ROOT_INTERNET_CURL_CommandTaskRootfsWithAbsolutePathVolume_myahyo/store/docker"
 --docker_volume_checkpoint_dir="/var/run/mesos/isolators/docker/volume" 
--enforce_container_disk_quota="false" --executor_registration_timeout="1mins" 
--executor_reregistration_timeout="2secs" 
--executor_shutdown_grace_period="5secs" 
--fetcher_cache_dir="/tmp/DockerVolumeIsolatorTest_ROOT_INTERNET_CURL_CommandTaskRootfsWithAbsolutePathVolume_myahyo/fetch"
 --fetcher_cache_size="2GB" --fetcher_stall_timeout="1mins" 
--frameworks_home="" --gc_delay="1weeks" --gc_disk_headroom="0.1" 
--hadoop_home="" --help="false" --hostname_lookup="true" 
--http_command_executor="false" 
--http_credentials="/tmp/DockerVolumeIsolatorTest_ROOT_INTERNET_CURL_CommandTaskRootfsWithAbsolutePathVolume_myahyo/http_credentials"
 --http_heartbeat_interval="30secs" --image_providers="docker" 
--initialize_driver_logging="true" 
--isolation="docker/volume,docker/runtime,filesystem/linux" --launcher="linux" 
--launcher_dir="/home/centos/workspace/mesos/Mesos_CI-build/FLAG/Plain/label/mesos-ec2-centos-7/mesos/build/src"
 --logbufsecs="0" --logging_level="INFO" 
--max_completed_executors_per_framework="150" 
--oversubscribed_resources_interval="15secs" --perf_duration="10secs" 
--perf_interval="1mins" --port="5051" --qos_correction_interval_min="0ns" 
--quiet="false" --reconfiguration_policy="equal" --recover="reconnect" 
--recovery_timeout="15mins" --registration_backoff_factor="10ms" 
--resources="cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]" 
--revocable_cpu_low_priority="true" 
--runtime_dir="/tmp/DockerVolumeIsolatorTest_ROOT_INTERNET_CURL_CommandTaskRootfsWithAbsolutePathVolume_myahyo"
 --sandbox_directory="/mnt/mesos/sandbox" --strict="true" --switch_user="true" 
--systemd_enable_support="true" 
--systemd_runtime_directory="/run/systemd/system" --version="false" 
--work_dir="/tmp/DockerVolumeIsolatorTest_ROOT_INTERNET_CURL_CommandTaskRootfsWithAbsolutePathVolume_BZzGxZ"
 --zk_session_timeout="10secs"
I1106 20:21:29.465073 21440 credentials.hpp:86] Loading credential for 
authentication from 
'/tmp/DockerVolumeIsolatorTest_ROOT_INTERNET_CURL_CommandTaskRootfsWithAbsolutePathVolume_myahyo/credential'
I1106 20:21:29.465206 21440 slave.cpp:294] Agent using credential for: 
test-principal
I1106 20:21:29.465222 21440 credentials.hpp:37] Loading credentials for 
authentication from 
'/tmp/DockerVolumeIsolatorTest_ROOT_INTERNET_CURL_CommandTaskRootfsWithAbsolutePathVolume_myahyo/http_credentials'
I1106 20:21:29.465452 21440 http.cpp:1045] Creating default 'basic' HTTP 
authenticator for realm 'mesos-agent-readonly'
I1106 20:21:29.466080 21440 slave.cpp:611] Agent resources: 
[{"name":"cpus","scalar":{"value":2.0},"type":"SCALAR"},{"name":"mem","scalar":{"value":1024.0},"type":"SCALAR"},{"name":"disk","scalar":{"value":1024.0},"type":"SCALAR"},{"name":"ports","ranges":{"range":[{"begin":31000,"end":32000}]},"type":"RANGES"}]
I1106 20:21:29.466156 21440 slave.cpp:619] Agent attributes: [  ]
I1106 20:21:29.466163 21440 slave.cpp:628] Agent hostname: 
ip-172-16-10-72.ec2.internal
I1106 20:21:29.466503 21440 task_status_update_manager.cpp:181] Pausing sending 
task status updates
I1106 20:21:29.466538 31384 sched.cpp:232] Version: 1.5.2
I1106 20:21:29.466694 21440 state.cpp:66] Recovering state from 
'/tmp/DockerVolumeIsolatorTest_ROOT_INTERNET_CURL_CommandTaskRootfsWithAbsolutePathVolume_BZzGxZ/meta'
I1106 20:21:29.466850 21440 task_status_update_manager.cpp:207] Recovering task 
status update manager
I1106 20:21:29.466890 21440 sched.cpp:336] New master detected at 
master@172.16.10.72:46670
I1106 20:21:29.466915 21440 sched.cpp:401] Authenticating with master 
master@172.16.10.72:46670
I1106 20:21:29.466925 21440 sched.cpp:408] Using default CRAM-MD5 authenticatee
I1106 20:21:29.467027 21440 authenticatee.cpp:121] Creating new client SASL 
connection
I1106 20:21:29.467097 21440 containerizer.cpp:674] Recovering containerizer
I1106 20:21:29.467175 21440 master.cpp:9059] Authenticating 
scheduler-9e5d5771-5ec9-41be-8de5-d7f965909445@172.16.10.72:46670
I1106 20:21:29.467571 21441 authenticator.cpp:414] Starting authentication 
session for crammd5-authenticatee(1364)@172.16.10.72:46670
I1106 20:21:29.467640 21441 authenticator.cpp:98] Creating new server SASL 
connection
I1106 20:21:29.467694 21441 authenticatee.cpp:213] Received SASL authentication 
mechanisms: CRAM-MD5
I1106 20:21:29.467706 21441 authenticatee.cpp:239] Attempting to authenticate 
with mechanism 'CRAM-MD5'
I1106 20:21:29.467731 21441 authenticator.cpp:204] Received SASL authentication 
start
I1106 20:21:29.467764 21441 authenticator.cpp:326] Authentication requires more 
steps
I1106 20:21:29.467797 21441 authenticatee.cpp:259] Received SASL authentication 
step
I1106 20:21:29.467830 21441 authenticator.cpp:232] Received SASL authentication 
step
I1106 20:21:29.467844 21441 auxprop.cpp:109] Request to lookup properties for 
user: 'test-principal' realm: 'ip-172-16-10-72.ec2.internal' server FQDN: 
'ip-172-16-10-72.ec2.internal' SASL_AUXPROP_VERIFY_AGAINST_HASH: false 
SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false 
I1106 20:21:29.467852 21441 auxprop.cpp:181] Looking up auxiliary property 
'*userPassword'
I1106 20:21:29.467864 21441 auxprop.cpp:181] Looking up auxiliary property 
'*cmusaslsecretCRAM-MD5'
I1106 20:21:29.467873 21441 auxprop.cpp:109] Request to lookup properties for 
user: 'test-principal' realm: 'ip-172-16-10-72.ec2.internal' server FQDN: 
'ip-172-16-10-72.ec2.internal' SASL_AUXPROP_VERIFY_AGAINST_HASH: false 
SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true 
I1106 20:21:29.467880 21441 auxprop.cpp:131] Skipping auxiliary property 
'*userPassword' since SASL_AUXPROP_AUTHZID == true
I1106 20:21:29.467886 21441 auxprop.cpp:131] Skipping auxiliary property 
'*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true
I1106 20:21:29.467900 21441 authenticator.cpp:318] Authentication success
I1106 20:21:29.467931 21441 authenticatee.cpp:299] Authentication success
I1106 20:21:29.467963 21441 master.cpp:9091] Successfully authenticated 
principal 'test-principal' at 
scheduler-9e5d5771-5ec9-41be-8de5-d7f965909445@172.16.10.72:46670
I1106 20:21:29.467985 21441 authenticator.cpp:432] Authentication session 
cleanup for crammd5-authenticatee(1364)@172.16.10.72:46670
I1106 20:21:29.468037 21441 sched.cpp:514] Successfully authenticated with 
master master@172.16.10.72:46670
I1106 20:21:29.468047 21441 sched.cpp:819] Sending SUBSCRIBE call to 
master@172.16.10.72:46670
I1106 20:21:29.468084 21441 sched.cpp:852] Will retry registration in 
1.563145862secs if necessary
I1106 20:21:29.468178 21441 master.cpp:3002] Received SUBSCRIBE call for 
framework 'default' at 
scheduler-9e5d5771-5ec9-41be-8de5-d7f965909445@172.16.10.72:46670
I1106 20:21:29.468199 21441 master.cpp:2319] Authorizing framework principal 
'test-principal' to receive offers for roles '{ * }'
I1106 20:21:29.468309 21440 master.cpp:3082] Subscribing framework default with 
checkpointing disabled and capabilities [ MULTI_ROLE, RESERVATION_REFINEMENT ]
I1106 20:21:29.468355 21440 master.cpp:9289] Adding framework 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000 (default) at 
scheduler-9e5d5771-5ec9-41be-8de5-d7f965909445@172.16.10.72:46670 with roles {  
} suppressed
I1106 20:21:29.468565 21440 hierarchical.cpp:299] Added framework 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000
I1106 20:21:29.468611 21440 hierarchical.cpp:1520] Performed allocation for 0 
agents in 12122ns
I1106 20:21:29.468690 21440 sched.cpp:746] Framework registered with 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000
I1106 20:21:29.468716 21440 sched.cpp:760] Scheduler::registered took 15889ns
I1106 20:21:29.468971 21441 metadata_manager.cpp:254] No images to load from 
disk. Docker provisioner image storage path 
'/tmp/DockerVolumeIsolatorTest_ROOT_INTERNET_CURL_CommandTaskRootfsWithAbsolutePathVolume_myahyo/store/docker/storedImages'
 does not exist
I1106 20:21:29.469045 21438 provisioner.cpp:495] Provisioner recovery complete
I1106 20:21:29.469158 21438 slave.cpp:7330] Finished recovery
I1106 20:21:29.469563 21437 slave.cpp:1262] New master detected at 
master@172.16.10.72:46670
I1106 20:21:29.469568 21442 task_status_update_manager.cpp:181] Pausing sending 
task status updates
I1106 20:21:29.469611 21437 slave.cpp:1327] Detecting new master
I1106 20:21:29.470882 21440 slave.cpp:1354] Authenticating with master 
master@172.16.10.72:46670
I1106 20:21:29.470909 21440 slave.cpp:1363] Using default CRAM-MD5 authenticatee
I1106 20:21:29.470970 21440 authenticatee.cpp:121] Creating new client SASL 
connection
I1106 20:21:29.471051 21437 master.cpp:9059] Authenticating 
slave(722)@172.16.10.72:46670
I1106 20:21:29.471103 21437 authenticator.cpp:414] Starting authentication 
session for crammd5-authenticatee(1365)@172.16.10.72:46670
I1106 20:21:29.471168 21438 authenticator.cpp:98] Creating new server SASL 
connection
I1106 20:21:29.471228 21438 authenticatee.cpp:213] Received SASL authentication 
mechanisms: CRAM-MD5
I1106 20:21:29.471246 21438 authenticatee.cpp:239] Attempting to authenticate 
with mechanism 'CRAM-MD5'
I1106 20:21:29.471276 21438 authenticator.cpp:204] Received SASL authentication 
start
I1106 20:21:29.471300 21438 authenticator.cpp:326] Authentication requires more 
steps
I1106 20:21:29.471330 21438 authenticatee.cpp:259] Received SASL authentication 
step
I1106 20:21:29.471365 21438 authenticator.cpp:232] Received SASL authentication 
step
I1106 20:21:29.471379 21438 auxprop.cpp:109] Request to lookup properties for 
user: 'test-principal' realm: 'ip-172-16-10-72.ec2.internal' server FQDN: 
'ip-172-16-10-72.ec2.internal' SASL_AUXPROP_VERIFY_AGAINST_HASH: false 
SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false 
I1106 20:21:29.471384 21438 auxprop.cpp:181] Looking up auxiliary property 
'*userPassword'
I1106 20:21:29.471390 21438 auxprop.cpp:181] Looking up auxiliary property 
'*cmusaslsecretCRAM-MD5'
I1106 20:21:29.471398 21438 auxprop.cpp:109] Request to lookup properties for 
user: 'test-principal' realm: 'ip-172-16-10-72.ec2.internal' server FQDN: 
'ip-172-16-10-72.ec2.internal' SASL_AUXPROP_VERIFY_AGAINST_HASH: false 
SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true 
I1106 20:21:29.471402 21438 auxprop.cpp:131] Skipping auxiliary property 
'*userPassword' since SASL_AUXPROP_AUTHZID == true
I1106 20:21:29.471406 21438 auxprop.cpp:131] Skipping auxiliary property 
'*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true
I1106 20:21:29.471415 21438 authenticator.cpp:318] Authentication success
I1106 20:21:29.471455 21440 authenticatee.cpp:299] Authentication success
I1106 20:21:29.471542 21440 slave.cpp:1454] Successfully authenticated with 
master master@172.16.10.72:46670
I1106 20:21:29.471633 21440 slave.cpp:1884] Will retry registration in 
13.973559ms if necessary
I1106 20:21:29.471829 21444 authenticator.cpp:432] Authentication session 
cleanup for crammd5-authenticatee(1365)@172.16.10.72:46670
I1106 20:21:29.471904 21438 master.cpp:9091] Successfully authenticated 
principal 'test-principal' at slave(722)@172.16.10.72:46670
I1106 20:21:29.471966 21438 master.cpp:6146] Received register agent message 
from slave(722)@172.16.10.72:46670 (ip-172-16-10-72.ec2.internal)
I1106 20:21:29.471999 21438 master.cpp:3911] Authorizing agent with principal 
'test-principal'
I1106 20:21:29.472064 21438 master.cpp:6204] Authorized registration of agent 
at slave(722)@172.16.10.72:46670 (ip-172-16-10-72.ec2.internal)
I1106 20:21:29.472090 21438 master.cpp:6315] Registering agent at 
slave(722)@172.16.10.72:46670 (ip-172-16-10-72.ec2.internal) with id 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-S0
I1106 20:21:29.472190 21438 registrar.cpp:495] Applied 1 operations in 32728ns; 
attempting to update the registry
I1106 20:21:29.472384 21443 registrar.cpp:552] Successfully updated the 
registry in 129792ns
I1106 20:21:29.472451 21443 master.cpp:6363] Admitted agent 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-S0 at slave(722)@172.16.10.72:46670 
(ip-172-16-10-72.ec2.internal)
I1106 20:21:29.472618 21443 master.cpp:6412] Registered agent 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-S0 at slave(722)@172.16.10.72:46670 
(ip-172-16-10-72.ec2.internal) with cpus:2; mem:1024; disk:1024; 
ports:[31000-32000]
I1106 20:21:29.472676 21443 slave.cpp:1486] Registered with master 
master@172.16.10.72:46670; given agent ID 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-S0
I1106 20:21:29.472703 21438 hierarchical.cpp:576] Added agent 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-S0 (ip-172-16-10-72.ec2.internal) with 
cpus:2; mem:1024; disk:1024; ports:[31000-32000] (allocated: {})
I1106 20:21:29.472913 21443 slave.cpp:1506] Checkpointing SlaveInfo to 
'/tmp/DockerVolumeIsolatorTest_ROOT_INTERNET_CURL_CommandTaskRootfsWithAbsolutePathVolume_BZzGxZ/meta/slaves/0d219c14-565e-46dd-b5c2-56bd9e97e4d1-S0/slave.info'
I1106 20:21:29.472950 21438 hierarchical.cpp:1520] Performed allocation for 1 
agents in 197170ns
I1106 20:21:29.472975 21438 task_status_update_manager.cpp:188] Resuming 
sending task status updates
I1106 20:21:29.473091 21438 master.cpp:8893] Sending 1 offers to framework 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000 (default) at 
scheduler-9e5d5771-5ec9-41be-8de5-d7f965909445@172.16.10.72:46670
I1106 20:21:29.473145 21443 slave.cpp:1553] Forwarding agent update 
{"operations":{},"resource_version_uuid":{"value":"KMeWAvS4Tq2yo0WSk82A2A=="},"slave_id":{"value":"0d219c14-565e-46dd-b5c2-56bd9e97e4d1-S0"},"update_oversubscribed_resources":true}
I1106 20:21:29.473238 21438 sched.cpp:916] Scheduler::resourceOffers took 
30982ns
I1106 20:21:29.473285 21443 master.cpp:7339] Received update of agent 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-S0 at slave(722)@172.16.10.72:46670 
(ip-172-16-10-72.ec2.internal) with total oversubscribed resources {}
I1106 20:21:29.473333 21443 master.cpp:7433] Ignoring update on agent 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-S0 at slave(722)@172.16.10.72:46670 
(ip-172-16-10-72.ec2.internal) as it reports no changes
I1106 20:21:29.473968 21438 master.cpp:10830] Removing offer 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-O0
I1106 20:21:29.474073 21438 master.cpp:4330] Processing ACCEPT call for offers: 
[ 0d219c14-565e-46dd-b5c2-56bd9e97e4d1-O0 ] on agent 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-S0 at slave(722)@172.16.10.72:46670 
(ip-172-16-10-72.ec2.internal) for framework 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000 (default) at 
scheduler-9e5d5771-5ec9-41be-8de5-d7f965909445@172.16.10.72:46670
I1106 20:21:29.474117 21438 master.cpp:3638] Authorizing framework principal 
'test-principal' to launch task 366bc261-1790-4ef0-8e14-4f108a7832f7
I1106 20:21:29.474678 21438 master.cpp:11588] Adding task 
366bc261-1790-4ef0-8e14-4f108a7832f7 with resources cpus(allocated: *):2; 
mem(allocated: *):1024; disk(allocated: *):1024; ports(allocated: 
*):[31000-32000] on agent 0d219c14-565e-46dd-b5c2-56bd9e97e4d1-S0 at 
slave(722)@172.16.10.72:46670 (ip-172-16-10-72.ec2.internal)
I1106 20:21:29.474783 21438 master.cpp:5090] Launching task 
366bc261-1790-4ef0-8e14-4f108a7832f7 of framework 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000 (default) at 
scheduler-9e5d5771-5ec9-41be-8de5-d7f965909445@172.16.10.72:46670 with 
resources 
[{"allocation_info":{"role":"*"},"name":"cpus","scalar":{"value":2.0},"type":"SCALAR"},{"allocation_info":{"role":"*"},"name":"mem","scalar":{"value":1024.0},"type":"SCALAR"},{"allocation_info":{"role":"*"},"name":"disk","scalar":{"value":1024.0},"type":"SCALAR"},{"allocation_info":{"role":"*"},"name":"ports","ranges":{"range":[{"begin":31000,"end":32000}]},"type":"RANGES"}]
 on agent 0d219c14-565e-46dd-b5c2-56bd9e97e4d1-S0 at 
slave(722)@172.16.10.72:46670 (ip-172-16-10-72.ec2.internal) on  new executor
I1106 20:21:29.475131 21438 slave.cpp:2021] Got assigned task 
'366bc261-1790-4ef0-8e14-4f108a7832f7' for framework 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000
I1106 20:21:29.475420 21438 slave.cpp:2395] Authorizing task 
'366bc261-1790-4ef0-8e14-4f108a7832f7' for framework 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000
I1106 20:21:29.475455 21438 slave.cpp:8447] Authorizing framework principal 
'test-principal' to launch task 366bc261-1790-4ef0-8e14-4f108a7832f7
I1106 20:21:29.475860 21438 slave.cpp:2869] Launching task 
'366bc261-1790-4ef0-8e14-4f108a7832f7' for framework 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000
I1106 20:21:29.476258 21438 paths.cpp:756] Trying to chown 
'/tmp/DockerVolumeIsolatorTest_ROOT_INTERNET_CURL_CommandTaskRootfsWithAbsolutePathVolume_BZzGxZ/slaves/0d219c14-565e-46dd-b5c2-56bd9e97e4d1-S0/frameworks/0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000/executors/366bc261-1790-4ef0-8e14-4f108a7832f7/runs/456a2d9d-5bdd-49d3-af3c-c0b397b96c11'
 to user 'root'
I1106 20:21:29.476402 21438 slave.cpp:8938] Launching executor 
'366bc261-1790-4ef0-8e14-4f108a7832f7' of framework 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000 with resources 
[{"allocation_info":{"role":"*"},"name":"cpus","scalar":{"value":0.1},"type":"SCALAR"},{"allocation_info":{"role":"*"},"name":"mem","scalar":{"value":32.0},"type":"SCALAR"}]
 in work directory 
'/tmp/DockerVolumeIsolatorTest_ROOT_INTERNET_CURL_CommandTaskRootfsWithAbsolutePathVolume_BZzGxZ/slaves/0d219c14-565e-46dd-b5c2-56bd9e97e4d1-S0/frameworks/0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000/executors/366bc261-1790-4ef0-8e14-4f108a7832f7/runs/456a2d9d-5bdd-49d3-af3c-c0b397b96c11'
I1106 20:21:29.476641 21438 slave.cpp:3565] Launching container 
456a2d9d-5bdd-49d3-af3c-c0b397b96c11 for executor 
'366bc261-1790-4ef0-8e14-4f108a7832f7' of framework 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000
I1106 20:21:29.476783 21438 slave.cpp:3085] Queued task 
'366bc261-1790-4ef0-8e14-4f108a7832f7' for executor 
'366bc261-1790-4ef0-8e14-4f108a7832f7' of framework 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000
I1106 20:21:29.476824 21438 slave.cpp:996] Successfully attached 
'/tmp/DockerVolumeIsolatorTest_ROOT_INTERNET_CURL_CommandTaskRootfsWithAbsolutePathVolume_BZzGxZ/slaves/0d219c14-565e-46dd-b5c2-56bd9e97e4d1-S0/frameworks/0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000/executors/366bc261-1790-4ef0-8e14-4f108a7832f7/runs/456a2d9d-5bdd-49d3-af3c-c0b397b96c11'
 to virtual path 
'/tmp/DockerVolumeIsolatorTest_ROOT_INTERNET_CURL_CommandTaskRootfsWithAbsolutePathVolume_BZzGxZ/slaves/0d219c14-565e-46dd-b5c2-56bd9e97e4d1-S0/frameworks/0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000/executors/366bc261-1790-4ef0-8e14-4f108a7832f7/runs/latest'
I1106 20:21:29.476842 21438 slave.cpp:996] Successfully attached 
'/tmp/DockerVolumeIsolatorTest_ROOT_INTERNET_CURL_CommandTaskRootfsWithAbsolutePathVolume_BZzGxZ/slaves/0d219c14-565e-46dd-b5c2-56bd9e97e4d1-S0/frameworks/0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000/executors/366bc261-1790-4ef0-8e14-4f108a7832f7/runs/456a2d9d-5bdd-49d3-af3c-c0b397b96c11'
 to virtual path 
'/frameworks/0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000/executors/366bc261-1790-4ef0-8e14-4f108a7832f7/runs/latest'
I1106 20:21:29.476856 21438 slave.cpp:996] Successfully attached 
'/tmp/DockerVolumeIsolatorTest_ROOT_INTERNET_CURL_CommandTaskRootfsWithAbsolutePathVolume_BZzGxZ/slaves/0d219c14-565e-46dd-b5c2-56bd9e97e4d1-S0/frameworks/0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000/executors/366bc261-1790-4ef0-8e14-4f108a7832f7/runs/456a2d9d-5bdd-49d3-af3c-c0b397b96c11'
 to virtual path 
'/tmp/DockerVolumeIsolatorTest_ROOT_INTERNET_CURL_CommandTaskRootfsWithAbsolutePathVolume_BZzGxZ/slaves/0d219c14-565e-46dd-b5c2-56bd9e97e4d1-S0/frameworks/0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000/executors/366bc261-1790-4ef0-8e14-4f108a7832f7/runs/456a2d9d-5bdd-49d3-af3c-c0b397b96c11'
I1106 20:21:29.476943 21438 containerizer.cpp:1209] Starting container 
456a2d9d-5bdd-49d3-af3c-c0b397b96c11
I1106 20:21:29.477269 21439 metadata_manager.cpp:179] Looking for image 'alpine'
I1106 20:21:29.477453 21437 registry_puller.cpp:287] Pulling image 
'library/alpine' from 
'docker-manifest://registry-1.docker.io:443library/alpine?latest#https' to 
'/tmp/DockerVolumeIsolatorTest_ROOT_INTERNET_CURL_CommandTaskRootfsWithAbsolutePathVolume_myahyo/store/docker/staging/TgBElC'
I1106 20:21:30.169728 21438 registry_puller.cpp:315] The manifest for image 
'library/alpine' is '{
   "schemaVersion": 1,
   "name": "library/alpine",
   "tag": "latest",
   "architecture": "amd64",
   "fsLayers": [
      {
         "blobSum": 
"sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
      },
      {
         "blobSum": 
"sha256:4fe2ade4980c2dda4fc95858ebb981489baec8c1e4bd282ab1c3560be8ff9bde"
      }
   ],
   "history": [
      {
         "v1Compatibility": 
"{\"architecture\":\"amd64\",\"config\":{\"Hostname\":\"\",\"Domainname\":\"\",\"User\":\"\",\"AttachStdin\":false,\"AttachStdout\":false,\"AttachStderr\":false,\"Tty\":false,\"OpenStdin\":false,\"StdinOnce\":false,\"Env\":[\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"],\"Cmd\":[\"/bin/sh\"],\"ArgsEscaped\":true,\"Image\":\"sha256:836dc9b148a584f3f42ac645c7d2bfa11423c1a07a054446314e11259f0e59b7\",\"Volumes\":null,\"WorkingDir\":\"\",\"Entrypoint\":null,\"OnBuild\":null,\"Labels\":null},\"container\":\"cdce86c23e71b41aab0cdb15c010945deadaf59b296f216b0f9f43c331730049\",\"container_config\":{\"Hostname\":\"cdce86c23e71\",\"Domainname\":\"\",\"User\":\"\",\"AttachStdin\":false,\"AttachStdout\":false,\"AttachStderr\":false,\"Tty\":false,\"OpenStdin\":false,\"StdinOnce\":false,\"Env\":[\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"],\"Cmd\":[\"/bin/sh\",\"-c\",\"#(nop)
 \",\"CMD 
[\\\"/bin/sh\\\"]\"],\"ArgsEscaped\":true,\"Image\":\"sha256:836dc9b148a584f3f42ac645c7d2bfa11423c1a07a054446314e11259f0e59b7\",\"Volumes\":null,\"WorkingDir\":\"\",\"Entrypoint\":null,\"OnBuild\":null,\"Labels\":{}},\"created\":\"2018-09-11T22:19:50.322783064Z\",\"docker_version\":\"17.06.2-ce\",\"id\":\"f33084b4b6032f9c77744a165174368fa4aed696ac301bfe504fa5533121690f\",\"os\":\"linux\",\"parent\":\"70034ea7817b10a77d60705f61423abcc61b8bb299571494ddbfd4f1f83f46ee\",\"throwaway\":true}"
      },
      {
         "v1Compatibility": 
"{\"id\":\"70034ea7817b10a77d60705f61423abcc61b8bb299571494ddbfd4f1f83f46ee\",\"created\":\"2018-09-11T22:19:50.165674013Z\",\"container_config\":{\"Cmd\":[\"/bin/sh
 -c #(nop) ADD 
file:25c10b1d1b41d46a1827ad0b0d2389c24df6d31430005ff4e9a2d84ea23ebd42 in / 
\"]}}"
      }
   ],
   "signatures": [
      {
         "header": {
            "jwk": {
               "crv": "P-256",
               "kid": 
"NPQV:FHTD:WQEA:V2XR:RPJI:6DK5:ZC7Y:SEGL:O5RI:VEXF:MIAS:FKEC",
               "kty": "EC",
               "x": "gp1PmmoywFxTu9y79T_P0y0wzP-q4WgJ9A7bo5n4mzI",
               "y": "l-imWeTzKtRwo_tFUzYIRI2bGbkWgDa395N0G6czsdU"
            },
            "alg": "ES256"
         },
         "signature": 
"5ueNCDAio7idgSFrrOMAkHt2LMBdPlO2m_MCdnUGa4tpwetx3ywX4SmEY8vMVQMf95NoDQCsBsg017HJ7YOkIw",
         "protected": 
"eyJmb3JtYXRMZW5ndGgiOjIxNDEsImZvcm1hdFRhaWwiOiJDbjAiLCJ0aW1lIjoiMjAxOC0xMS0wNlQyMDoyMTozMFoifQ"
      }
   ]
}'
I1106 20:21:30.169798 21438 registry_puller.cpp:461] Fetching blob 
'sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4' for 
layer 'f33084b4b6032f9c77744a165174368fa4aed696ac301bfe504fa5533121690f' of 
image 'library/alpine'
I1106 20:21:30.169818 21438 registry_puller.cpp:461] Fetching blob 
'sha256:4fe2ade4980c2dda4fc95858ebb981489baec8c1e4bd282ab1c3560be8ff9bde' for 
layer '70034ea7817b10a77d60705f61423abcc61b8bb299571494ddbfd4f1f83f46ee' of 
image 'library/alpine'
I1106 20:21:30.370847 21438 hierarchical.cpp:1520] Performed allocation for 1 
agents in 85823ns
I1106 20:21:31.371953 21443 hierarchical.cpp:1520] Performed allocation for 1 
agents in 68707ns
I1106 20:21:32.372694 21442 hierarchical.cpp:1520] Performed allocation for 1 
agents in 71388ns
I1106 20:21:33.373468 21443 hierarchical.cpp:1520] Performed allocation for 1 
agents in 62842ns
I1106 20:21:34.374572 21437 hierarchical.cpp:1520] Performed allocation for 1 
agents in 61506ns
I1106 20:21:35.375443 21438 hierarchical.cpp:1520] Performed allocation for 1 
agents in 65410ns
I1106 20:21:36.376570 21439 hierarchical.cpp:1520] Performed allocation for 1 
agents in 68584ns
I1106 20:21:37.377621 21441 hierarchical.cpp:1520] Performed allocation for 1 
agents in 71918ns
I1106 20:21:38.378931 21440 hierarchical.cpp:1520] Performed allocation for 1 
agents in 63258ns
I1106 20:21:39.379402 21441 hierarchical.cpp:1520] Performed allocation for 1 
agents in 62683ns
I1106 20:21:40.380681 21440 hierarchical.cpp:1520] Performed allocation for 1 
agents in 62452ns
I1106 20:21:41.381646 21438 hierarchical.cpp:1520] Performed allocation for 1 
agents in 62135ns
I1106 20:21:42.382562 21439 hierarchical.cpp:1520] Performed allocation for 1 
agents in 68173ns
I1106 20:21:43.383781 21442 hierarchical.cpp:1520] Performed allocation for 1 
agents in 61001ns
I1106 20:21:44.384835 21444 hierarchical.cpp:1520] Performed allocation for 1 
agents in 70792ns
../../src/tests/containerizer/docker_volume_isolator_tests.cpp:946: Failure
Failed to wait 15secs for statusStarting
I1106 20:21:44.474404 21440 master.cpp:1457] Framework 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000 (default) at 
scheduler-9e5d5771-5ec9-41be-8de5-d7f965909445@172.16.10.72:46670 disconnected
I1106 20:21:44.474436 21440 master.cpp:3372] Deactivating framework 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000 (default) at 
scheduler-9e5d5771-5ec9-41be-8de5-d7f965909445@172.16.10.72:46670
../../src/tests/containerizer/docker_volume_isolator_tests.cpp:939: Failure
Actual function call count doesn't match EXPECT_CALL(sched, 
statusUpdate(&driver, _))...
         Expected: to be called 3 times
           Actual: never called - unsatisfied and active
I1106 20:21:44.474457 21440 master.cpp:3349] Disconnecting framework 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000 (default) at 
scheduler-9e5d5771-5ec9-41be-8de5-d7f965909445@172.16.10.72:46670
I1106 20:21:44.474491 21440 master.cpp:1472] Giving framework 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000 (default) at 
scheduler-9e5d5771-5ec9-41be-8de5-d7f965909445@172.16.10.72:46670 0ns to 
failover
I1106 20:21:44.474586 31384 slave.cpp:921] Agent terminating
I1106 20:21:44.474604 31384 slave.cpp:3952] Asked to shut down framework 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000 by @0.0.0.0:0
I1106 20:21:44.474617 31384 slave.cpp:3977] Shutting down framework 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000
I1106 20:21:44.474630 31384 slave.cpp:6684] Shutting down executor 
'366bc261-1790-4ef0-8e14-4f108a7832f7' of framework 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000
W1106 20:21:44.474640 31384 slave.hpp:945] Unable to send event to executor 
'366bc261-1790-4ef0-8e14-4f108a7832f7' of framework 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000: unknown connection type
I1106 20:21:44.474757 21437 hierarchical.cpp:407] Deactivated framework 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000
I1106 20:21:44.474783 21437 master.cpp:1343] Agent 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-S0 at slave(722)@172.16.10.72:46670 
(ip-172-16-10-72.ec2.internal) disconnected
I1106 20:21:44.474797 21437 master.cpp:3409] Disconnecting agent 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-S0 at slave(722)@172.16.10.72:46670 
(ip-172-16-10-72.ec2.internal)
I1106 20:21:44.474810 21437 master.cpp:3428] Deactivating agent 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-S0 at slave(722)@172.16.10.72:46670 
(ip-172-16-10-72.ec2.internal)
I1106 20:21:44.474831 21437 master.cpp:1367] Removing framework 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000 (default) at 
scheduler-9e5d5771-5ec9-41be-8de5-d7f965909445@172.16.10.72:46670 from 
disconnected agent 0d219c14-565e-46dd-b5c2-56bd9e97e4d1-S0 at 
slave(722)@172.16.10.72:46670 (ip-172-16-10-72.ec2.internal) because the 
framework is not checkpointing
I1106 20:21:44.474845 21437 master.cpp:9766] Removing framework 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000 (default) at 
scheduler-9e5d5771-5ec9-41be-8de5-d7f965909445@172.16.10.72:46670 from agent 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-S0 at slave(722)@172.16.10.72:46670 
(ip-172-16-10-72.ec2.internal)
I1106 20:21:44.474855 21438 hierarchical.cpp:769] Agent 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-S0 deactivated
I1106 20:21:44.474879 21437 master.cpp:10313] Updating the state of task 
366bc261-1790-4ef0-8e14-4f108a7832f7 of framework 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000 (latest state: TASK_LOST, status 
update state: TASK_LOST)
I1106 20:21:44.475023 21437 master.cpp:10417] Removing task 
366bc261-1790-4ef0-8e14-4f108a7832f7 with resources cpus(allocated: *):2; 
mem(allocated: *):1024; disk(allocated: *):1024; ports(allocated: 
*):[31000-32000] of framework 0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000 on 
agent 0d219c14-565e-46dd-b5c2-56bd9e97e4d1-S0 at slave(722)@172.16.10.72:46670 
(ip-172-16-10-72.ec2.internal)
I1106 20:21:44.475090 21444 hierarchical.cpp:1195] Recovered cpus(allocated: 
*):2; mem(allocated: *):1024; disk(allocated: *):1024; ports(allocated: 
*):[31000-32000] (total: cpus:2; mem:1024; disk:1024; ports:[31000-32000], 
allocated: {}) on agent 0d219c14-565e-46dd-b5c2-56bd9e97e4d1-S0 from framework 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000
I1106 20:21:44.475215 21437 containerizer.cpp:2375] Destroying container 
456a2d9d-5bdd-49d3-af3c-c0b397b96c11 in PROVISIONING state
I1106 20:21:44.475230 21437 containerizer.cpp:2993] Transitioning the state of 
container 456a2d9d-5bdd-49d3-af3c-c0b397b96c11 from PROVISIONING to DESTROYING
I1106 20:21:44.475272 21437 containerizer.cpp:2437] Waiting for the provisioner 
to complete provisioning before destroying container 
456a2d9d-5bdd-49d3-af3c-c0b397b96c11
I1106 20:21:44.475754 21439 master.cpp:8708] Framework failover timeout, 
removing framework 0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000 (default) at 
scheduler-9e5d5771-5ec9-41be-8de5-d7f965909445@172.16.10.72:46670
I1106 20:21:44.475780 21439 master.cpp:9590] Removing framework 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000 (default) at 
scheduler-9e5d5771-5ec9-41be-8de5-d7f965909445@172.16.10.72:46670
I1106 20:21:44.475926 21444 hierarchical.cpp:346] Removed framework 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-0000
I1106 20:21:45.385993 21437 hierarchical.cpp:1520] Performed allocation for 1 
agents in 29614ns
I1106 20:21:46.387064 21439 hierarchical.cpp:1520] Performed allocation for 1 
agents in 28678ns
I1106 20:21:47.387261 21438 hierarchical.cpp:1520] Performed allocation for 1 
agents in 28760ns
I1106 20:21:47.553958 21440 registry_puller.cpp:395] Extracting layer tar ball 
'/tmp/DockerVolumeIsolatorTest_ROOT_INTERNET_CURL_CommandTaskRootfsWithAbsolutePathVolume_myahyo/store/docker/staging/TgBElC/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
 to rootfs 
'/tmp/DockerVolumeIsolatorTest_ROOT_INTERNET_CURL_CommandTaskRootfsWithAbsolutePathVolume_myahyo/store/docker/staging/TgBElC/f33084b4b6032f9c77744a165174368fa4aed696ac301bfe504fa5533121690f/rootfs'
I1106 20:21:47.557152 21440 registry_puller.cpp:395] Extracting layer tar ball 
'/tmp/DockerVolumeIsolatorTest_ROOT_INTERNET_CURL_CommandTaskRootfsWithAbsolutePathVolume_myahyo/store/docker/staging/TgBElC/sha256:4fe2ade4980c2dda4fc95858ebb981489baec8c1e4bd282ab1c3560be8ff9bde
 to rootfs 
'/tmp/DockerVolumeIsolatorTest_ROOT_INTERNET_CURL_CommandTaskRootfsWithAbsolutePathVolume_myahyo/store/docker/staging/TgBElC/70034ea7817b10a77d60705f61423abcc61b8bb299571494ddbfd4f1f83f46ee/rootfs'
I1106 20:21:47.656081 21437 metadata_manager.cpp:167] Successfully cached image 
'alpine'
I1106 20:21:47.656532 21441 provisioner.cpp:546] Provisioning image rootfs 
'/tmp/DockerVolumeIsolatorTest_ROOT_INTERNET_CURL_CommandTaskRootfsWithAbsolutePathVolume_BZzGxZ/provisioner/containers/456a2d9d-5bdd-49d3-af3c-c0b397b96c11/backends/copy/rootfses/deed736e-ab54-4bbf-8e01-ce4f1beb9e56'
 for container 456a2d9d-5bdd-49d3-af3c-c0b397b96c11 using copy backend
I1106 20:21:47.659914 21441 copy.cpp:253] Copying layer path 
'/tmp/DockerVolumeIsolatorTest_ROOT_INTERNET_CURL_CommandTaskRootfsWithAbsolutePathVolume_myahyo/store/docker/layers/70034ea7817b10a77d60705f61423abcc61b8bb299571494ddbfd4f1f83f46ee/rootfs'
 to rootfs 
'/tmp/DockerVolumeIsolatorTest_ROOT_INTERNET_CURL_CommandTaskRootfsWithAbsolutePathVolume_BZzGxZ/provisioner/containers/456a2d9d-5bdd-49d3-af3c-c0b397b96c11/backends/copy/rootfses/deed736e-ab54-4bbf-8e01-ce4f1beb9e56'
I1106 20:21:47.755687 21438 copy.cpp:253] Copying layer path 
'/tmp/DockerVolumeIsolatorTest_ROOT_INTERNET_CURL_CommandTaskRootfsWithAbsolutePathVolume_myahyo/store/docker/layers/f33084b4b6032f9c77744a165174368fa4aed696ac301bfe504fa5533121690f/rootfs'
 to rootfs 
'/tmp/DockerVolumeIsolatorTest_ROOT_INTERNET_CURL_CommandTaskRootfsWithAbsolutePathVolume_BZzGxZ/provisioner/containers/456a2d9d-5bdd-49d3-af3c-c0b397b96c11/backends/copy/rootfses/deed736e-ab54-4bbf-8e01-ce4f1beb9e56'
I1106 20:21:47.858675 21441 provisioner.cpp:689] Destroying container rootfs at 
'/tmp/DockerVolumeIsolatorTest_ROOT_INTERNET_CURL_CommandTaskRootfsWithAbsolutePathVolume_BZzGxZ/provisioner/containers/456a2d9d-5bdd-49d3-af3c-c0b397b96c11/backends/copy/rootfses/deed736e-ab54-4bbf-8e01-ce4f1beb9e56'
 for container 456a2d9d-5bdd-49d3-af3c-c0b397b96c11
../../src/tests/containerizer/docker_volume_isolator_tests.cpp:931: Failure
Actual function call count doesn't match EXPECT_CALL(*mockClient, 
unmount(volumeDriver, _))...
         Expected: to be called once
           Actual: never called - unsatisfied and active
../../src/tests/containerizer/docker_volume_isolator_tests.cpp:925: Failure
Actual function call count doesn't match EXPECT_CALL(*mockClient, 
mount(volumeDriver, _, _))...
         Expected: to be called once
           Actual: never called - unsatisfied and active
I1106 20:21:47.961442 31384 master.cpp:1185] Master terminating
I1106 20:21:47.961674 21444 hierarchical.cpp:612] Removed agent 
0d219c14-565e-46dd-b5c2-56bd9e97e4d1-S0
[  FAILED  ] 
DockerVolumeIsolatorTest.ROOT_INTERNET_CURL_CommandTaskRootfsWithAbsolutePathVolume
 (18596 ms)
{code}

> DockerVolumeIsolatorTest.ROOT_INTERNET_CURL_CommandTaskRootfsWithAbsolutePathVolume
>  is flaky
> --------------------------------------------------------------------------------------------
>
>                 Key: MESOS-9285
>                 URL: https://issues.apache.org/jira/browse/MESOS-9285
>             Project: Mesos
>          Issue Type: Bug
>            Reporter: Benno Evers
>            Priority: Major
>              Labels: flaky-test
>
> Observed in an internal CI run (4432) in a Debian 8 environment:
> {noformat}
> ../../src/tests/containerizer/docker_volume_isolator_tests.cpp:947
> Failed to wait 15secs for statusStarting
> {noformat}
> Sadly, the full log seems to have been lost before it could be investigated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
