[
https://issues.apache.org/jira/browse/MESOS-6336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15555621#comment-15555621
]
Greg Mann commented on MESOS-6336:
----------------------------------
Here's a partial log from a run on the ASF CI as well, from 10 days ago, ending in the same segfault during agent termination:
{code}
I0927 06:49:21.610502 30001 http.cpp:883] Using default 'basic' HTTP
authenticator for realm 'mesos-master-readwrite'
I0927 06:49:21.610563 30003 recover.cpp:568] Updating replica status to VOTING
I0927 06:49:21.610743 30001 http.cpp:883] Using default 'basic' HTTP
authenticator for realm 'mesos-master-scheduler'
I0927 06:49:21.610916 30001 master.cpp:584] Authorization enabled
I0927 06:49:21.611145 30011 hierarchical.cpp:149] Initialized hierarchical
allocator process
I0927 06:49:21.611171 30013 whitelist_watcher.cpp:77] No whitelist given
I0927 06:49:21.611275 30009 leveldb.cpp:304] Persisting metadata (8 bytes) to
leveldb took 414250ns
I0927 06:49:21.611301 30009 replica.cpp:320] Persisted replica status to VOTING
I0927 06:49:21.611450 30008 recover.cpp:582] Successfully joined the Paxos group
I0927 06:49:21.611651 30008 recover.cpp:466] Recover process terminated
I0927 06:49:21.613910 30012 master.cpp:2013] Elected as the leading master!
I0927 06:49:21.613943 30012 master.cpp:1560] Recovering from registrar
I0927 06:49:21.614099 30013 registrar.cpp:329] Recovering registrar
I0927 06:49:21.614842 30012 log.cpp:553] Attempting to start the writer
I0927 06:49:21.616055 30014 replica.cpp:493] Replica received implicit promise
request from __req_res__(6052)@172.17.0.2:49598 with proposal 1
I0927 06:49:21.616436 30014 leveldb.cpp:304] Persisting metadata (8 bytes) to
leveldb took 345420ns
I0927 06:49:21.616459 30014 replica.cpp:342] Persisted promised to 1
I0927 06:49:21.616914 30006 coordinator.cpp:238] Coordinator attempting to fill
missing positions
I0927 06:49:21.618098 30006 replica.cpp:388] Replica received explicit promise
request from __req_res__(6053)@172.17.0.2:49598 for position 0 with proposal 2
I0927 06:49:21.618446 30006 leveldb.cpp:341] Persisting action (8 bytes) to
leveldb took 305036ns
I0927 06:49:21.618474 30006 replica.cpp:708] Persisted action NOP at position 0
I0927 06:49:21.619513 30012 replica.cpp:537] Replica received write request for
position 0 from __req_res__(6054)@172.17.0.2:49598
I0927 06:49:21.619604 30012 leveldb.cpp:436] Reading position from leveldb took
55504ns
I0927 06:49:21.619915 30012 leveldb.cpp:341] Persisting action (14 bytes) to
leveldb took 262919ns
I0927 06:49:21.619941 30012 replica.cpp:708] Persisted action NOP at position 0
I0927 06:49:21.620503 30016 replica.cpp:691] Replica received learned notice
for position 0 from @0.0.0.0:0
I0927 06:49:21.620851 30016 leveldb.cpp:341] Persisting action (16 bytes) to
leveldb took 313765ns
I0927 06:49:21.620878 30016 replica.cpp:708] Persisted action NOP at position 0
I0927 06:49:21.621417 30014 log.cpp:569] Writer started with ending position 0
I0927 06:49:21.622566 30013 leveldb.cpp:436] Reading position from leveldb took
28375ns
I0927 06:49:21.623528 30005 registrar.cpp:362] Successfully fetched the
registry (0B) in 9.373952ms
I0927 06:49:21.623668 30005 registrar.cpp:461] Applied 1 operations in 25023ns;
attempting to update the registry
I0927 06:49:21.624490 30012 log.cpp:577] Attempting to append 168 bytes to the
log
I0927 06:49:21.624620 30004 coordinator.cpp:348] Coordinator attempting to
write APPEND action at position 1
I0927 06:49:21.625282 30007 replica.cpp:537] Replica received write request for
position 1 from __req_res__(6055)@172.17.0.2:49598
I0927 06:49:21.625720 30007 leveldb.cpp:341] Persisting action (187 bytes) to
leveldb took 396032ns
I0927 06:49:21.625746 30007 replica.cpp:708] Persisted action APPEND at
position 1
I0927 06:49:21.626509 30012 replica.cpp:691] Replica received learned notice
for position 1 from @0.0.0.0:0
I0927 06:49:21.626986 30012 leveldb.cpp:341] Persisting action (189 bytes) to
leveldb took 328126ns
I0927 06:49:21.627027 30012 replica.cpp:708] Persisted action APPEND at
position 1
I0927 06:49:21.628249 30014 registrar.cpp:506] Successfully updated the
registry in 4.504832ms
I0927 06:49:21.628463 30016 log.cpp:596] Attempting to truncate the log to 1
I0927 06:49:21.628484 30014 registrar.cpp:392] Successfully recovered registrar
I0927 06:49:21.628619 30005 coordinator.cpp:348] Coordinator attempting to
write TRUNCATE action at position 2
I0927 06:49:21.629341 30010 master.cpp:1676] Recovered 0 agents from the
registry (129B); allowing 10mins for agents to re-register
I0927 06:49:21.629361 30007 hierarchical.cpp:176] Skipping recovery of
hierarchical allocator: nothing to recover
I0927 06:49:21.629873 30004 replica.cpp:537] Replica received write request for
position 2 from __req_res__(6056)@172.17.0.2:49598
I0927 06:49:21.630329 30004 leveldb.cpp:341] Persisting action (16 bytes) to
leveldb took 404029ns
I0927 06:49:21.630362 30004 replica.cpp:708] Persisted action TRUNCATE at
position 2
I0927 06:49:21.631240 30004 replica.cpp:691] Replica received learned notice
for position 2 from @0.0.0.0:0
I0927 06:49:21.631793 30004 leveldb.cpp:341] Persisting action (18 bytes) to
leveldb took 511354ns
I0927 06:49:21.631860 30004 leveldb.cpp:399] Deleting ~1 keys from leveldb took
34696ns
I0927 06:49:21.631886 30004 replica.cpp:708] Persisted action TRUNCATE at
position 2
I0927 06:49:21.638070 30002 slave.cpp:208] Mesos agent started on
172.17.0.2:49598
I0927 06:49:21.638463 29982 scheduler.cpp:176] Version: 1.1.0
I0927 06:49:21.638097 30002 slave.cpp:209] Flags at startup: --acls=""
--appc_simple_discovery_uri_prefix="http://"
--appc_store_dir="/tmp/mesos/store/appc" --authenticate_http_readonly="true"
--authenticate_http_readwrite="true" --authenticatee="crammd5"
--authentication_backoff_factor="1secs" --authorizer="local"
--cgroups_cpu_enable_pids_and_tids_count="false" --cgroups_enable_cfs="false"
--cgroups_hierarchy="/sys/fs/cgroup" --cgroups_limit_swap="false"
--cgroups_root="mesos" --container_disk_watch_interval="15secs"
--containerizers="mesos"
--credential="/tmp/SlaveTest_KillTaskGroupBetweenRunTaskParts_10tM7T/credential"
--default_role="*" --disk_watch_interval="1mins" --docker="docker"
--docker_kill_orphans="true" --docker_registry="https://registry-1.docker.io"
--docker_remove_delay="6hrs" --docker_socket="/var/run/docker.sock"
--docker_stop_timeout="0ns" --docker_store_dir="/tmp/mesos/store/docker"
--docker_volume_checkpoint_dir="/var/run/mesos/isolators/docker/volume"
--enforce_container_disk_quota="false" --executor_registration_timeout="1mins"
--executor_shutdown_grace_period="5secs"
--fetcher_cache_dir="/tmp/SlaveTest_KillTaskGroupBetweenRunTaskParts_10tM7T/fetch"
--fetcher_cache_size="2GB" --frameworks_home="" --gc_delay="1weeks"
--gc_disk_headroom="0.1" --hadoop_home="" --help="false"
--hostname_lookup="true" --http_authenticators="basic"
--http_command_executor="false"
--http_credentials="/tmp/SlaveTest_KillTaskGroupBetweenRunTaskParts_10tM7T/http_credentials"
--image_provisioner_backend="copy" --initialize_driver_logging="true"
--isolation="posix/cpu,posix/mem" --launcher="posix"
--launcher_dir="/mesos/mesos-1.1.0/_build/src" --logbufsecs="0"
--logging_level="INFO" --oversubscribed_resources_interval="15secs"
--perf_duration="10secs" --perf_interval="1mins"
--qos_correction_interval_min="0ns" --quiet="false" --recover="reconnect"
--recovery_timeout="15mins" --registration_backoff_factor="10ms"
--resources="cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]"
--revocable_cpu_low_priority="true"
--runtime_dir="/tmp/SlaveTest_KillTaskGroupBetweenRunTaskParts_10tM7T"
--sandbox_directory="/mnt/mesos/sandbox" --strict="true" --switch_user="true"
--systemd_enable_support="true"
--systemd_runtime_directory="/run/systemd/system" --version="false"
--work_dir="/tmp/SlaveTest_KillTaskGroupBetweenRunTaskParts_3wxEnd"
I0927 06:49:21.638656 30002 credentials.hpp:86] Loading credential for
authentication from
'/tmp/SlaveTest_KillTaskGroupBetweenRunTaskParts_10tM7T/credential'
I0927 06:49:21.638821 30002 slave.cpp:346] Agent using credential for:
test-principal
I0927 06:49:21.638850 30002 credentials.hpp:37] Loading credentials for
authentication from
'/tmp/SlaveTest_KillTaskGroupBetweenRunTaskParts_10tM7T/http_credentials'
I0927 06:49:21.639042 30014 scheduler.cpp:465] New master detected at
[email protected]:49598
I0927 06:49:21.639075 30014 scheduler.cpp:474] Waiting for 0ns before
initiating a re-(connection) attempt with the master
I0927 06:49:21.639155 30002 http.cpp:883] Using default 'basic' HTTP
authenticator for realm 'mesos-agent-readonly'
I0927 06:49:21.639292 30002 http.cpp:883] Using default 'basic' HTTP
authenticator for realm 'mesos-agent-readwrite'
I0927 06:49:21.640199 30002 slave.cpp:533] Agent resources: cpus(*):2;
mem(*):1024; disk(*):1024; ports(*):[31000-32000]
I0927 06:49:21.640302 30002 slave.cpp:541] Agent attributes: [ ]
I0927 06:49:21.640321 30002 slave.cpp:546] Agent hostname: 2ff7532774e7
I0927 06:49:21.642418 30010 state.cpp:57] Recovering state from
'/tmp/SlaveTest_KillTaskGroupBetweenRunTaskParts_3wxEnd/meta'
I0927 06:49:21.642446 30014 scheduler.cpp:353] Connected with the master at
http://172.17.0.2:49598/master/api/v1/scheduler
I0927 06:49:21.643204 30003 status_update_manager.cpp:203] Recovering status
update manager
I0927 06:49:21.643524 30002 slave.cpp:5252] Finished recovery
I0927 06:49:21.643932 30002 slave.cpp:5424] Querying resource estimator for
oversubscribable resources
I0927 06:49:21.644057 30003 scheduler.cpp:235] Sending SUBSCRIBE call to
http://172.17.0.2:49598/master/api/v1/scheduler
I0927 06:49:21.644240 30014 status_update_manager.cpp:177] Pausing sending
status updates
I0927 06:49:21.644265 30001 slave.cpp:915] New master detected at
[email protected]:49598
I0927 06:49:21.644299 30001 slave.cpp:974] Authenticating with master
[email protected]:49598
I0927 06:49:21.644361 30001 slave.cpp:985] Using default CRAM-MD5 authenticatee
I0927 06:49:21.644495 30001 slave.cpp:947] Detecting new master
I0927 06:49:21.644623 30008 authenticatee.cpp:121] Creating new client SASL
connection
I0927 06:49:21.645005 30007 master.cpp:6583] Authenticating
(80)@172.17.0.2:49598
I0927 06:49:21.645143 30012 authenticator.cpp:414] Starting authentication
session for crammd5-authenticatee(952)@172.17.0.2:49598
I0927 06:49:21.645239 30007 process.cpp:3336] Handling HTTP event for process
'master' with path: '/master/api/v1/scheduler'
I0927 06:49:21.645402 30009 authenticator.cpp:98] Creating new server SASL
connection
I0927 06:49:21.645733 30015 authenticatee.cpp:213] Received SASL authentication
mechanisms: CRAM-MD5
I0927 06:49:21.645762 30015 authenticatee.cpp:239] Attempting to authenticate
with mechanism 'CRAM-MD5'
I0927 06:49:21.645889 30006 authenticator.cpp:204] Received SASL authentication
start
I0927 06:49:21.646040 30006 authenticator.cpp:326] Authentication requires more
steps
I0927 06:49:21.646215 30006 authenticatee.cpp:259] Received SASL authentication
step
I0927 06:49:21.646239 30009 http.cpp:382] HTTP POST for
/master/api/v1/scheduler from 172.17.0.2:35887
I0927 06:49:21.646350 30016 authenticator.cpp:232] Received SASL authentication
step
I0927 06:49:21.646409 30016 auxprop.cpp:109] Request to lookup properties for
user: 'test-principal' realm: '2ff7532774e7' server FQDN: '2ff7532774e7'
SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false
SASL_AUXPROP_AUTHZID: false
I0927 06:49:21.646426 30009 master.cpp:2309] Received subscription request for
HTTP framework 'default'
I0927 06:49:21.646436 30016 auxprop.cpp:181] Looking up auxiliary property
'*userPassword'
I0927 06:49:21.646495 30016 auxprop.cpp:181] Looking up auxiliary property
'*cmusaslsecretCRAM-MD5'
I0927 06:49:21.646498 30009 master.cpp:2049] Authorizing framework principal
'test-principal' to receive offers for role '*'
I0927 06:49:21.646531 30016 auxprop.cpp:109] Request to lookup properties for
user: 'test-principal' realm: '2ff7532774e7' server FQDN: '2ff7532774e7'
SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false
SASL_AUXPROP_AUTHZID: true
I0927 06:49:21.646555 30016 auxprop.cpp:131] Skipping auxiliary property
'*userPassword' since SASL_AUXPROP_AUTHZID == true
I0927 06:49:21.646570 30016 auxprop.cpp:131] Skipping auxiliary property
'*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true
I0927 06:49:21.646592 30016 authenticator.cpp:318] Authentication success
I0927 06:49:21.646730 30015 authenticatee.cpp:299] Authentication success
I0927 06:49:21.646796 30009 master.cpp:6613] Successfully authenticated
principal 'test-principal' at (80)@172.17.0.2:49598
I0927 06:49:21.646826 30016 authenticator.cpp:432] Authentication session
cleanup for crammd5-authenticatee(952)@172.17.0.2:49598
I0927 06:49:21.647150 30009 master.cpp:2407] Subscribing framework 'default'
with checkpointing disabled and capabilities [ ]
I0927 06:49:21.647274 30013 slave.cpp:1069] Successfully authenticated with
master [email protected]:49598
I0927 06:49:21.647424 30013 slave.cpp:1475] Will retry registration in
10.476161ms if necessary
I0927 06:49:21.647775 30006 master.hpp:2162] Sending heartbeat to
60695314-29d7-46b2-87fd-00530b392759-0000
I0927 06:49:21.648042 30001 hierarchical.cpp:272] Added framework
60695314-29d7-46b2-87fd-00530b392759-0000
I0927 06:49:21.648084 30009 master.cpp:5040] Registering agent at
(80)@172.17.0.2:49598 (2ff7532774e7) with id
60695314-29d7-46b2-87fd-00530b392759-S0
I0927 06:49:21.648107 30001 hierarchical.cpp:1691] No allocations performed
I0927 06:49:21.648160 30001 hierarchical.cpp:1786] No inverse offers to send
out!
I0927 06:49:21.648216 30001 hierarchical.cpp:1283] Performed allocation for 0
agents in 149773ns
I0927 06:49:21.648479 30011 scheduler.cpp:671] Enqueuing event SUBSCRIBED
received from http://172.17.0.2:49598/master/api/v1/scheduler
I0927 06:49:21.648629 30015 registrar.cpp:461] Applied 1 operations in 58240ns;
attempting to update the registry
I0927 06:49:21.648905 30011 scheduler.cpp:671] Enqueuing event HEARTBEAT
received from http://172.17.0.2:49598/master/api/v1/scheduler
I0927 06:49:21.649503 30015 log.cpp:577] Attempting to append 337 bytes to the
log
I0927 06:49:21.649648 30016 coordinator.cpp:348] Coordinator attempting to
write APPEND action at position 3
I0927 06:49:21.650629 30006 replica.cpp:537] Replica received write request for
position 3 from __req_res__(6057)@172.17.0.2:49598
I0927 06:49:21.650826 30006 leveldb.cpp:341] Persisting action (356 bytes) to
leveldb took 161658ns
I0927 06:49:21.650853 30006 replica.cpp:708] Persisted action APPEND at
position 3
I0927 06:49:21.651432 30009 replica.cpp:691] Replica received learned notice
for position 3 from @0.0.0.0:0
I0927 06:49:21.651834 30009 leveldb.cpp:341] Persisting action (358 bytes) to
leveldb took 369012ns
I0927 06:49:21.651857 30009 replica.cpp:708] Persisted action APPEND at
position 3
I0927 06:49:21.652999 30012 registrar.cpp:506] Successfully updated the
registry in 4.314112ms
I0927 06:49:21.653235 30011 log.cpp:596] Attempting to truncate the log to 3
I0927 06:49:21.653347 30006 coordinator.cpp:348] Coordinator attempting to
write TRUNCATE action at position 4
I0927 06:49:21.653635 30002 slave.cpp:4108] Received ping from
slave-observer(440)@172.17.0.2:49598
I0927 06:49:21.653779 30009 master.cpp:5111] Registered agent
60695314-29d7-46b2-87fd-00530b392759-S0 at (80)@172.17.0.2:49598 (2ff7532774e7)
with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]
I0927 06:49:21.653841 30001 slave.cpp:1115] Registered with master
[email protected]:49598; given agent ID 60695314-29d7-46b2-87fd-00530b392759-S0
I0927 06:49:21.653923 30001 fetcher.cpp:86] Clearing fetcher cache
I0927 06:49:21.654047 30004 replica.cpp:537] Replica received write request for
position 4 from __req_res__(6058)@172.17.0.2:49598
I0927 06:49:21.654327 30016 status_update_manager.cpp:184] Resuming sending
status updates
I0927 06:49:21.654356 30003 hierarchical.cpp:482] Added agent
60695314-29d7-46b2-87fd-00530b392759-S0 (2ff7532774e7) with cpus(*):2;
mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: {})
I0927 06:49:21.654451 30004 leveldb.cpp:341] Persisting action (16 bytes) to
leveldb took 359571ns
I0927 06:49:21.654477 30004 replica.cpp:708] Persisted action TRUNCATE at
position 4
I0927 06:49:21.654500 30001 slave.cpp:1138] Checkpointing SlaveInfo to
'/tmp/SlaveTest_KillTaskGroupBetweenRunTaskParts_3wxEnd/meta/slaves/60695314-29d7-46b2-87fd-00530b392759-S0/slave.info'
I0927 06:49:21.655000 30013 replica.cpp:691] Replica received learned notice
for position 4 from @0.0.0.0:0
I0927 06:49:21.655390 30013 leveldb.cpp:341] Persisting action (18 bytes) to
leveldb took 358411ns
I0927 06:49:21.655422 30003 hierarchical.cpp:1786] No inverse offers to send
out!
I0927 06:49:21.655467 30013 leveldb.cpp:399] Deleting ~2 keys from leveldb took
45333ns
I0927 06:49:21.655485 30003 hierarchical.cpp:1306] Performed allocation for
agent 60695314-29d7-46b2-87fd-00530b392759-S0 in 1.091903ms
I0927 06:49:21.655491 30013 replica.cpp:708] Persisted action TRUNCATE at
position 4
I0927 06:49:21.655802 30006 master.cpp:6412] Sending 1 offers to framework
60695314-29d7-46b2-87fd-00530b392759-0000 (default)
I0927 06:49:21.657603 30002 scheduler.cpp:671] Enqueuing event OFFERS received
from http://172.17.0.2:49598/master/api/v1/scheduler
I0927 06:49:21.660387 30003 scheduler.cpp:235] Sending ACCEPT call to
http://172.17.0.2:49598/master/api/v1/scheduler
I0927 06:49:21.661532 30010 process.cpp:3336] Handling HTTP event for process
'master' with path: '/master/api/v1/scheduler'
I0927 06:49:21.662504 30008 http.cpp:382] HTTP POST for
/master/api/v1/scheduler from 172.17.0.2:35886
I0927 06:49:21.663599 30008 master.cpp:3521] Processing ACCEPT call for offers:
[ 60695314-29d7-46b2-87fd-00530b392759-O0 ] on agent
60695314-29d7-46b2-87fd-00530b392759-S0 at (80)@172.17.0.2:49598 (2ff7532774e7)
for framework 60695314-29d7-46b2-87fd-00530b392759-0000 (default)
I0927 06:49:21.663710 30008 master.cpp:3143] Authorizing framework principal
'test-principal' to launch task f3076a19-39a8-4a97-9b9c-fe7ccc0cdb4d
I0927 06:49:21.664000 30008 master.cpp:3143] Authorizing framework principal
'test-principal' to launch task dd887bd4-4577-40f5-a5d0-c1dcf8f5ddbb
I0927 06:49:21.667019 30008 master.cpp:8159] Adding task
f3076a19-39a8-4a97-9b9c-fe7ccc0cdb4d with resources cpus(*):0.1; mem(*):32;
disk(*):32 on agent 60695314-29d7-46b2-87fd-00530b392759-S0 (2ff7532774e7)
I0927 06:49:21.667392 30008 master.cpp:8159] Adding task
dd887bd4-4577-40f5-a5d0-c1dcf8f5ddbb with resources cpus(*):0.1; mem(*):32;
disk(*):32 on agent 60695314-29d7-46b2-87fd-00530b392759-S0 (2ff7532774e7)
I0927 06:49:21.667594 30008 master.cpp:4324] Launching task group {
dd887bd4-4577-40f5-a5d0-c1dcf8f5ddbb, f3076a19-39a8-4a97-9b9c-fe7ccc0cdb4d } of
framework 60695314-29d7-46b2-87fd-00530b392759-0000 (default) with resources
cpus(*):0.2; mem(*):64; disk(*):64 on agent
60695314-29d7-46b2-87fd-00530b392759-S0 at (80)@172.17.0.2:49598 (2ff7532774e7)
I0927 06:49:21.668198 30010 slave.cpp:1539] Got assigned task group containing
tasks [ f3076a19-39a8-4a97-9b9c-fe7ccc0cdb4d,
dd887bd4-4577-40f5-a5d0-c1dcf8f5ddbb ] for framework
60695314-29d7-46b2-87fd-00530b392759-0000
I0927 06:49:21.668897 30015 hierarchical.cpp:1015] Recovered cpus(*):1.8;
mem(*):960; disk(*):960; ports(*):[31000-32000] (total: cpus(*):2; mem(*):1024;
disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):0.2; mem(*):64;
disk(*):64) on agent 60695314-29d7-46b2-87fd-00530b392759-S0 from framework
60695314-29d7-46b2-87fd-00530b392759-0000
I0927 06:49:21.669008 30015 hierarchical.cpp:1052] Framework
60695314-29d7-46b2-87fd-00530b392759-0000 filtered agent
60695314-29d7-46b2-87fd-00530b392759-S0 for 5secs
I0927 06:49:21.669728 30004 scheduler.cpp:235] Sending KILL call to
http://172.17.0.2:49598/master/api/v1/scheduler
I0927 06:49:21.670667 30012 process.cpp:3336] Handling HTTP event for process
'master' with path: '/master/api/v1/scheduler'
I0927 06:49:21.671666 30013 http.cpp:382] HTTP POST for
/master/api/v1/scheduler from 172.17.0.2:35886
I0927 06:49:21.671850 30013 master.cpp:4648] Telling agent
60695314-29d7-46b2-87fd-00530b392759-S0 at (80)@172.17.0.2:49598 (2ff7532774e7)
to kill task f3076a19-39a8-4a97-9b9c-fe7ccc0cdb4d of framework
60695314-29d7-46b2-87fd-00530b392759-0000 (default)
I0927 06:49:21.672125 30012 slave.cpp:2283] Asked to kill task
f3076a19-39a8-4a97-9b9c-fe7ccc0cdb4d of framework
60695314-29d7-46b2-87fd-00530b392759-0000
W0927 06:49:21.672179 30012 slave.cpp:2324] Killing task
f3076a19-39a8-4a97-9b9c-fe7ccc0cdb4d of framework
60695314-29d7-46b2-87fd-00530b392759-0000 before it was launched
I0927 06:49:21.673012 29982 slave.cpp:1696] Launching task group containing
tasks [ f3076a19-39a8-4a97-9b9c-fe7ccc0cdb4d,
dd887bd4-4577-40f5-a5d0-c1dcf8f5ddbb ] for framework
60695314-29d7-46b2-87fd-00530b392759-0000
W0927 06:49:21.673107 29982 slave.cpp:1736] Ignoring running task group
containing tasks [ f3076a19-39a8-4a97-9b9c-fe7ccc0cdb4d,
dd887bd4-4577-40f5-a5d0-c1dcf8f5ddbb ] of framework
60695314-29d7-46b2-87fd-00530b392759-0000 because it has been killed in the
meantime
I0927 06:49:21.673208 29982 slave.cpp:3609] Handling status update TASK_KILLED
(UUID: f4881afd-35e5-4f33-8062-c87d97c5e834) for task
f3076a19-39a8-4a97-9b9c-fe7ccc0cdb4d of framework
60695314-29d7-46b2-87fd-00530b392759-0000 from @0.0.0.0:0
W0927 06:49:21.673300 29982 slave.cpp:3705] Could not find the executor for
status update TASK_KILLED (UUID: f4881afd-35e5-4f33-8062-c87d97c5e834) for task
f3076a19-39a8-4a97-9b9c-fe7ccc0cdb4d of framework
60695314-29d7-46b2-87fd-00530b392759-0000
I0927 06:49:21.673576 30014 status_update_manager.cpp:323] Received status
update TASK_KILLED (UUID: f4881afd-35e5-4f33-8062-c87d97c5e834) for task
f3076a19-39a8-4a97-9b9c-fe7ccc0cdb4d of framework
60695314-29d7-46b2-87fd-00530b392759-0000
I0927 06:49:21.673630 30014 status_update_manager.cpp:500] Creating
StatusUpdate stream for task f3076a19-39a8-4a97-9b9c-fe7ccc0cdb4d of framework
60695314-29d7-46b2-87fd-00530b392759-0000
I0927 06:49:21.673640 29982 slave.cpp:3609] Handling status update TASK_KILLED
(UUID: 9af5bed5-cb33-4e1f-925d-6604ea952c8e) for task
dd887bd4-4577-40f5-a5d0-c1dcf8f5ddbb of framework
60695314-29d7-46b2-87fd-00530b392759-0000 from @0.0.0.0:0
W0927 06:49:21.673707 29982 slave.cpp:3705] Could not find the executor for
status update TASK_KILLED (UUID: 9af5bed5-cb33-4e1f-925d-6604ea952c8e) for task
dd887bd4-4577-40f5-a5d0-c1dcf8f5ddbb of framework
60695314-29d7-46b2-87fd-00530b392759-0000
I0927 06:49:21.674010 29982 slave.cpp:4709] Cleaning up framework
60695314-29d7-46b2-87fd-00530b392759-0000
I0927 06:49:21.674015 30014 status_update_manager.cpp:377] Forwarding update
TASK_KILLED (UUID: f4881afd-35e5-4f33-8062-c87d97c5e834) for task
f3076a19-39a8-4a97-9b9c-fe7ccc0cdb4d of framework
60695314-29d7-46b2-87fd-00530b392759-0000 to the agent
E0927 06:49:21.674130 29982 slave.cpp:5404] Failed to find the mtime of
'/tmp/SlaveTest_KillTaskGroupBetweenRunTaskParts_3wxEnd/slaves/60695314-29d7-46b2-87fd-00530b392759-S0/frameworks/60695314-29d7-46b2-87fd-00530b392759-0000':
Error invoking stat for
'/tmp/SlaveTest_KillTaskGroupBetweenRunTaskParts_3wxEnd/slaves/60695314-29d7-46b2-87fd-00530b392759-S0/frameworks/60695314-29d7-46b2-87fd-00530b392759-0000':
No such file or directory
I0927 06:49:21.674252 30003 slave.cpp:4026] Forwarding the update TASK_KILLED
(UUID: f4881afd-35e5-4f33-8062-c87d97c5e834) for task
f3076a19-39a8-4a97-9b9c-fe7ccc0cdb4d of framework
60695314-29d7-46b2-87fd-00530b392759-0000 to [email protected]:49598
I0927 06:49:21.674278 30014 status_update_manager.cpp:323] Received status
update TASK_KILLED (UUID: 9af5bed5-cb33-4e1f-925d-6604ea952c8e) for task
dd887bd4-4577-40f5-a5d0-c1dcf8f5ddbb of framework
60695314-29d7-46b2-87fd-00530b392759-0000
I0927 06:49:21.674335 30014 status_update_manager.cpp:500] Creating
StatusUpdate stream for task dd887bd4-4577-40f5-a5d0-c1dcf8f5ddbb of framework
60695314-29d7-46b2-87fd-00530b392759-0000
I0927 06:49:21.674660 30014 status_update_manager.cpp:377] Forwarding update
TASK_KILLED (UUID: 9af5bed5-cb33-4e1f-925d-6604ea952c8e) for task
dd887bd4-4577-40f5-a5d0-c1dcf8f5ddbb of framework
60695314-29d7-46b2-87fd-00530b392759-0000 to the agent
I0927 06:49:21.674759 30003 slave.cpp:3920] Status update manager successfully
handled status update TASK_KILLED (UUID: f4881afd-35e5-4f33-8062-c87d97c5e834)
for task f3076a19-39a8-4a97-9b9c-fe7ccc0cdb4d of framework
60695314-29d7-46b2-87fd-00530b392759-0000
I0927 06:49:21.674859 30001 master.cpp:5638] Status update TASK_KILLED (UUID:
f4881afd-35e5-4f33-8062-c87d97c5e834) for task
f3076a19-39a8-4a97-9b9c-fe7ccc0cdb4d of framework
60695314-29d7-46b2-87fd-00530b392759-0000 from agent
60695314-29d7-46b2-87fd-00530b392759-S0 at (80)@172.17.0.2:49598 (2ff7532774e7)
I0927 06:49:21.674917 30001 master.cpp:5700] Forwarding status update
TASK_KILLED (UUID: f4881afd-35e5-4f33-8062-c87d97c5e834) for task
f3076a19-39a8-4a97-9b9c-fe7ccc0cdb4d of framework
60695314-29d7-46b2-87fd-00530b392759-0000
I0927 06:49:21.674911 30004 slave.cpp:4026] Forwarding the update TASK_KILLED
(UUID: 9af5bed5-cb33-4e1f-925d-6604ea952c8e) for task
dd887bd4-4577-40f5-a5d0-c1dcf8f5ddbb of framework
60695314-29d7-46b2-87fd-00530b392759-0000 to [email protected]:49598
I0927 06:49:21.675050 30014 status_update_manager.cpp:285] Closing status
update streams for framework 60695314-29d7-46b2-87fd-00530b392759-0000
I0927 06:49:21.675107 30014 status_update_manager.cpp:531] Cleaning up status
update stream for task dd887bd4-4577-40f5-a5d0-c1dcf8f5ddbb of framework
60695314-29d7-46b2-87fd-00530b392759-0000
I0927 06:49:21.675217 30001 master.cpp:7537] Updating the state of task
f3076a19-39a8-4a97-9b9c-fe7ccc0cdb4d of framework
60695314-29d7-46b2-87fd-00530b392759-0000 (latest state: TASK_KILLED, status
update state: TASK_KILLED)
I0927 06:49:21.675226 30004 slave.cpp:3920] Status update manager successfully
handled status update TASK_KILLED (UUID: 9af5bed5-cb33-4e1f-925d-6604ea952c8e)
for task dd887bd4-4577-40f5-a5d0-c1dcf8f5ddbb of framework
60695314-29d7-46b2-87fd-00530b392759-0000
I0927 06:49:21.675247 30014 status_update_manager.cpp:531] Cleaning up status
update stream for task f3076a19-39a8-4a97-9b9c-fe7ccc0cdb4d of framework
60695314-29d7-46b2-87fd-00530b392759-0000
I0927 06:49:21.675601 30001 master.cpp:5638] Status update TASK_KILLED (UUID:
9af5bed5-cb33-4e1f-925d-6604ea952c8e) for task
dd887bd4-4577-40f5-a5d0-c1dcf8f5ddbb of framework
60695314-29d7-46b2-87fd-00530b392759-0000 from agent
60695314-29d7-46b2-87fd-00530b392759-S0 at (80)@172.17.0.2:49598 (2ff7532774e7)
I0927 06:49:21.675647 30001 master.cpp:5700] Forwarding status update
TASK_KILLED (UUID: 9af5bed5-cb33-4e1f-925d-6604ea952c8e) for task
dd887bd4-4577-40f5-a5d0-c1dcf8f5ddbb of framework
60695314-29d7-46b2-87fd-00530b392759-0000
I0927 06:49:21.675884 30001 master.cpp:7537] Updating the state of task
dd887bd4-4577-40f5-a5d0-c1dcf8f5ddbb of framework
60695314-29d7-46b2-87fd-00530b392759-0000 (latest state: TASK_KILLED, status
update state: TASK_KILLED)
I0927 06:49:21.676050 30006 hierarchical.cpp:1015] Recovered cpus(*):0.1;
mem(*):32; disk(*):32 (total: cpus(*):2; mem(*):1024; disk(*):1024;
ports(*):[31000-32000], allocated: cpus(*):0.1; mem(*):32; disk(*):32) on agent
60695314-29d7-46b2-87fd-00530b392759-S0 from framework
60695314-29d7-46b2-87fd-00530b392759-0000
I0927 06:49:21.676095 30005 scheduler.cpp:671] Enqueuing event UPDATE received
from http://172.17.0.2:49598/master/api/v1/scheduler
I0927 06:49:21.676484 30006 hierarchical.cpp:1015] Recovered cpus(*):0.1;
mem(*):32; disk(*):32 (total: cpus(*):2; mem(*):1024; disk(*):1024;
ports(*):[31000-32000], allocated: {}) on agent
60695314-29d7-46b2-87fd-00530b392759-S0 from framework
60695314-29d7-46b2-87fd-00530b392759-0000
I0927 06:49:21.676689 30014 scheduler.cpp:671] Enqueuing event UPDATE received
from http://172.17.0.2:49598/master/api/v1/scheduler
I0927 06:49:21.677886 30009 slave.cpp:787] Agent terminating
*** Aborted at 1474958961 (unix time) try "date -d @1474958961" if you are
using GNU date ***
PC: @ 0x7f48872722e0 mesos::FrameworkInfo::checkpoint()
*** SIGSEGV (@0x88) received by PID 29982 (TID 0x7f4876d4b700) from PID 136;
stack trace: ***
@ 0x7f4870c7c2f5 (unknown)
@ 0x7f4870c80ec1 (unknown)
@ 0x7f4870c751b8 (unknown)
@ 0x7f4882f24100 (unknown)
@ 0x7f48872722e0 mesos::FrameworkInfo::checkpoint()
@ 0x7f48876ef23e mesos::internal::slave::Slave::finalize()
@ 0x7f48883092eb process::ProcessBase::visit()
@ 0x7f488830ef18 process::TerminateEvent::visit()
@ 0xa2dc06 process::ProcessBase::serve()
@ 0x7f488830302b process::ProcessManager::resume()
@ 0x7f48882ffd19 _ZZN7process14ProcessManager12init_threadsEvENKUt_clEv
@ 0x7f488830e5bc
_ZNSt12_Bind_simpleIFZN7process14ProcessManager12init_threadsEvEUt_vEE9_M_invokeIIEEEvSt12_Index_tupleIIXspT_EEE
@ 0x7f488830e513
_ZNSt12_Bind_simpleIFZN7process14ProcessManager12init_threadsEvEUt_vEEclEv
@ 0x7f488830e4ac
_ZNSt6thread5_ImplISt12_Bind_simpleIFZN7process14ProcessManager12init_threadsEvEUt_vEEE6_M_runEv
@ 0x7f48828b6220 (unknown)
@ 0x7f4882f1cdc5 start_thread
@ 0x7f488201dced __clone
make[4]: *** [check-local] Segmentation fault
make[4]: Leaving directory `/mesos/mesos-1.1.0/_build/src'
make[3]: *** [check-am] Error 2
make[3]: Leaving directory `/mesos/mesos-1.1.0/_build/src'
make[2]: *** [check] Error 2
make[2]: Leaving directory `/mesos/mesos-1.1.0/_build/src'
make[1]: *** [check-recursive] Error 1
make[1]: Leaving directory `/mesos/mesos-1.1.0/_build'
make: *** [distcheck] Error 1
{code}
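The stack trace shows the crash in {{mesos::FrameworkInfo::checkpoint()}} called from {{Slave::finalize()}}, right after the log reports "Cleaning up framework" for the same framework. The faulting address (0x88) looks like a field offset off a dangling or null pointer, which would be consistent with finalization touching a framework object that was already torn down. As a minimal, purely illustrative C++ sketch of that suspected shape (the {{Slave}}, {{Framework}}, {{cleanupFramework}} names here are hypothetical stand-ins, not the actual Mesos types): finalization must only walk the live framework map, never a pointer cached before cleanup ran.

{code}
#include <cassert>
#include <map>
#include <memory>
#include <string>

// Hypothetical stand-in for the checkpointed framework state.
struct Framework {
  std::string id;
  bool checkpointed = false;
  void checkpoint() { checkpointed = true; }  // models FrameworkInfo::checkpoint()
};

struct Slave {
  std::map<std::string, std::unique_ptr<Framework>> frameworks;

  // Models the "Cleaning up framework ..." path: the object is destroyed
  // and its map entry removed in one step, so no stale entry survives.
  void cleanupFramework(const std::string& id) {
    frameworks.erase(id);
  }

  // A finalize() that dereferenced a Framework* captured before cleanup
  // would touch freed memory (the suspected crash); iterating the live
  // map instead only checkpoints frameworks that still exist.
  void finalize() {
    for (auto& entry : frameworks) {
      entry.second->checkpoint();
    }
  }
};

int main() {
  Slave slave;
  slave.frameworks["fw-0000"] = std::make_unique<Framework>(Framework{"fw-0000"});
  slave.cleanupFramework("fw-0000");  // framework torn down first...
  slave.finalize();                   // ...so finalize finds nothing to checkpoint
  assert(slave.frameworks.empty());
  return 0;
}
{code}

Again, this is only a model of the race the log suggests (kill-before-launch triggering framework cleanup, then agent termination), not the actual fix.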
> SlaveTest.KillTaskGroupBetweenRunTaskParts is flaky
> ---------------------------------------------------
>
> Key: MESOS-6336
> URL: https://issues.apache.org/jira/browse/MESOS-6336
> Project: Mesos
> Issue Type: Bug
> Components: slave
> Reporter: Greg Mann
> Labels: mesosphere
>
> The test {{SlaveTest.KillTaskGroupBetweenRunTaskParts}} sometimes segfaults
> during the agent's {{finalize()}} method. This was observed on our internal
> CI, on Fedora with libev, without SSL:
> {code}
> [ RUN ] SlaveTest.KillTaskGroupBetweenRunTaskParts
> I1007 14:12:57.973811 28630 cluster.cpp:158] Creating default 'local'
> authorizer
> I1007 14:12:57.982128 28630 leveldb.cpp:174] Opened db in 8.195028ms
> I1007 14:12:57.982599 28630 leveldb.cpp:181] Compacted db in 446238ns
> I1007 14:12:57.982616 28630 leveldb.cpp:196] Created db iterator in 3650ns
> I1007 14:12:57.982622 28630 leveldb.cpp:202] Seeked to beginning of db in
> 451ns
> I1007 14:12:57.982627 28630 leveldb.cpp:271] Iterated through 0 keys in the
> db in 352ns
> I1007 14:12:57.982638 28630 replica.cpp:776] Replica recovered with log
> positions 0 -> 0 with 1 holes and 0 unlearned
> I1007 14:12:57.983024 28645 recover.cpp:451] Starting replica recovery
> I1007 14:12:57.983127 28651 recover.cpp:477] Replica is in EMPTY status
> I1007 14:12:57.983459 28644 replica.cpp:673] Replica in EMPTY status received
> a broadcasted recover request from __req_res__(6234)@172.30.2.161:38776
> I1007 14:12:57.983543 28651 recover.cpp:197] Received a recover response from
> a replica in EMPTY status
> I1007 14:12:57.983680 28650 recover.cpp:568] Updating replica status to
> STARTING
> I1007 14:12:57.983990 28648 master.cpp:380] Master
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb
> (ip-172-30-2-161.ec2.internal.mesosphere.io) started on 172.30.2.161:38776
> I1007 14:12:57.984007 28648 master.cpp:382] Flags at startup: --acls=""
> --agent_ping_timeout="15secs" --agent_reregister_timeout="10mins"
> --allocation_interval="1secs" --allocator="HierarchicalDRF"
> --authenticate_agents="true" --authenticate_frameworks="true"
> --authenticate_http_frameworks="true" --authenticate_http_readonly="true"
> --authenticate_http_readwrite="true" --authenticators="crammd5"
> --authorizers="local" --credentials="/tmp/rVbcaO/credentials"
> --framework_sorter="drf" --help="false" --hostname_lookup="true"
> --http_authenticators="basic" --http_framework_authenticators="basic"
> --initialize_driver_logging="true" --log_auto_initialize="true"
> --logbufsecs="0" --logging_level="INFO" --max_agent_ping_timeouts="5"
> --max_completed_frameworks="50" --max_completed_tasks_per_framework="1000"
> --quiet="false" --recovery_agent_removal_limit="100%"
> --registry="replicated_log" --registry_fetch_timeout="1mins"
> --registry_gc_interval="15mins" --registry_max_agent_age="2weeks"
> --registry_max_agent_count="102400" --registry_store_timeout="100secs"
> --registry_strict="false" --root_submissions="true" --user_sorter="drf"
> --version="false" --webui_dir="/usr/local/share/mesos/webui"
> --work_dir="/tmp/rVbcaO/master" --zk_session_timeout="10secs"
> I1007 14:12:57.984127 28648 master.cpp:432] Master only allowing
> authenticated frameworks to register
> I1007 14:12:57.984134 28648 master.cpp:446] Master only allowing
> authenticated agents to register
> I1007 14:12:57.984139 28648 master.cpp:459] Master only allowing
> authenticated HTTP frameworks to register
> I1007 14:12:57.984143 28648 credentials.hpp:37] Loading credentials for
> authentication from '/tmp/rVbcaO/credentials'
> I1007 14:12:57.988487 28648 master.cpp:504] Using default 'crammd5'
> authenticator
> I1007 14:12:57.988530 28648 http.cpp:883] Using default 'basic' HTTP
> authenticator for realm 'mesos-master-readonly'
> I1007 14:12:57.988585 28648 http.cpp:883] Using default 'basic' HTTP
> authenticator for realm 'mesos-master-readwrite'
> I1007 14:12:57.988648 28648 http.cpp:883] Using default 'basic' HTTP
> authenticator for realm 'mesos-master-scheduler'
> I1007 14:12:57.988680 28648 master.cpp:584] Authorization enabled
> I1007 14:12:57.988757 28650 whitelist_watcher.cpp:77] No whitelist given
> I1007 14:12:57.988772 28646 hierarchical.cpp:149] Initialized hierarchical
> allocator process
> I1007 14:12:57.988917 28651 leveldb.cpp:304] Persisting metadata (8 bytes) to
> leveldb took 5.186917ms
> I1007 14:12:57.988934 28651 replica.cpp:320] Persisted replica status to
> STARTING
> I1007 14:12:57.989045 28651 recover.cpp:477] Replica is in STARTING status
> I1007 14:12:57.989316 28648 master.cpp:2013] Elected as the leading master!
> I1007 14:12:57.989331 28648 master.cpp:1560] Recovering from registrar
> I1007 14:12:57.989408 28649 replica.cpp:673] Replica in STARTING status
> received a broadcasted recover request from
> __req_res__(6235)@172.30.2.161:38776
> I1007 14:12:57.989423 28648 registrar.cpp:329] Recovering registrar
> I1007 14:12:57.989792 28647 recover.cpp:197] Received a recover response from
> a replica in STARTING status
> I1007 14:12:57.989956 28650 recover.cpp:568] Updating replica status to VOTING
> I1007 14:12:57.990365 28651 leveldb.cpp:304] Persisting metadata (8 bytes) to
> leveldb took 326153ns
> I1007 14:12:57.990378 28651 replica.cpp:320] Persisted replica status to
> VOTING
> I1007 14:12:57.990412 28651 recover.cpp:582] Successfully joined the Paxos
> group
> I1007 14:12:57.990470 28651 recover.cpp:466] Recover process terminated
> I1007 14:12:57.990602 28647 log.cpp:553] Attempting to start the writer
> I1007 14:12:57.990994 28647 replica.cpp:493] Replica received implicit
> promise request from __req_res__(6236)@172.30.2.161:38776 with proposal 1
> I1007 14:12:57.992638 28647 leveldb.cpp:304] Persisting metadata (8 bytes) to
> leveldb took 1.62248ms
> I1007 14:12:57.992655 28647 replica.cpp:342] Persisted promised to 1
> I1007 14:12:57.992843 28646 coordinator.cpp:238] Coordinator attempting to
> fill missing positions
> I1007 14:12:57.993249 28650 replica.cpp:388] Replica received explicit
> promise request from __req_res__(6237)@172.30.2.161:38776 for position 0 with
> proposal 2
> I1007 14:12:57.995498 28650 leveldb.cpp:341] Persisting action (8 bytes) to
> leveldb took 2.225608ms
> I1007 14:12:57.995515 28650 replica.cpp:708] Persisted action NOP at position 0
> I1007 14:12:57.996105 28649 replica.cpp:537] Replica received write request
> for position 0 from __req_res__(6238)@172.30.2.161:38776
> I1007 14:12:57.996130 28649 leveldb.cpp:436] Reading position from leveldb
> took 8903ns
> I1007 14:12:57.997609 28649 leveldb.cpp:341] Persisting action (14 bytes) to
> leveldb took 1.465755ms
> I1007 14:12:57.997627 28649 replica.cpp:708] Persisted action NOP at position 0
> I1007 14:12:57.997859 28646 replica.cpp:691] Replica received learned notice
> for position 0 from @0.0.0.0:0
> I1007 14:12:57.999869 28646 leveldb.cpp:341] Persisting action (16 bytes) to
> leveldb took 1.985345ms
> I1007 14:12:57.999886 28646 replica.cpp:708] Persisted action NOP at position 0
> I1007 14:12:58.000043 28644 log.cpp:569] Writer started with ending position 0
> I1007 14:12:58.000308 28650 leveldb.cpp:436] Reading position from leveldb
> took 8280ns
> I1007 14:12:58.000512 28650 registrar.cpp:362] Successfully fetched the
> registry (0B) in 11.06816ms
> I1007 14:12:58.000562 28650 registrar.cpp:461] Applied 1 operations in
> 3050ns; attempting to update the registry
> I1007 14:12:58.000722 28646 log.cpp:577] Attempting to append 235 bytes to
> the log
> I1007 14:12:58.000799 28651 coordinator.cpp:348] Coordinator attempting to
> write APPEND action at position 1
> I1007 14:12:58.001106 28648 replica.cpp:537] Replica received write request
> for position 1 from __req_res__(6239)@172.30.2.161:38776
> I1007 14:12:58.002013 28648 leveldb.cpp:341] Persisting action (254 bytes) to
> leveldb took 884026ns
> I1007 14:12:58.002027 28648 replica.cpp:708] Persisted action APPEND at
> position 1
> I1007 14:12:58.002336 28650 replica.cpp:691] Replica received learned notice
> for position 1 from @0.0.0.0:0
> I1007 14:12:58.004251 28650 leveldb.cpp:341] Persisting action (256 bytes) to
> leveldb took 1.897221ms
> I1007 14:12:58.004268 28650 replica.cpp:708] Persisted action APPEND at
> position 1
> I1007 14:12:58.004483 28645 registrar.cpp:506] Successfully updated the
> registry in 3.902208ms
> I1007 14:12:58.004537 28646 log.cpp:596] Attempting to truncate the log to 1
> I1007 14:12:58.004566 28644 coordinator.cpp:348] Coordinator attempting to
> write TRUNCATE action at position 2
> I1007 14:12:58.004580 28645 registrar.cpp:392] Successfully recovered
> registrar
> I1007 14:12:58.004871 28650 master.cpp:1676] Recovered 0 agents from the
> registry (196B); allowing 10mins for agents to re-register
> I1007 14:12:58.004896 28648 replica.cpp:537] Replica received write request
> for position 2 from __req_res__(6240)@172.30.2.161:38776
> I1007 14:12:58.004914 28649 hierarchical.cpp:176] Skipping recovery of
> hierarchical allocator: nothing to recover
> I1007 14:12:58.006487 28648 leveldb.cpp:341] Persisting action (16 bytes) to
> leveldb took 1.566923ms
> I1007 14:12:58.006503 28648 replica.cpp:708] Persisted action TRUNCATE at
> position 2
> I1007 14:12:58.006700 28651 replica.cpp:691] Replica received learned notice
> for position 2 from @0.0.0.0:0
> I1007 14:12:58.008687 28651 leveldb.cpp:341] Persisting action (18 bytes) to
> leveldb took 1.966404ms
> I1007 14:12:58.008713 28651 leveldb.cpp:399] Deleting ~1 keys from leveldb
> took 10225ns
> I1007 14:12:58.008721 28651 replica.cpp:708] Persisted action TRUNCATE at
> position 2
> I1007 14:12:58.010568 28645 slave.cpp:208] Mesos agent started on
> @172.30.2.161:38776
> I1007 14:12:58.010679 28630 scheduler.cpp:176] Version: 1.1.0
> I1007 14:12:58.010578 28645 slave.cpp:209] Flags at startup: --acls=""
> --appc_simple_discovery_uri_prefix="http://"
> --appc_store_dir="/tmp/mesos/store/appc" --authenticate_http_readonly="true"
> --authenticate_http_readwrite="true" --authenticatee="crammd5"
> --authentication_backoff_factor="1secs" --authorizer="local"
> --cgroups_cpu_enable_pids_and_tids_count="false" --cgroups_enable_cfs="false"
> --cgroups_hierarchy="/sys/fs/cgroup" --cgroups_limit_swap="false"
> --cgroups_root="mesos" --container_disk_watch_interval="15secs"
> --containerizers="mesos"
> --credential="/mnt/teamcity/temp/buildTmp/SlaveTest_KillTaskGroupBetweenRunTaskParts_dv7FC5/credential"
> --default_role="*" --disk_watch_interval="1mins" --docker="docker"
> --docker_kill_orphans="true" --docker_registry="https://registry-1.docker.io"
> --docker_remove_delay="6hrs" --docker_socket="/var/run/docker.sock"
> --docker_stop_timeout="0ns" --docker_store_dir="/tmp/mesos/store/docker"
> --docker_volume_checkpoint_dir="/var/run/mesos/isolators/docker/volume"
> --enforce_container_disk_quota="false"
> --executor_registration_timeout="1mins"
> --executor_shutdown_grace_period="5secs"
> --fetcher_cache_dir="/mnt/teamcity/temp/buildTmp/SlaveTest_KillTaskGroupBetweenRunTaskParts_dv7FC5/fetch"
> --fetcher_cache_size="2GB" --frameworks_home="" --gc_delay="1weeks"
> --gc_disk_headroom="0.1" --hadoop_home="" --help="false"
> --hostname_lookup="true" --http_authenticators="basic"
> --http_command_executor="false"
> --http_credentials="/mnt/teamcity/temp/buildTmp/SlaveTest_KillTaskGroupBetweenRunTaskParts_dv7FC5/http_credentials"
> --image_provisioner_backend="copy" --initialize_driver_logging="true"
> --isolation="posix/cpu,posix/mem" --launcher="linux"
> --launcher_dir="/mnt/teamcity/work/4240ba9ddd0997c3/build/src"
> --logbufsecs="0" --logging_level="INFO"
> --oversubscribed_resources_interval="15secs" --perf_duration="10secs"
> --perf_interval="1mins" --qos_correction_interval_min="0ns" --quiet="false"
> --recover="reconnect" --recovery_timeout="15mins"
> --registration_backoff_factor="10ms"
> --resources="cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]"
> --revocable_cpu_low_priority="true"
> --runtime_dir="/mnt/teamcity/temp/buildTmp/SlaveTest_KillTaskGroupBetweenRunTaskParts_dv7FC5"
> --sandbox_directory="/mnt/mesos/sandbox" --strict="true"
> --switch_user="true" --systemd_enable_support="true"
> --systemd_runtime_directory="/run/systemd/system" --version="false"
> --work_dir="/mnt/teamcity/temp/buildTmp/SlaveTest_KillTaskGroupBetweenRunTaskParts_UtMqTW"
> --xfs_project_range="[5000-10000]"
> I1007 14:12:58.010781 28645 credentials.hpp:86] Loading credential for
> authentication from
> '/mnt/teamcity/temp/buildTmp/SlaveTest_KillTaskGroupBetweenRunTaskParts_dv7FC5/credential'
> I1007 14:12:58.010834 28645 slave.cpp:346] Agent using credential for:
> test-principal
> I1007 14:12:58.010850 28645 credentials.hpp:37] Loading credentials for
> authentication from
> '/mnt/teamcity/temp/buildTmp/SlaveTest_KillTaskGroupBetweenRunTaskParts_dv7FC5/http_credentials'
> I1007 14:12:58.010880 28644 scheduler.cpp:465] New master detected at
> [email protected]:38776
> I1007 14:12:58.010898 28644 scheduler.cpp:474] Waiting for 0ns before
> initiating a re-(connection) attempt with the master
> I1007 14:12:58.010920 28645 http.cpp:883] Using default 'basic' HTTP
> authenticator for realm 'mesos-agent-readonly'
> I1007 14:12:58.010948 28645 http.cpp:883] Using default 'basic' HTTP
> authenticator for realm 'mesos-agent-readwrite'
> I1007 14:12:58.011189 28645 slave.cpp:533] Agent resources: cpus(*):2;
> mem(*):1024; disk(*):1024; ports(*):[31000-32000]
> I1007 14:12:58.011216 28645 slave.cpp:541] Agent attributes: [ ]
> I1007 14:12:58.011221 28645 slave.cpp:546] Agent hostname:
> ip-172-30-2-161.ec2.internal.mesosphere.io
> I1007 14:12:58.011471 28651 state.cpp:57] Recovering state from
> '/mnt/teamcity/temp/buildTmp/SlaveTest_KillTaskGroupBetweenRunTaskParts_UtMqTW/meta'
> I1007 14:12:58.011592 28644 status_update_manager.cpp:203] Recovering status
> update manager
> I1007 14:12:58.011668 28644 slave.cpp:5256] Finished recovery
> I1007 14:12:58.011844 28644 slave.cpp:5428] Querying resource estimator for
> oversubscribable resources
> I1007 14:12:58.011942 28644 slave.cpp:915] New master detected at
> [email protected]:38776
> I1007 14:12:58.011958 28651 status_update_manager.cpp:177] Pausing sending
> status updates
> I1007 14:12:58.011960 28644 slave.cpp:974] Authenticating with master
> [email protected]:38776
> I1007 14:12:58.011991 28644 slave.cpp:985] Using default CRAM-MD5
> authenticatee
> I1007 14:12:58.012025 28644 slave.cpp:947] Detecting new master
> I1007 14:12:58.012051 28649 authenticatee.cpp:121] Creating new client SASL
> connection
> I1007 14:12:58.012137 28651 master.cpp:6634] Authenticating
> (401)@172.30.2.161:38776
> I1007 14:12:58.012226 28647 authenticator.cpp:414] Starting authentication
> session for crammd5-authenticatee(991)@172.30.2.161:38776
> I1007 14:12:58.012310 28651 authenticator.cpp:98] Creating new server SASL
> connection
> I1007 14:12:58.012409 28651 authenticatee.cpp:213] Received SASL
> authentication mechanisms: CRAM-MD5
> I1007 14:12:58.012428 28651 authenticatee.cpp:239] Attempting to authenticate
> with mechanism 'CRAM-MD5'
> I1007 14:12:58.012471 28651 authenticator.cpp:204] Received SASL
> authentication start
> I1007 14:12:58.012507 28651 authenticator.cpp:326] Authentication requires
> more steps
> I1007 14:12:58.012538 28648 scheduler.cpp:353] Connected with the master at
> http://172.30.2.161:38776/master/api/v1/scheduler
> I1007 14:12:58.012552 28647 authenticatee.cpp:259] Received SASL
> authentication step
> I1007 14:12:58.012673 28651 authenticator.cpp:232] Received SASL
> authentication step
> I1007 14:12:58.012693 28651 auxprop.cpp:109] Request to lookup properties for
> user: 'test-principal' realm: 'ip-172-30-2-161.ec2.internal' server FQDN:
> 'ip-172-30-2-161.ec2.internal' SASL_AUXPROP_VERIFY_AGAINST_HASH: false
> SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false
> I1007 14:12:58.012701 28651 auxprop.cpp:181] Looking up auxiliary property
> '*userPassword'
> I1007 14:12:58.012712 28651 auxprop.cpp:181] Looking up auxiliary property
> '*cmusaslsecretCRAM-MD5'
> I1007 14:12:58.012722 28651 auxprop.cpp:109] Request to lookup properties for
> user: 'test-principal' realm: 'ip-172-30-2-161.ec2.internal' server FQDN:
> 'ip-172-30-2-161.ec2.internal' SASL_AUXPROP_VERIFY_AGAINST_HASH: false
> SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true
> I1007 14:12:58.012729 28651 auxprop.cpp:131] Skipping auxiliary property
> '*userPassword' since SASL_AUXPROP_AUTHZID == true
> I1007 14:12:58.012737 28651 auxprop.cpp:131] Skipping auxiliary property
> '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true
> I1007 14:12:58.012749 28651 authenticator.cpp:318] Authentication success
> I1007 14:12:58.012830 28648 authenticatee.cpp:299] Authentication success
> I1007 14:12:58.012850 28645 master.cpp:6664] Successfully authenticated
> principal 'test-principal' at (401)@172.30.2.161:38776
> I1007 14:12:58.012876 28646 authenticator.cpp:432] Authentication session
> cleanup for crammd5-authenticatee(991)@172.30.2.161:38776
> I1007 14:12:58.012972 28648 slave.cpp:1069] Successfully authenticated with
> master [email protected]:38776
> I1007 14:12:58.013017 28651 scheduler.cpp:235] Sending SUBSCRIBE call to
> http://172.30.2.161:38776/master/api/v1/scheduler
> I1007 14:12:58.013093 28648 slave.cpp:1475] Will retry registration in
> 4.168895ms if necessary
> I1007 14:12:58.013185 28649 master.cpp:5074] Registering agent at
> (401)@172.30.2.161:38776 (ip-172-30-2-161.ec2.internal.mesosphere.io) with id
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-S0
> I1007 14:12:58.013326 28649 process.cpp:3377] Handling HTTP event for process
> 'master' with path: '/master/api/v1/scheduler'
> I1007 14:12:58.013386 28644 registrar.cpp:461] Applied 1 operations in
> 14823ns; attempting to update the registry
> I1007 14:12:58.013556 28651 log.cpp:577] Attempting to append 434 bytes to
> the log
> I1007 14:12:58.013615 28651 coordinator.cpp:348] Coordinator attempting to
> write APPEND action at position 3
> I1007 14:12:58.013741 28650 http.cpp:382] HTTP POST for
> /master/api/v1/scheduler from 172.30.2.161:48518
> I1007 14:12:58.013804 28650 master.cpp:2309] Received subscription request
> for HTTP framework 'default'
> I1007 14:12:58.013823 28650 master.cpp:2049] Authorizing framework principal
> 'test-principal' to receive offers for role '*'
> I1007 14:12:58.013964 28648 master.cpp:2407] Subscribing framework 'default'
> with checkpointing disabled and capabilities [ ]
> I1007 14:12:58.013985 28649 replica.cpp:537] Replica received write request
> for position 3 from __req_res__(6241)@172.30.2.161:38776
> I1007 14:12:58.014142 28648 master.hpp:2162] Sending heartbeat to
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000
> I1007 14:12:58.014166 28644 hierarchical.cpp:275] Added framework
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000
> I1007 14:12:58.014189 28644 hierarchical.cpp:1694] No allocations performed
> I1007 14:12:58.014200 28644 hierarchical.cpp:1789] No inverse offers to send
> out!
> I1007 14:12:58.014214 28644 hierarchical.cpp:1286] Performed allocation for 0
> agents in 31808ns
> I1007 14:12:58.014370 28645 scheduler.cpp:671] Enqueuing event SUBSCRIBED
> received from http://172.30.2.161:38776/master/api/v1/scheduler
> I1007 14:12:58.014456 28649 leveldb.cpp:341] Persisting action (453 bytes) to
> leveldb took 448489ns
> I1007 14:12:58.014470 28649 replica.cpp:708] Persisted action APPEND at
> position 3
> I1007 14:12:58.014523 28648 scheduler.cpp:671] Enqueuing event HEARTBEAT
> received from http://172.30.2.161:38776/master/api/v1/scheduler
> I1007 14:12:58.014883 28645 replica.cpp:691] Replica received learned notice
> for position 3 from @0.0.0.0:0
> I1007 14:12:58.016865 28645 leveldb.cpp:341] Persisting action (455 bytes) to
> leveldb took 1.963645ms
> I1007 14:12:58.016880 28645 replica.cpp:708] Persisted action APPEND at
> position 3
> I1007 14:12:58.017127 28644 registrar.cpp:506] Successfully updated the
> registry in 3.705088ms
> I1007 14:12:58.017184 28648 log.cpp:596] Attempting to truncate the log to 3
> I1007 14:12:58.017242 28651 coordinator.cpp:348] Coordinator attempting to
> write TRUNCATE action at position 4
> I1007 14:12:58.017367 28645 master.cpp:5145] Registered agent
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-S0 at (401)@172.30.2.161:38776
> (ip-172-30-2-161.ec2.internal.mesosphere.io) with cpus(*):2; mem(*):1024;
> disk(*):1024; ports(*):[31000-32000]
> I1007 14:12:58.017423 28644 slave.cpp:4108] Received ping from
> slave-observer(454)@172.30.2.161:38776
> I1007 14:12:58.017436 28650 hierarchical.cpp:485] Added agent
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-S0
> (ip-172-30-2-161.ec2.internal.mesosphere.io) with cpus(*):2; mem(*):1024;
> disk(*):1024; ports(*):[31000-32000] (allocated: {})
> I1007 14:12:58.017483 28644 slave.cpp:1115] Registered with master
> [email protected]:38776; given agent ID
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-S0
> I1007 14:12:58.017501 28644 fetcher.cpp:86] Clearing fetcher cache
> I1007 14:12:58.017575 28651 status_update_manager.cpp:184] Resuming sending
> status updates
> I1007 14:12:58.017680 28650 hierarchical.cpp:1789] No inverse offers to send
> out!
> I1007 14:12:58.017709 28650 hierarchical.cpp:1309] Performed allocation for
> agent 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-S0 in 246743ns
> I1007 14:12:58.017750 28644 slave.cpp:1138] Checkpointing SlaveInfo to
> '/mnt/teamcity/temp/buildTmp/SlaveTest_KillTaskGroupBetweenRunTaskParts_UtMqTW/meta/slaves/76d4d55f-dcc6-4033-85d9-7ec97ef353cb-S0/slave.info'
> I1007 14:12:58.017774 28645 master.cpp:6463] Sending 1 offers to framework
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000 (default)
> I1007 14:12:58.017982 28644 replica.cpp:537] Replica received write request
> for position 4 from __req_res__(6242)@172.30.2.161:38776
> I1007 14:12:58.018200 28645 scheduler.cpp:671] Enqueuing event OFFERS
> received from http://172.30.2.161:38776/master/api/v1/scheduler
> I1007 14:12:58.018880 28646 scheduler.cpp:235] Sending ACCEPT call to
> http://172.30.2.161:38776/master/api/v1/scheduler
> I1007 14:12:58.019086 28644 leveldb.cpp:341] Persisting action (16 bytes) to
> leveldb took 1.082078ms
> I1007 14:12:58.019104 28644 replica.cpp:708] Persisted action TRUNCATE at
> position 4
> I1007 14:12:58.019212 28645 process.cpp:3377] Handling HTTP event for process
> 'master' with path: '/master/api/v1/scheduler'
> I1007 14:12:58.019390 28645 replica.cpp:691] Replica received learned notice
> for position 4 from @0.0.0.0:0
> I1007 14:12:58.019577 28647 http.cpp:382] HTTP POST for
> /master/api/v1/scheduler from 172.30.2.161:48516
> I1007 14:12:58.019711 28647 master.cpp:3521] Processing ACCEPT call for
> offers: [ 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-O0 ] on agent
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-S0 at (401)@172.30.2.161:38776
> (ip-172-30-2-161.ec2.internal.mesosphere.io) for framework
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000 (default)
> I1007 14:12:58.019737 28647 master.cpp:3143] Authorizing framework principal
> 'test-principal' to launch task ed09b29b-d149-46c1-a31d-f8fe3fdf214e
> I1007 14:12:58.019785 28647 master.cpp:3143] Authorizing framework principal
> 'test-principal' to launch task 9ce218ba-84b7-4ea4-a560-b30472f4a22c
> I1007 14:12:58.020279 28647 master.cpp:8214] Adding task
> ed09b29b-d149-46c1-a31d-f8fe3fdf214e with resources cpus(*):0.1; mem(*):32;
> disk(*):32 on agent 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-S0
> (ip-172-30-2-161.ec2.internal.mesosphere.io)
> I1007 14:12:58.020342 28647 master.cpp:8214] Adding task
> 9ce218ba-84b7-4ea4-a560-b30472f4a22c with resources cpus(*):0.1; mem(*):32;
> disk(*):32 on agent 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-S0
> (ip-172-30-2-161.ec2.internal.mesosphere.io)
> I1007 14:12:58.020375 28647 master.cpp:4358] Launching task group {
> 9ce218ba-84b7-4ea4-a560-b30472f4a22c, ed09b29b-d149-46c1-a31d-f8fe3fdf214e }
> of framework 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000 (default) with
> resources cpus(*):0.2; mem(*):64; disk(*):64 on agent
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-S0 at (401)@172.30.2.161:38776
> (ip-172-30-2-161.ec2.internal.mesosphere.io)
> I1007 14:12:58.020480 28648 slave.cpp:1539] Got assigned task group
> containing tasks [ ed09b29b-d149-46c1-a31d-f8fe3fdf214e,
> 9ce218ba-84b7-4ea4-a560-b30472f4a22c ] for framework
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000
> I1007 14:12:58.020515 28650 hierarchical.cpp:1018] Recovered cpus(*):1.8;
> mem(*):960; disk(*):960; ports(*):[31000-32000] (total: cpus(*):2;
> mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):0.2;
> mem(*):64; disk(*):64) on agent 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-S0 from
> framework 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000
> I1007 14:12:58.020541 28650 hierarchical.cpp:1055] Framework
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000 filtered agent
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-S0 for 5secs
> I1007 14:12:58.020830 28644 scheduler.cpp:235] Sending KILL call to
> http://172.30.2.161:38776/master/api/v1/scheduler
> I1007 14:12:58.021075 28651 process.cpp:3377] Handling HTTP event for process
> 'master' with path: '/master/api/v1/scheduler'
> I1007 14:12:58.021347 28648 http.cpp:382] HTTP POST for
> /master/api/v1/scheduler from 172.30.2.161:48516
> I1007 14:12:58.021399 28648 master.cpp:4682] Telling agent
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-S0 at (401)@172.30.2.161:38776
> (ip-172-30-2-161.ec2.internal.mesosphere.io) to kill task
> ed09b29b-d149-46c1-a31d-f8fe3fdf214e of framework
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000 (default)
> I1007 14:12:58.021473 28646 slave.cpp:2283] Asked to kill task
> ed09b29b-d149-46c1-a31d-f8fe3fdf214e of framework
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000
> W1007 14:12:58.021491 28646 slave.cpp:2324] Killing task
> ed09b29b-d149-46c1-a31d-f8fe3fdf214e of framework
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000 before it was launched
> I1007 14:12:58.021483 28645 leveldb.cpp:341] Persisting action (18 bytes) to
> leveldb took 2.0668ms
> I1007 14:12:58.021524 28645 leveldb.cpp:399] Deleting ~2 keys from leveldb
> took 17427ns
> I1007 14:12:58.021539 28645 replica.cpp:708] Persisted action TRUNCATE at
> position 4
> I1007 14:12:58.021720 28630 slave.cpp:1696] Launching task group containing
> tasks [ ed09b29b-d149-46c1-a31d-f8fe3fdf214e,
> 9ce218ba-84b7-4ea4-a560-b30472f4a22c ] for framework
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000
> W1007 14:12:58.021749 28630 slave.cpp:1736] Ignoring running task group
> containing tasks [ ed09b29b-d149-46c1-a31d-f8fe3fdf214e,
> 9ce218ba-84b7-4ea4-a560-b30472f4a22c ] of framework
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000 because it has been killed in the
> meantime
> I1007 14:12:58.021777 28630 slave.cpp:3609] Handling status update
> TASK_KILLED (UUID: b28bbec8-5746-4f3a-bf53-2e4f1a78b4cb) for task
> ed09b29b-d149-46c1-a31d-f8fe3fdf214e of framework
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000 from @0.0.0.0:0
> W1007 14:12:58.021800 28630 slave.cpp:3705] Could not find the executor for
> status update TASK_KILLED (UUID: b28bbec8-5746-4f3a-bf53-2e4f1a78b4cb) for
> task ed09b29b-d149-46c1-a31d-f8fe3fdf214e of framework
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000
> I1007 14:12:58.021888 28630 slave.cpp:3609] Handling status update
> TASK_KILLED (UUID: a768cdc6-4d09-467c-8a66-f0117744a8dd) for task
> 9ce218ba-84b7-4ea4-a560-b30472f4a22c of framework
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000 from @0.0.0.0:0
> I1007 14:12:58.021890 28647 status_update_manager.cpp:323] Received status
> update TASK_KILLED (UUID: b28bbec8-5746-4f3a-bf53-2e4f1a78b4cb) for task
> ed09b29b-d149-46c1-a31d-f8fe3fdf214e of framework
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000
> I1007 14:12:58.021919 28647 status_update_manager.cpp:500] Creating
> StatusUpdate stream for task ed09b29b-d149-46c1-a31d-f8fe3fdf214e of
> framework 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000
> W1007 14:12:58.021913 28630 slave.cpp:3705] Could not find the executor for
> status update TASK_KILLED (UUID: a768cdc6-4d09-467c-8a66-f0117744a8dd) for
> task 9ce218ba-84b7-4ea4-a560-b30472f4a22c of framework
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000
> I1007 14:12:58.021981 28630 slave.cpp:4709] Cleaning up framework
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000
> I1007 14:12:58.021986 28647 status_update_manager.cpp:377] Forwarding update
> TASK_KILLED (UUID: b28bbec8-5746-4f3a-bf53-2e4f1a78b4cb) for task
> ed09b29b-d149-46c1-a31d-f8fe3fdf214e of framework
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000 to the agent
> E1007 14:12:58.022033 28630 slave.cpp:5408] Failed to find the mtime of
> '/mnt/teamcity/temp/buildTmp/SlaveTest_KillTaskGroupBetweenRunTaskParts_UtMqTW/slaves/76d4d55f-dcc6-4033-85d9-7ec97ef353cb-S0/frameworks/76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000':
> Error invoking stat for
> '/mnt/teamcity/temp/buildTmp/SlaveTest_KillTaskGroupBetweenRunTaskParts_UtMqTW/slaves/76d4d55f-dcc6-4033-85d9-7ec97ef353cb-S0/frameworks/76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000':
> No such file or directory
> I1007 14:12:58.022058 28651 slave.cpp:4026] Forwarding the update TASK_KILLED
> (UUID: b28bbec8-5746-4f3a-bf53-2e4f1a78b4cb) for task
> ed09b29b-d149-46c1-a31d-f8fe3fdf214e of framework
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000 to [email protected]:38776
> I1007 14:12:58.022063 28647 status_update_manager.cpp:323] Received status
> update TASK_KILLED (UUID: a768cdc6-4d09-467c-8a66-f0117744a8dd) for task
> 9ce218ba-84b7-4ea4-a560-b30472f4a22c of framework
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000
> I1007 14:12:58.022099 28647 status_update_manager.cpp:500] Creating
> StatusUpdate stream for task 9ce218ba-84b7-4ea4-a560-b30472f4a22c of
> framework 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000
> I1007 14:12:58.022140 28651 slave.cpp:3920] Status update manager
> successfully handled status update TASK_KILLED (UUID:
> b28bbec8-5746-4f3a-bf53-2e4f1a78b4cb) for task
> ed09b29b-d149-46c1-a31d-f8fe3fdf214e of framework
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000
> I1007 14:12:58.022178 28647 status_update_manager.cpp:377] Forwarding update
> TASK_KILLED (UUID: a768cdc6-4d09-467c-8a66-f0117744a8dd) for task
> 9ce218ba-84b7-4ea4-a560-b30472f4a22c of framework
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000 to the agent
> I1007 14:12:58.022231 28646 master.cpp:5672] Status update TASK_KILLED (UUID:
> b28bbec8-5746-4f3a-bf53-2e4f1a78b4cb) for task
> ed09b29b-d149-46c1-a31d-f8fe3fdf214e of framework
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000 from agent
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-S0 at (401)@172.30.2.161:38776
> (ip-172-30-2-161.ec2.internal.mesosphere.io)
> I1007 14:12:58.022239 28651 slave.cpp:4026] Forwarding the update TASK_KILLED
> (UUID: a768cdc6-4d09-467c-8a66-f0117744a8dd) for task
> 9ce218ba-84b7-4ea4-a560-b30472f4a22c of framework
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000 to [email protected]:38776
> I1007 14:12:58.022256 28646 master.cpp:5734] Forwarding status update
> TASK_KILLED (UUID: b28bbec8-5746-4f3a-bf53-2e4f1a78b4cb) for task
> ed09b29b-d149-46c1-a31d-f8fe3fdf214e of framework
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000
> I1007 14:12:58.022279 28647 status_update_manager.cpp:285] Closing status
> update streams for framework 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000
> I1007 14:12:58.022294 28647 status_update_manager.cpp:531] Cleaning up status
> update stream for task 9ce218ba-84b7-4ea4-a560-b30472f4a22c of framework
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000
> I1007 14:12:58.022301 28651 slave.cpp:3920] Status update manager
> successfully handled status update TASK_KILLED (UUID:
> a768cdc6-4d09-467c-8a66-f0117744a8dd) for task
> 9ce218ba-84b7-4ea4-a560-b30472f4a22c of framework
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000
> I1007 14:12:58.022330 28647 status_update_manager.cpp:531] Cleaning up status
> update stream for task ed09b29b-d149-46c1-a31d-f8fe3fdf214e of framework
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000
> I1007 14:12:58.022347 28646 master.cpp:7592] Updating the state of task
> ed09b29b-d149-46c1-a31d-f8fe3fdf214e of framework
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000 (latest state: TASK_KILLED, status
> update state: TASK_KILLED)
> I1007 14:12:58.022449 28646 master.cpp:5672] Status update TASK_KILLED (UUID:
> a768cdc6-4d09-467c-8a66-f0117744a8dd) for task
> 9ce218ba-84b7-4ea4-a560-b30472f4a22c of framework
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000 from agent
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-S0 at (401)@172.30.2.161:38776
> (ip-172-30-2-161.ec2.internal.mesosphere.io)
> I1007 14:12:58.022464 28646 master.cpp:5734] Forwarding status update
> TASK_KILLED (UUID: a768cdc6-4d09-467c-8a66-f0117744a8dd) for task
> 9ce218ba-84b7-4ea4-a560-b30472f4a22c of framework
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000
> I1007 14:12:58.022461 28644 hierarchical.cpp:1018] Recovered cpus(*):0.1;
> mem(*):32; disk(*):32 (total: cpus(*):2; mem(*):1024; disk(*):1024;
> ports(*):[31000-32000], allocated: cpus(*):0.1; mem(*):32; disk(*):32) on
> agent 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-S0 from framework
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000
> I1007 14:12:58.022600 28650 scheduler.cpp:671] Enqueuing event UPDATE
> received from http://172.30.2.161:38776/master/api/v1/scheduler
> I1007 14:12:58.022609 28646 master.cpp:7592] Updating the state of task
> 9ce218ba-84b7-4ea4-a560-b30472f4a22c of framework
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000 (latest state: TASK_KILLED, status
> update state: TASK_KILLED)
> I1007 14:12:58.022825 28644 hierarchical.cpp:1018] Recovered cpus(*):0.1;
> mem(*):32; disk(*):32 (total: cpus(*):2; mem(*):1024; disk(*):1024;
> ports(*):[31000-32000], allocated: {}) on agent
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-S0 from framework
> 76d4d55f-dcc6-4033-85d9-7ec97ef353cb-0000
> I1007 14:12:58.022981 28645 scheduler.cpp:671] Enqueuing event UPDATE
> received from http://172.30.2.161:38776/master/api/v1/scheduler
> I1007 14:12:58.023365 28651 slave.cpp:787] Agent terminating
> *** Aborted at 1475849578 (unix time) try "date -d @1475849578" if you are
> using GNU date ***
> PC: @ 0x7fdbf26dc95f mesos::internal::slave::Slave::finalize()
> *** SIGSEGV (@0x88) received by PID 28630 (TID 0x7fdbe5910700) from PID 136;
> stack trace: ***
> @ 0x7fdbdef6fbb2 (unknown)
> @ 0x7fdbdef74009 (unknown)
> @ 0x7fdbdef67f08 (unknown)
> @ 0x7fdbf14bc9f0 (unknown)
> @ 0x7fdbf26dc95f mesos::internal::slave::Slave::finalize()
> @ 0x7fdbf2ec461b process::ProcessManager::resume()
> @ 0x7fdbf2ec7cb7
> _ZNSt6thread5_ImplISt12_Bind_simpleIFZN7process14ProcessManager12init_threadsEvEUt_vEEE6_M_runEv
> @ 0x7fdbf0dd6f20 (unknown)
> @ 0x7fdbf14b360a start_thread
> @ 0x7fdbf054659d __clone
> {code}
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)