[
https://issues.apache.org/jira/browse/MESOS-5846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376146#comment-15376146
]
haosdent commented on MESOS-5846:
---------------------------------
It is because we didn't await {{Slave::_statusUpdateAcknowledgement}}, which is
what completes the task (so {{GET_STATE}} can report it under {{completed_tasks}}).
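A rough sketch of the missing synchronization (assuming the usual
{{FUTURE_DISPATCH}}/{{AWAIT_READY}} test helpers and the {{slave.get()->pid}}
handle used elsewhere in {{api_tests.cpp}}; names here are illustrative, not the
exact patch):
{code}
// Intercept the agent-side dispatch that processes the status update
// acknowledgement; this is the step that finally completes the task,
// so GET_STATE can report it under completed_tasks.
Future<Nothing> _statusUpdateAcknowledgement =
  FUTURE_DISPATCH(slave.get()->pid, &Slave::_statusUpdateAcknowledgement);

// ... kill the task and let the terminal (TASK_KILLED) update flow ...

// Without this wait, the GET_STATE request can race with the
// acknowledgement and observe completed_tasks_size() == 0.
AWAIT_READY(_statusUpdateAcknowledgement);
{code}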
> AgentAPITest.GetState is flaky
> ------------------------------
>
> Key: MESOS-5846
> URL: https://issues.apache.org/jira/browse/MESOS-5846
> Project: Mesos
> Issue Type: Bug
> Reporter: haosdent
> Assignee: haosdent
>
> {code}
> [ RUN ] ContentType/AgentAPITest.GetState/0
> I0713 23:08:15.086659 29194 cluster.cpp:155] Creating default 'local'
> authorizer
> I0713 23:08:15.089359 29194 leveldb.cpp:174] Opened db in 2.226323ms
> I0713 23:08:15.090400 29194 leveldb.cpp:181] Compacted db in 1.006302ms
> I0713 23:08:15.090451 29194 leveldb.cpp:196] Created db iterator in 21868ns
> I0713 23:08:15.090464 29194 leveldb.cpp:202] Seeked to beginning of db in
> 2156ns
> I0713 23:08:15.090476 29194 leveldb.cpp:271] Iterated through 0 keys in the
> db in 282ns
> I0713 23:08:15.090517 29194 replica.cpp:779] Replica recovered with log
> positions 0 -> 0 with 1 holes and 0 unlearned
> I0713 23:08:15.091065 29227 recover.cpp:451] Starting replica recovery
> I0713 23:08:15.091529 29228 recover.cpp:477] Replica is in EMPTY status
> I0713 23:08:15.093382 29213 replica.cpp:673] Replica in EMPTY status received
> a broadcasted recover request from (17450)@172.17.0.1:60353
> I0713 23:08:15.093865 29214 recover.cpp:197] Received a recover response from
> a replica in EMPTY status
> I0713 23:08:15.094329 29224 recover.cpp:568] Updating replica status to
> STARTING
> I0713 23:08:15.094871 29220 master.cpp:382] Master
> 9c60155f-827f-467b-b942-140175e8e762 (eb7794503d5d) started on
> 172.17.0.1:60353
> I0713 23:08:15.094894 29220 master.cpp:384] Flags at startup: --acls=""
> --agent_ping_timeout="15secs" --agent_reregister_timeout="10mins"
> --allocation_interval="1secs" --allocator="HierarchicalDRF"
> --authenticate_agents="true" --authenticate_frameworks="true"
> --authenticate_http="true" --authenticate_http_frameworks="true"
> --authenticators="crammd5" --authorizers="local"
> --credentials="/tmp/qrC2wH/credentials" --framework_sorter="drf"
> --help="false" --hostname_lookup="true" --http_authenticators="basic"
> --http_framework_authenticators="basic" --initialize_driver_logging="true"
> --log_auto_initialize="true" --logbufsecs="0" --logging_level="INFO"
> --max_agent_ping_timeouts="5" --max_completed_frameworks="50"
> --max_completed_tasks_per_framework="1000" --quiet="false"
> --recovery_agent_removal_limit="100%" --registry="replicated_log"
> --registry_fetch_timeout="1mins" --registry_store_timeout="100secs"
> --registry_strict="true" --root_submissions="true" --user_sorter="drf"
> --version="false" --webui_dir="/mesos/mesos-1.1.0/_inst/share/mesos/webui"
> --work_dir="/tmp/qrC2wH/master" --zk_session_timeout="10secs"
> I0713 23:08:15.095330 29220 master.cpp:434] Master only allowing
> authenticated frameworks to register
> I0713 23:08:15.095351 29220 master.cpp:448] Master only allowing
> authenticated agents to register
> I0713 23:08:15.095360 29220 master.cpp:461] Master only allowing
> authenticated HTTP frameworks to register
> I0713 23:08:15.095372 29220 credentials.hpp:37] Loading credentials for
> authentication from '/tmp/qrC2wH/credentials'
> I0713 23:08:15.095726 29220 master.cpp:506] Using default 'crammd5'
> authenticator
> I0713 23:08:15.095860 29227 leveldb.cpp:304] Persisting metadata (8 bytes) to
> leveldb took 1.418237ms
> I0713 23:08:15.095895 29227 replica.cpp:320] Persisted replica status to
> STARTING
> I0713 23:08:15.096101 29220 master.cpp:578] Using default 'basic' HTTP
> authenticator
> I0713 23:08:15.096176 29216 recover.cpp:477] Replica is in STARTING status
> I0713 23:08:15.096374 29220 master.cpp:658] Using default 'basic' HTTP
> framework authenticator
> I0713 23:08:15.096513 29220 master.cpp:705] Authorization enabled
> I0713 23:08:15.097069 29216 hierarchical.cpp:151] Initialized hierarchical
> allocator process
> I0713 23:08:15.097076 29222 whitelist_watcher.cpp:77] No whitelist given
> I0713 23:08:15.097703 29228 replica.cpp:673] Replica in STARTING status
> received a broadcasted recover request from (17453)@172.17.0.1:60353
> I0713 23:08:15.098477 29213 recover.cpp:197] Received a recover response from
> a replica in STARTING status
> I0713 23:08:15.099001 29218 recover.cpp:568] Updating replica status to VOTING
> I0713 23:08:15.099570 29213 leveldb.cpp:304] Persisting metadata (8 bytes) to
> leveldb took 431040ns
> I0713 23:08:15.099871 29213 replica.cpp:320] Persisted replica status to
> VOTING
> I0713 23:08:15.100307 29219 recover.cpp:582] Successfully joined the Paxos
> group
> I0713 23:08:15.100581 29219 recover.cpp:466] Recover process terminated
> I0713 23:08:15.102016 29221 master.cpp:1973] The newly elected leader is
> [email protected]:60353 with id 9c60155f-827f-467b-b942-140175e8e762
> I0713 23:08:15.102056 29221 master.cpp:1986] Elected as the leading master!
> I0713 23:08:15.102084 29221 master.cpp:1673] Recovering from registrar
> I0713 23:08:15.102402 29222 registrar.cpp:332] Recovering registrar
> I0713 23:08:15.103243 29225 log.cpp:553] Attempting to start the writer
> I0713 23:08:15.104910 29214 replica.cpp:493] Replica received implicit
> promise request from (17454)@172.17.0.1:60353 with proposal 1
> I0713 23:08:15.105387 29214 leveldb.cpp:304] Persisting metadata (8 bytes) to
> leveldb took 424771ns
> I0713 23:08:15.105417 29214 replica.cpp:342] Persisted promised to 1
> I0713 23:08:15.106231 29214 coordinator.cpp:238] Coordinator attempting to
> fill missing positions
> I0713 23:08:15.107729 29215 replica.cpp:388] Replica received explicit
> promise request from (17455)@172.17.0.1:60353 for position 0 with proposal 2
> I0713 23:08:15.108275 29215 leveldb.cpp:341] Persisting action (8 bytes) to
> leveldb took 488626ns
> I0713 23:08:15.108304 29215 replica.cpp:712] Persisted action at 0
> I0713 23:08:15.109632 29215 replica.cpp:537] Replica received write request
> for position 0 from (17456)@172.17.0.1:60353
> I0713 23:08:15.109717 29215 leveldb.cpp:436] Reading position from leveldb
> took 43865ns
> I0713 23:08:15.110244 29215 leveldb.cpp:341] Persisting action (14 bytes) to
> leveldb took 470370ns
> I0713 23:08:15.110273 29215 replica.cpp:712] Persisted action at 0
> I0713 23:08:15.111032 29213 replica.cpp:691] Replica received learned notice
> for position 0 from @0.0.0.0:0
> I0713 23:08:15.111372 29213 leveldb.cpp:341] Persisting action (16 bytes) to
> leveldb took 306210ns
> I0713 23:08:15.111399 29213 replica.cpp:712] Persisted action at 0
> I0713 23:08:15.111431 29213 replica.cpp:697] Replica learned NOP action at
> position 0
> I0713 23:08:15.112262 29226 log.cpp:569] Writer started with ending position 0
> I0713 23:08:15.113658 29221 leveldb.cpp:436] Reading position from leveldb
> took 66478ns
> I0713 23:08:15.115200 29216 registrar.cpp:365] Successfully fetched the
> registry (0B) in 12.733952ms
> I0713 23:08:15.115345 29216 registrar.cpp:464] Applied 1 operations in
> 32558ns; attempting to update the 'registry'
> I0713 23:08:15.116132 29226 log.cpp:577] Attempting to append 168 bytes to
> the log
> I0713 23:08:15.116292 29225 coordinator.cpp:348] Coordinator attempting to
> write APPEND action at position 1
> I0713 23:08:15.117187 29223 replica.cpp:537] Replica received write request
> for position 1 from (17457)@172.17.0.1:60353
> I0713 23:08:15.117698 29223 leveldb.cpp:341] Persisting action (187 bytes) to
> leveldb took 463187ns
> I0713 23:08:15.117727 29223 replica.cpp:712] Persisted action at 1
> I0713 23:08:15.118455 29226 replica.cpp:691] Replica received learned notice
> for position 1 from @0.0.0.0:0
> I0713 23:08:15.118860 29226 leveldb.cpp:341] Persisting action (189 bytes) to
> leveldb took 374465ns
> I0713 23:08:15.118886 29226 replica.cpp:712] Persisted action at 1
> I0713 23:08:15.118912 29226 replica.cpp:697] Replica learned APPEND action at
> position 1
> I0713 23:08:15.120184 29216 registrar.cpp:509] Successfully updated the
> 'registry' in 4.75392ms
> I0713 23:08:15.120375 29216 registrar.cpp:395] Successfully recovered
> registrar
> I0713 23:08:15.120494 29226 log.cpp:596] Attempting to truncate the log to 1
> I0713 23:08:15.120668 29216 coordinator.cpp:348] Coordinator attempting to
> write TRUNCATE action at position 2
> I0713 23:08:15.120868 29226 master.cpp:1781] Recovered 0 agents from the
> Registry (129B) ; allowing 10mins for agents to re-register
> I0713 23:08:15.121124 29223 hierarchical.cpp:178] Skipping recovery of
> hierarchical allocator: nothing to recover
> I0713 23:08:15.121780 29214 replica.cpp:537] Replica received write request
> for position 2 from (17458)@172.17.0.1:60353
> I0713 23:08:15.122412 29214 leveldb.cpp:341] Persisting action (16 bytes) to
> leveldb took 420592ns
> I0713 23:08:15.122576 29214 replica.cpp:712] Persisted action at 2
> I0713 23:08:15.123929 29215 replica.cpp:691] Replica received learned notice
> for position 2 from @0.0.0.0:0
> I0713 23:08:15.124474 29215 leveldb.cpp:341] Persisting action (18 bytes) to
> leveldb took 336698ns
> I0713 23:08:15.124661 29215 leveldb.cpp:399] Deleting ~1 keys from leveldb
> took 33640ns
> I0713 23:08:15.124810 29215 replica.cpp:712] Persisted action at 2
> I0713 23:08:15.124977 29215 replica.cpp:697] Replica learned TRUNCATE action
> at position 2
> I0713 23:08:15.137661 29194 containerizer.cpp:196] Using isolation:
> posix/cpu,posix/mem,filesystem/posix,network/cni
> W0713 23:08:15.138980 29194 backend.cpp:75] Failed to create 'aufs' backend:
> AufsBackend requires root privileges, but is running as user mesos
> W0713 23:08:15.139297 29194 backend.cpp:75] Failed to create 'bind' backend:
> BindBackend requires root privileges
> I0713 23:08:15.141397 29194 cluster.cpp:432] Creating default 'local'
> authorizer
> I0713 23:08:15.143661 29227 slave.cpp:205] Agent started on
> slave(481)@172.17.0.1:60353
> I0713 23:08:15.143834 29227 slave.cpp:206] Flags at startup: --acls=""
> --appc_simple_discovery_uri_prefix="http://"
> --appc_store_dir="/tmp/mesos/store/appc" --authenticate_http="true"
> --authenticatee="crammd5" --authentication_backoff_factor="1secs"
> --authorizer="local" --cgroups_cpu_enable_pids_and_tids_count="false"
> --cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup"
> --cgroups_limit_swap="false" --cgroups_root="mesos"
> --container_disk_watch_interval="15secs" --containerizers="mesos"
> --credential="/tmp/ContentType_AgentAPITest_GetState_0_ZehY7n/credential"
> --default_role="*" --disk_watch_interval="1mins" --docker="docker"
> --docker_kill_orphans="true" --docker_registry="https://registry-1.docker.io"
> --docker_remove_delay="6hrs" --docker_socket="/var/run/docker.sock"
> --docker_stop_timeout="0ns" --docker_store_dir="/tmp/mesos/store/docker"
> --docker_volume_checkpoint_dir="/var/run/mesos/isolators/docker/volume"
> --enforce_container_disk_quota="false"
> --executor_registration_timeout="1mins"
> --executor_shutdown_grace_period="5secs"
> --fetcher_cache_dir="/tmp/ContentType_AgentAPITest_GetState_0_ZehY7n/fetch"
> --fetcher_cache_size="2GB" --frameworks_home="" --gc_delay="1weeks"
> --gc_disk_headroom="0.1" --hadoop_home="" --help="false"
> --hostname_lookup="true" --http_authenticators="basic"
> --http_command_executor="false"
> --http_credentials="/tmp/ContentType_AgentAPITest_GetState_0_ZehY7n/http_credentials"
> --image_provisioner_backend="copy" --initialize_driver_logging="true"
> --isolation="posix/cpu,posix/mem"
> --launcher_dir="/mesos/mesos-1.1.0/_build/src" --logbufsecs="0"
> --logging_level="INFO" --oversubscribed_resources_interval="15secs"
> --perf_duration="10secs" --perf_interval="1mins"
> --qos_correction_interval_min="0ns" --quiet="false" --recover="reconnect"
> --recovery_timeout="15mins" --registration_backoff_factor="10ms"
> --resources="cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]"
> --revocable_cpu_low_priority="true" --sandbox_directory="/mnt/mesos/sandbox"
> --strict="true" --switch_user="true" --systemd_enable_support="true"
> --systemd_runtime_directory="/run/systemd/system" --version="false"
> --work_dir="/tmp/ContentType_AgentAPITest_GetState_0_ZehY7n"
> I0713 23:08:15.144655 29227 credentials.hpp:86] Loading credential for
> authentication from
> '/tmp/ContentType_AgentAPITest_GetState_0_ZehY7n/credential'
> I0713 23:08:15.145030 29227 slave.cpp:343] Agent using credential for:
> test-principal
> I0713 23:08:15.145193 29227 credentials.hpp:37] Loading credentials for
> authentication from
> '/tmp/ContentType_AgentAPITest_GetState_0_ZehY7n/http_credentials'
> I0713 23:08:15.145596 29227 slave.cpp:395] Using default 'basic' HTTP
> authenticator
> I0713 23:08:15.146023 29227 resources.cpp:572] Parsing resources as JSON
> failed: cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]
> Trying semicolon-delimited string format instead
> I0713 23:08:15.146553 29227 resources.cpp:572] Parsing resources as JSON
> failed: cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]
> Trying semicolon-delimited string format instead
> I0713 23:08:15.147155 29227 slave.cpp:594] Agent resources: cpus(*):2;
> mem(*):1024; disk(*):1024; ports(*):[31000-32000]
> I0713 23:08:15.147367 29227 slave.cpp:602] Agent attributes: [ ]
> I0713 23:08:15.147531 29227 slave.cpp:607] Agent hostname: eb7794503d5d
> I0713 23:08:15.149729 29227 state.cpp:57] Recovering state from
> '/tmp/ContentType_AgentAPITest_GetState_0_ZehY7n/meta'
> I0713 23:08:15.150965 29215 status_update_manager.cpp:200] Recovering status
> update manager
> I0713 23:08:15.151168 29215 containerizer.cpp:522] Recovering containerizer
> I0713 23:08:15.153110 29225 provisioner.cpp:253] Provisioner recovery complete
> I0713 23:08:15.154103 29217 slave.cpp:4856] Finished recovery
> I0713 23:08:15.154707 29217 slave.cpp:5028] Querying resource estimator for
> oversubscribable resources
> I0713 23:08:15.155293 29217 slave.cpp:969] New master detected at
> [email protected]:60353
> I0713 23:08:15.155325 29217 slave.cpp:1028] Authenticating with master
> [email protected]:60353
> I0713 23:08:15.155406 29217 slave.cpp:1039] Using default CRAM-MD5
> authenticatee
> I0713 23:08:15.155585 29217 slave.cpp:1001] Detecting new master
> I0713 23:08:15.155700 29217 slave.cpp:5042] Received oversubscribable
> resources from the resource estimator
> I0713 23:08:15.155797 29217 status_update_manager.cpp:174] Pausing sending
> status updates
> I0713 23:08:15.155957 29217 authenticatee.cpp:121] Creating new client SASL
> connection
> I0713 23:08:15.156401 29217 master.cpp:6006] Authenticating
> slave(481)@172.17.0.1:60353
> I0713 23:08:15.156595 29217 authenticator.cpp:414] Starting authentication
> session for crammd5_authenticatee(949)@172.17.0.1:60353
> I0713 23:08:15.156955 29217 authenticator.cpp:98] Creating new server SASL
> connection
> I0713 23:08:15.157183 29217 authenticatee.cpp:213] Received SASL
> authentication mechanisms: CRAM-MD5
> I0713 23:08:15.157214 29217 authenticatee.cpp:239] Attempting to authenticate
> with mechanism 'CRAM-MD5'
> I0713 23:08:15.157305 29217 authenticator.cpp:204] Received SASL
> authentication start
> I0713 23:08:15.157372 29217 authenticator.cpp:326] Authentication requires
> more steps
> I0713 23:08:15.157456 29217 authenticatee.cpp:259] Received SASL
> authentication step
> I0713 23:08:15.157554 29217 authenticator.cpp:232] Received SASL
> authentication step
> I0713 23:08:15.157590 29217 auxprop.cpp:109] Request to lookup properties for
> user: 'test-principal' realm: 'eb7794503d5d' server FQDN: 'eb7794503d5d'
> SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false
> SASL_AUXPROP_AUTHZID: false
> I0713 23:08:15.157608 29217 auxprop.cpp:181] Looking up auxiliary property
> '*userPassword'
> I0713 23:08:15.157651 29217 auxprop.cpp:181] Looking up auxiliary property
> '*cmusaslsecretCRAM-MD5'
> I0713 23:08:15.157677 29217 auxprop.cpp:109] Request to lookup properties for
> user: 'test-principal' realm: 'eb7794503d5d' server FQDN: 'eb7794503d5d'
> SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false
> SASL_AUXPROP_AUTHZID: true
> I0713 23:08:15.157691 29217 auxprop.cpp:131] Skipping auxiliary property
> '*userPassword' since SASL_AUXPROP_AUTHZID == true
> I0713 23:08:15.157701 29217 auxprop.cpp:131] Skipping auxiliary property
> '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true
> I0713 23:08:15.157718 29217 authenticator.cpp:318] Authentication success
> I0713 23:08:15.157902 29220 authenticatee.cpp:299] Authentication success
> I0713 23:08:15.157938 29228 master.cpp:6036] Successfully authenticated
> principal 'test-principal' at slave(481)@172.17.0.1:60353
> I0713 23:08:15.158188 29217 authenticator.cpp:432] Authentication session
> cleanup for crammd5_authenticatee(949)@172.17.0.1:60353
> I0713 23:08:15.158593 29220 slave.cpp:1123] Successfully authenticated with
> master [email protected]:60353
> I0713 23:08:15.159567 29224 master.cpp:4676] Registering agent at
> slave(481)@172.17.0.1:60353 (eb7794503d5d) with id
> 9c60155f-827f-467b-b942-140175e8e762-S0
> I0713 23:08:15.159747 29220 slave.cpp:1529] Will retry registration in
> 13.751524ms if necessary
> I0713 23:08:15.160589 29194 sched.cpp:226] Version: 1.1.0
> I0713 23:08:15.161620 29219 sched.cpp:330] New master detected at
> [email protected]:60353
> I0713 23:08:15.161705 29219 sched.cpp:396] Authenticating with master
> [email protected]:60353
> I0713 23:08:15.161727 29219 sched.cpp:403] Using default CRAM-MD5
> authenticatee
> I0713 23:08:15.162070 29219 authenticatee.cpp:121] Creating new client SASL
> connection
> I0713 23:08:15.162706 29224 master.cpp:6006] Authenticating
> [email protected]:60353
> I0713 23:08:15.162866 29219 authenticator.cpp:414] Starting authentication
> session for crammd5_authenticatee(950)@172.17.0.1:60353
> I0713 23:08:15.163204 29220 registrar.cpp:464] Applied 1 operations in
> 75434ns; attempting to update the 'registry'
> I0713 23:08:15.163246 29219 authenticator.cpp:98] Creating new server SASL
> connection
> I0713 23:08:15.163471 29219 authenticatee.cpp:213] Received SASL
> authentication mechanisms: CRAM-MD5
> I0713 23:08:15.163498 29219 authenticatee.cpp:239] Attempting to authenticate
> with mechanism 'CRAM-MD5'
> I0713 23:08:15.164022 29220 authenticator.cpp:204] Received SASL
> authentication start
> I0713 23:08:15.164088 29220 authenticator.cpp:326] Authentication requires
> more steps
> I0713 23:08:15.164306 29219 authenticatee.cpp:259] Received SASL
> authentication step
> I0713 23:08:15.164407 29219 authenticator.cpp:232] Received SASL
> authentication step
> I0713 23:08:15.164440 29219 auxprop.cpp:109] Request to lookup properties for
> user: 'test-principal' realm: 'eb7794503d5d' server FQDN: 'eb7794503d5d'
> SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false
> SASL_AUXPROP_AUTHZID: false
> I0713 23:08:15.164443 29220 log.cpp:577] Attempting to append 337 bytes to
> the log
> I0713 23:08:15.164454 29219 auxprop.cpp:181] Looking up auxiliary property
> '*userPassword'
> I0713 23:08:15.164496 29219 auxprop.cpp:181] Looking up auxiliary property
> '*cmusaslsecretCRAM-MD5'
> I0713 23:08:15.164518 29219 auxprop.cpp:109] Request to lookup properties for
> user: 'test-principal' realm: 'eb7794503d5d' server FQDN: 'eb7794503d5d'
> SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false
> SASL_AUXPROP_AUTHZID: true
> I0713 23:08:15.164530 29219 auxprop.cpp:131] Skipping auxiliary property
> '*userPassword' since SASL_AUXPROP_AUTHZID == true
> I0713 23:08:15.164540 29219 auxprop.cpp:131] Skipping auxiliary property
> '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true
> I0713 23:08:15.164557 29219 authenticator.cpp:318] Authentication success
> I0713 23:08:15.164718 29219 authenticatee.cpp:299] Authentication success
> I0713 23:08:15.164867 29224 authenticator.cpp:432] Authentication session
> cleanup for crammd5_authenticatee(950)@172.17.0.1:60353
> I0713 23:08:15.165508 29219 master.cpp:6036] Successfully authenticated
> principal 'test-principal' at
> [email protected]:60353
> I0713 23:08:15.165163 29226 sched.cpp:502] Successfully authenticated with
> master [email protected]:60353
> I0713 23:08:15.165891 29226 sched.cpp:820] Sending SUBSCRIBE call to
> [email protected]:60353
> I0713 23:08:15.166131 29226 sched.cpp:853] Will retry registration in
> 1.538429432secs if necessary
> I0713 23:08:15.166456 29226 master.cpp:2550] Received SUBSCRIBE call for
> framework 'default' at
> [email protected]:60353
> I0713 23:08:15.166653 29226 master.cpp:2012] Authorizing framework principal
> 'test-principal' to receive offers for role '*'
> I0713 23:08:15.167270 29226 master.cpp:2626] Subscribing framework default
> with checkpointing disabled and capabilities [ ]
> I0713 23:08:15.168177 29224 hierarchical.cpp:271] Added framework
> 9c60155f-827f-467b-b942-140175e8e762-0000
> I0713 23:08:15.168391 29224 hierarchical.cpp:1537] No allocations performed
> I0713 23:08:15.168556 29224 hierarchical.cpp:1632] No inverse offers to send
> out!
> I0713 23:08:15.168756 29224 hierarchical.cpp:1172] Performed allocation for 0
> agents in 425897ns
> I0713 23:08:15.169368 29226 sched.cpp:743] Framework registered with
> 9c60155f-827f-467b-b942-140175e8e762-0000
> I0713 23:08:15.169564 29226 sched.cpp:757] Scheduler::registered took 28613ns
> I0713 23:08:15.164705 29220 coordinator.cpp:348] Coordinator attempting to
> write APPEND action at position 3
> I0713 23:08:15.171046 29223 replica.cpp:537] Replica received write request
> for position 3 from (17474)@172.17.0.1:60353
> I0713 23:08:15.171514 29223 leveldb.cpp:341] Persisting action (356 bytes) to
> leveldb took 271232ns
> I0713 23:08:15.171545 29223 replica.cpp:712] Persisted action at 3
> I0713 23:08:15.172586 29223 replica.cpp:691] Replica received learned notice
> for position 3 from @0.0.0.0:0
> I0713 23:08:15.173384 29223 leveldb.cpp:341] Persisting action (358 bytes) to
> leveldb took 769059ns
> I0713 23:08:15.173411 29223 replica.cpp:712] Persisted action at 3
> I0713 23:08:15.173435 29223 replica.cpp:697] Replica learned APPEND action at
> position 3
> I0713 23:08:15.175026 29228 slave.cpp:1529] Will retry registration in
> 679095ns if necessary
> I0713 23:08:15.175433 29228 master.cpp:4664] Ignoring register agent message
> from slave(481)@172.17.0.1:60353 (eb7794503d5d) as admission is already in
> progress
> I0713 23:08:15.175601 29221 registrar.cpp:509] Successfully updated the
> 'registry' in 12.320768ms
> I0713 23:08:15.175663 29222 log.cpp:596] Attempting to truncate the log to 3
> I0713 23:08:15.175879 29225 coordinator.cpp:348] Coordinator attempting to
> write TRUNCATE action at position 4
> I0713 23:08:15.176409 29221 slave.cpp:3760] Received ping from
> slave-observer(432)@172.17.0.1:60353
> I0713 23:08:15.176508 29222 master.cpp:4745] Registered agent
> 9c60155f-827f-467b-b942-140175e8e762-S0 at slave(481)@172.17.0.1:60353
> (eb7794503d5d) with cpus(*):2; mem(*):1024; disk(*):1024;
> ports(*):[31000-32000]
> I0713 23:08:15.176748 29221 slave.cpp:1169] Registered with master
> [email protected]:60353; given agent ID
> 9c60155f-827f-467b-b942-140175e8e762-S0
> I0713 23:08:15.176914 29221 fetcher.cpp:86] Clearing fetcher cache
> I0713 23:08:15.176796 29222 hierarchical.cpp:478] Added agent
> 9c60155f-827f-467b-b942-140175e8e762-S0 (eb7794503d5d) with cpus(*):2;
> mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: )
> I0713 23:08:15.177572 29221 slave.cpp:1192] Checkpointing SlaveInfo to
> '/tmp/ContentType_AgentAPITest_GetState_0_ZehY7n/meta/slaves/9c60155f-827f-467b-b942-140175e8e762-S0/slave.info'
> I0713 23:08:15.177868 29225 status_update_manager.cpp:181] Resuming sending
> status updates
> I0713 23:08:15.177988 29222 hierarchical.cpp:1632] No inverse offers to send
> out!
> I0713 23:08:15.178059 29222 hierarchical.cpp:1195] Performed allocation for
> agent 9c60155f-827f-467b-b942-140175e8e762-S0 in 938932ns
> I0713 23:08:15.178381 29221 slave.cpp:1229] Forwarding total oversubscribed
> resources
> I0713 23:08:15.178478 29222 master.cpp:5835] Sending 1 offers to framework
> 9c60155f-827f-467b-b942-140175e8e762-0000 (default) at
> [email protected]:60353
> I0713 23:08:15.179317 29222 master.cpp:5128] Received update of agent
> 9c60155f-827f-467b-b942-140175e8e762-S0 at slave(481)@172.17.0.1:60353
> (eb7794503d5d) with total oversubscribed resources
> I0713 23:08:15.179505 29214 sched.cpp:917] Scheduler::resourceOffers took
> 385291ns
> I0713 23:08:15.179523 29221 replica.cpp:537] Replica received write request
> for position 4 from (17475)@172.17.0.1:60353
> I0713 23:08:15.179997 29222 hierarchical.cpp:542] Agent
> 9c60155f-827f-467b-b942-140175e8e762-S0 (eb7794503d5d) updated with
> oversubscribed resources (total: cpus(*):2; mem(*):1024; disk(*):1024;
> ports(*):[31000-32000], allocated: cpus(*):2; mem(*):1024; disk(*):1024;
> ports(*):[31000-32000])
> I0713 23:08:15.180148 29222 hierarchical.cpp:1537] No allocations performed
> I0713 23:08:15.180148 29221 leveldb.cpp:341] Persisting action (16 bytes) to
> leveldb took 587468ns
> I0713 23:08:15.180192 29221 replica.cpp:712] Persisted action at 4
> I0713 23:08:15.180197 29222 hierarchical.cpp:1632] No inverse offers to send
> out!
> I0713 23:08:15.180248 29222 hierarchical.cpp:1195] Performed allocation for
> agent 9c60155f-827f-467b-b942-140175e8e762-S0 in 196155ns
> I0713 23:08:15.181185 29216 replica.cpp:691] Replica received learned notice
> for position 4 from @0.0.0.0:0
> I0713 23:08:15.181629 29216 leveldb.cpp:341] Persisting action (18 bytes) to
> leveldb took 412707ns
> I0713 23:08:15.181695 29216 leveldb.cpp:399] Deleting ~2 keys from leveldb
> took 39299ns
> I0713 23:08:15.181718 29216 replica.cpp:712] Persisted action at 4
> I0713 23:08:15.181743 29216 replica.cpp:697] Replica learned TRUNCATE action
> at position 4
> I0713 23:08:15.184922 29226 process.cpp:3354] Handling HTTP event for process
> 'slave(481)' with path: '/slave(481)/api/v1'
> I0713 23:08:15.186085 29228 http.cpp:270] HTTP POST for /slave(481)/api/v1
> from 172.17.0.1:43558
> I0713 23:08:15.186141 29228 http.cpp:346] Processing call GET_STATE
> I0713 23:08:15.190523 29226 master.cpp:3468] Processing ACCEPT call for
> offers: [ 9c60155f-827f-467b-b942-140175e8e762-O0 ] on agent
> 9c60155f-827f-467b-b942-140175e8e762-S0 at slave(481)@172.17.0.1:60353
> (eb7794503d5d) for framework 9c60155f-827f-467b-b942-140175e8e762-0000
> (default) at [email protected]:60353
> I0713 23:08:15.190613 29226 master.cpp:3106] Authorizing framework principal
> 'test-principal' to launch task 1
> I0713 23:08:15.192513 29219 master.cpp:7565] Adding task 1 with resources
> cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on agent
> 9c60155f-827f-467b-b942-140175e8e762-S0 (eb7794503d5d)
> I0713 23:08:15.192736 29219 master.cpp:3957] Launching task 1 of framework
> 9c60155f-827f-467b-b942-140175e8e762-0000 (default) at
> [email protected]:60353 with
> resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on
> agent 9c60155f-827f-467b-b942-140175e8e762-S0 at slave(481)@172.17.0.1:60353
> (eb7794503d5d)
> I0713 23:08:15.193102 29226 slave.cpp:1569] Got assigned task 1 for framework
> 9c60155f-827f-467b-b942-140175e8e762-0000
> I0713 23:08:15.193370 29226 resources.cpp:572] Parsing resources as JSON
> failed: cpus:0.1;mem:32
> Trying semicolon-delimited string format instead
> I0713 23:08:15.194020 29226 slave.cpp:1688] Launching task 1 for framework
> 9c60155f-827f-467b-b942-140175e8e762-0000
> I0713 23:08:15.194113 29226 resources.cpp:572] Parsing resources as JSON
> failed: cpus:0.1;mem:32
> Trying semicolon-delimited string format instead
> I0713 23:08:15.194896 29226 paths.cpp:528] Trying to chown
> '/tmp/ContentType_AgentAPITest_GetState_0_ZehY7n/slaves/9c60155f-827f-467b-b942-140175e8e762-S0/frameworks/9c60155f-827f-467b-b942-140175e8e762-0000/executors/1/runs/adb5caa4-ef88-4f13-aefa-a4486c281385'
> to user 'mesos'
> I0713 23:08:15.206804 29226 slave.cpp:5748] Launching executor 1 of framework
> 9c60155f-827f-467b-b942-140175e8e762-0000 with resources cpus(*):0.1;
> mem(*):32 in work directory
> '/tmp/ContentType_AgentAPITest_GetState_0_ZehY7n/slaves/9c60155f-827f-467b-b942-140175e8e762-S0/frameworks/9c60155f-827f-467b-b942-140175e8e762-0000/executors/1/runs/adb5caa4-ef88-4f13-aefa-a4486c281385'
> I0713 23:08:15.207491 29222 containerizer.cpp:781] Starting container
> 'adb5caa4-ef88-4f13-aefa-a4486c281385' for executor '1' of framework
> '9c60155f-827f-467b-b942-140175e8e762-0000'
> I0713 23:08:15.207515 29226 slave.cpp:1914] Queuing task '1' for executor '1'
> of framework 9c60155f-827f-467b-b942-140175e8e762-0000
> I0713 23:08:15.207655 29226 slave.cpp:922] Successfully attached file
> '/tmp/ContentType_AgentAPITest_GetState_0_ZehY7n/slaves/9c60155f-827f-467b-b942-140175e8e762-S0/frameworks/9c60155f-827f-467b-b942-140175e8e762-0000/executors/1/runs/adb5caa4-ef88-4f13-aefa-a4486c281385'
> I0713 23:08:15.210811 29224 containerizer.cpp:1284] Launching
> 'mesos-containerizer' with flags
> '--command="{"arguments":["mesos-executor","--launcher_dir=\/mesos\/mesos-1.1.0\/_build\/src"],"shell":false,"value":"\/mesos\/mesos-1.1.0\/_build\/src\/mesos-executor"}"
> --help="false" --pipe_read="78" --pipe_write="81" --pre_exec_commands="[]"
> --unshare_namespace_mnt="false" --user="mesos"
> --working_directory="/tmp/ContentType_AgentAPITest_GetState_0_ZehY7n/slaves/9c60155f-827f-467b-b942-140175e8e762-S0/frameworks/9c60155f-827f-467b-b942-140175e8e762-0000/executors/1/runs/adb5caa4-ef88-4f13-aefa-a4486c281385"'
> I0713 23:08:15.214721 29224 launcher.cpp:126] Forked child with pid '6348'
> for container 'adb5caa4-ef88-4f13-aefa-a4486c281385'
> I0713 23:08:15.441834 6381 exec.cpp:161] Version: 1.1.0
> I0713 23:08:15.448015 29225 slave.cpp:2902] Got registration for executor '1'
> of framework 9c60155f-827f-467b-b942-140175e8e762-0000 from
> executor(1)@172.17.0.1:43918
> I0713 23:08:15.451274 29225 slave.cpp:2079] Sending queued task '1' to
> executor '1' of framework 9c60155f-827f-467b-b942-140175e8e762-0000 at
> executor(1)@172.17.0.1:43918
> I0713 23:08:15.455325 6379 exec.cpp:236] Executor registered on agent
> 9c60155f-827f-467b-b942-140175e8e762-S0
> Received SUBSCRIBED event
> Subscribed executor on eb7794503d5d
> Received LAUNCH event
> Starting task 1
> /mesos/mesos-1.1.0/_build/src/mesos-containerizer launch
> --command="{"shell":true,"value":"sleep 1000"}" --help="false"
> --unshare_namespace_mnt="false"
> Forked command at 6392
> I0713 23:08:15.488458 29215 slave.cpp:3285] Handling status update
> TASK_RUNNING (UUID: 1af98a4a-0106-41ed-a944-6a7675d60c42) for task 1 of
> framework 9c60155f-827f-467b-b942-140175e8e762-0000 from
> executor(1)@172.17.0.1:43918
> I0713 23:08:15.491875 29225 status_update_manager.cpp:320] Received status
> update TASK_RUNNING (UUID: 1af98a4a-0106-41ed-a944-6a7675d60c42) for task 1
> of framework 9c60155f-827f-467b-b942-140175e8e762-0000
> I0713 23:08:15.491941 29225 status_update_manager.cpp:497] Creating
> StatusUpdate stream for task 1 of framework
> 9c60155f-827f-467b-b942-140175e8e762-0000
> I0713 23:08:15.492642 29225 status_update_manager.cpp:374] Forwarding update
> TASK_RUNNING (UUID: 1af98a4a-0106-41ed-a944-6a7675d60c42) for task 1 of
> framework 9c60155f-827f-467b-b942-140175e8e762-0000 to the agent
> I0713 23:08:15.492969 29219 slave.cpp:3678] Forwarding the update
> TASK_RUNNING (UUID: 1af98a4a-0106-41ed-a944-6a7675d60c42) for task 1 of
> framework 9c60155f-827f-467b-b942-140175e8e762-0000 to [email protected]:60353
> I0713 23:08:15.493197 29219 slave.cpp:3572] Status update manager
> successfully handled status update TASK_RUNNING (UUID:
> 1af98a4a-0106-41ed-a944-6a7675d60c42) for task 1 of framework
> 9c60155f-827f-467b-b942-140175e8e762-0000
> I0713 23:08:15.493249 29219 slave.cpp:3588] Sending acknowledgement for
> status update TASK_RUNNING (UUID: 1af98a4a-0106-41ed-a944-6a7675d60c42) for
> task 1 of framework 9c60155f-827f-467b-b942-140175e8e762-0000 to
> executor(1)@172.17.0.1:43918
> I0713 23:08:15.493793 29219 master.cpp:5273] Status update TASK_RUNNING
> (UUID: 1af98a4a-0106-41ed-a944-6a7675d60c42) for task 1 of framework
> 9c60155f-827f-467b-b942-140175e8e762-0000 from agent
> 9c60155f-827f-467b-b942-140175e8e762-S0 at slave(481)@172.17.0.1:60353
> (eb7794503d5d)
> I0713 23:08:15.493860 29219 master.cpp:5321] Forwarding status update
> TASK_RUNNING (UUID: 1af98a4a-0106-41ed-a944-6a7675d60c42) for task 1 of
> framework 9c60155f-827f-467b-b942-140175e8e762-0000
> I0713 23:08:15.494048 29219 master.cpp:6959] Updating the state of task 1 of
> framework 9c60155f-827f-467b-b942-140175e8e762-0000 (latest state:
> TASK_RUNNING, status update state: TASK_RUNNING)
> I0713 23:08:15.494376 29219 sched.cpp:1025] Scheduler::statusUpdate took
> 138557ns
> I0713 23:08:15.494781 29219 master.cpp:4388] Processing ACKNOWLEDGE call
> 1af98a4a-0106-41ed-a944-6a7675d60c42 for task 1 of framework
> 9c60155f-827f-467b-b942-140175e8e762-0000 (default) at
> [email protected]:60353 on agent
> 9c60155f-827f-467b-b942-140175e8e762-S0
> I0713 23:08:15.495860 29224 status_update_manager.cpp:392] Received status
> update acknowledgement (UUID: 1af98a4a-0106-41ed-a944-6a7675d60c42) for task
> 1 of framework 9c60155f-827f-467b-b942-140175e8e762-0000
> I0713 23:08:15.497241 29227 slave.cpp:2671] Status update manager
> successfully handled status update acknowledgement (UUID:
> 1af98a4a-0106-41ed-a944-6a7675d60c42) for task 1 of framework
> 9c60155f-827f-467b-b942-140175e8e762-0000
> I0713 23:08:15.497524 29228 process.cpp:3354] Handling HTTP event for process
> 'slave(481)' with path: '/slave(481)/api/v1'
> I0713 23:08:15.499779 29218 http.cpp:270] HTTP POST for /slave(481)/api/v1
> from 172.17.0.1:43561
> I0713 23:08:15.499864 29218 http.cpp:346] Processing call GET_STATE
> I0713 23:08:15.510262 29218 master.cpp:4280] Telling agent
> 9c60155f-827f-467b-b942-140175e8e762-S0 at slave(481)@172.17.0.1:60353
> (eb7794503d5d) to kill task 1 of framework
> 9c60155f-827f-467b-b942-140175e8e762-0000 (default) at
> [email protected]:60353
> I0713 23:08:15.510445 29218 slave.cpp:2109] Asked to kill task 1 of framework
> 9c60155f-827f-467b-b942-140175e8e762-0000
> Received KILL event
> Received kill for task 1 with grace period of 3secs
> Sending SIGTERM to process tree at pid 6392
> Sent SIGTERM to the following process trees:
> [
> --- 6392 /mesos/mesos-1.1.0/_build/src/.libs/lt-mesos-containerizer launch
> --command={"shell":true,"value":"sleep 1000"} --help=false
> --unshare_namespace_mnt=false
> ]
> Scheduling escalation to SIGKILL in 3secs from now
> Command terminated with signal Terminated (pid: 6392)
> I0713 23:08:15.589100 29221 slave.cpp:3285] Handling status update
> TASK_KILLED (UUID: 97d934bc-4488-4254-bad8-fbf3cf150959) for task 1 of
> framework 9c60155f-827f-467b-b942-140175e8e762-0000 from
> executor(1)@172.17.0.1:43918
> I0713 23:08:15.591385 29214 slave.cpp:6088] Terminating task 1
> I0713 23:08:15.592869 29223 status_update_manager.cpp:320] Received status
> update TASK_KILLED (UUID: 97d934bc-4488-4254-bad8-fbf3cf150959) for task 1 of
> framework 9c60155f-827f-467b-b942-140175e8e762-0000
> I0713 23:08:15.593243 29223 status_update_manager.cpp:374] Forwarding update
> TASK_KILLED (UUID: 97d934bc-4488-4254-bad8-fbf3cf150959) for task 1 of
> framework 9c60155f-827f-467b-b942-140175e8e762-0000 to the agent
> I0713 23:08:15.593865 29225 slave.cpp:3678] Forwarding the update TASK_KILLED
> (UUID: 97d934bc-4488-4254-bad8-fbf3cf150959) for task 1 of framework
> 9c60155f-827f-467b-b942-140175e8e762-0000 to [email protected]:60353
> I0713 23:08:15.594202 29225 slave.cpp:3572] Status update manager
> successfully handled status update TASK_KILLED (UUID:
> 97d934bc-4488-4254-bad8-fbf3cf150959) for task 1 of framework
> 9c60155f-827f-467b-b942-140175e8e762-0000
> I0713 23:08:15.594650 29225 slave.cpp:3588] Sending acknowledgement for
> status update TASK_KILLED (UUID: 97d934bc-4488-4254-bad8-fbf3cf150959) for
> task 1 of framework 9c60155f-827f-467b-b942-140175e8e762-0000 to
> executor(1)@172.17.0.1:43918
> I0713 23:08:15.594491 29227 master.cpp:5273] Status update TASK_KILLED (UUID:
> 97d934bc-4488-4254-bad8-fbf3cf150959) for task 1 of framework
> 9c60155f-827f-467b-b942-140175e8e762-0000 from agent
> 9c60155f-827f-467b-b942-140175e8e762-S0 at slave(481)@172.17.0.1:60353
> (eb7794503d5d)
> I0713 23:08:15.595213 29227 master.cpp:5321] Forwarding status update
> TASK_KILLED (UUID: 97d934bc-4488-4254-bad8-fbf3cf150959) for task 1 of
> framework 9c60155f-827f-467b-b942-140175e8e762-0000
> I0713 23:08:15.595512 29227 master.cpp:6959] Updating the state of task 1 of
> framework 9c60155f-827f-467b-b942-140175e8e762-0000 (latest state:
> TASK_KILLED, status update state: TASK_KILLED)
> I0713 23:08:15.596700 29194 sched.cpp:1987] Asked to stop the driver
> I0713 23:08:15.595896 29224 sched.cpp:1025] Scheduler::statusUpdate took
> 161085ns
> I0713 23:08:15.596940 29224 sched.cpp:1032] Not sending status update
> acknowledgment message because the driver is not running!
> I0713 23:08:15.597113 29224 sched.cpp:1187] Stopping framework
> '9c60155f-827f-467b-b942-140175e8e762-0000'
> I0713 23:08:15.597548 29224 master.cpp:6410] Processing TEARDOWN call for
> framework 9c60155f-827f-467b-b942-140175e8e762-0000 (default) at
> [email protected]:60353
> I0713 23:08:15.597699 29224 master.cpp:6422] Removing framework
> 9c60155f-827f-467b-b942-140175e8e762-0000 (default) at
> [email protected]:60353
> I0713 23:08:15.598181 29217 hierarchical.cpp:924] Recovered cpus(*):2;
> mem(*):1024; disk(*):1024; ports(*):[31000-32000] (total: cpus(*):2;
> mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: ) on agent
> 9c60155f-827f-467b-b942-140175e8e762-S0 from framework
> 9c60155f-827f-467b-b942-140175e8e762-0000
> I0713 23:08:15.598775 29217 hierarchical.cpp:382] Deactivated framework
> 9c60155f-827f-467b-b942-140175e8e762-0000
> I0713 23:08:15.598598 29214 slave.cpp:2292] Asked to shut down framework
> 9c60155f-827f-467b-b942-140175e8e762-0000 by [email protected]:60353
> I0713 23:08:15.599086 29214 slave.cpp:2317] Shutting down framework
> 9c60155f-827f-467b-b942-140175e8e762-0000
> I0713 23:08:15.599258 29214 slave.cpp:4481] Shutting down executor '1' of
> framework 9c60155f-827f-467b-b942-140175e8e762-0000 at
> executor(1)@172.17.0.1:43918
> I0713 23:08:15.598512 29224 master.cpp:6959] Updating the state of task 1 of
> framework 9c60155f-827f-467b-b942-140175e8e762-0000 (latest state:
> TASK_KILLED, status update state: TASK_KILLED)
> I0713 23:08:15.599740 29224 master.cpp:7025] Removing task 1 with resources
> cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] of framework
> 9c60155f-827f-467b-b942-140175e8e762-0000 on agent
> 9c60155f-827f-467b-b942-140175e8e762-S0 at slave(481)@172.17.0.1:60353
> (eb7794503d5d)
> I0713 23:08:15.600162 6389 exec.cpp:413] Executor asked to shutdown
> I0713 23:08:15.600735 29224 hierarchical.cpp:333] Removed framework
> 9c60155f-827f-467b-b942-140175e8e762-0000
> I0713 23:08:16.098399 29221 hierarchical.cpp:1537] No allocations performed
> I0713 23:08:16.098531 29221 hierarchical.cpp:1172] Performed allocation for 1
> agents in 316231ns
> I0713 23:08:16.594667 29226 slave.cpp:3806] executor(1)@172.17.0.1:43918
> exited
> I0713 23:08:16.681455 29228 containerizer.cpp:1863] Executor for container
> 'adb5caa4-ef88-4f13-aefa-a4486c281385' has exited
> I0713 23:08:16.681519 29228 containerizer.cpp:1622] Destroying container
> 'adb5caa4-ef88-4f13-aefa-a4486c281385'
> I0713 23:08:16.687269 29223 provisioner.cpp:411] Ignoring destroy request for
> unknown container adb5caa4-ef88-4f13-aefa-a4486c281385
> I0713 23:08:16.687805 29223 slave.cpp:4163] Executor '1' of framework
> 9c60155f-827f-467b-b942-140175e8e762-0000 exited with status 0
> I0713 23:08:16.688426 29223 slave.cpp:4267] Cleaning up executor '1' of
> framework 9c60155f-827f-467b-b942-140175e8e762-0000 at
> executor(1)@172.17.0.1:43918
> I0713 23:08:16.688745 29224 gc.cpp:55] Scheduling
> '/tmp/ContentType_AgentAPITest_GetState_0_ZehY7n/slaves/9c60155f-827f-467b-b942-140175e8e762-S0/frameworks/9c60155f-827f-467b-b942-140175e8e762-0000/executors/1/runs/adb5caa4-ef88-4f13-aefa-a4486c281385'
> for gc 6.9999920335763days in the future
> I0713 23:08:16.688899 29223 slave.cpp:4355] Cleaning up framework
> 9c60155f-827f-467b-b942-140175e8e762-0000
> I0713 23:08:16.688946 29224 gc.cpp:55] Scheduling
> '/tmp/ContentType_AgentAPITest_GetState_0_ZehY7n/slaves/9c60155f-827f-467b-b942-140175e8e762-S0/frameworks/9c60155f-827f-467b-b942-140175e8e762-0000/executors/1'
> for gc 6.9999920335763days in the future
> I0713 23:08:16.689023 29214 status_update_manager.cpp:282] Closing status
> update streams for framework 9c60155f-827f-467b-b942-140175e8e762-0000
> I0713 23:08:16.689116 29214 status_update_manager.cpp:528] Cleaning up status
> update stream for task 1 of framework
> 9c60155f-827f-467b-b942-140175e8e762-0000
> I0713 23:08:16.689131 29224 gc.cpp:55] Scheduling
> '/tmp/ContentType_AgentAPITest_GetState_0_ZehY7n/slaves/9c60155f-827f-467b-b942-140175e8e762-S0/frameworks/9c60155f-827f-467b-b942-140175e8e762-0000'
> for gc 6.9999920335763days in the future
> I0713 23:08:16.701292 29220 process.cpp:3354] Handling HTTP event for process
> 'slave(481)' with path: '/slave(481)/api/v1'
> I0713 23:08:16.702886 29220 http.cpp:270] HTTP POST for /slave(481)/api/v1
> from 172.17.0.1:43562
> I0713 23:08:16.702958 29220 http.cpp:346] Processing call GET_STATE
> ../../src/tests/api_tests.cpp:3126: Failure
> Value of: getState.get_tasks().completed_tasks_size()
> Actual: 0
> Expected: 1u
> Which is: 1
> I0713 23:08:16.741088 29226 slave.cpp:841] Agent terminating
> I0713 23:08:16.741602 29226 master.cpp:1371] Agent
> 9c60155f-827f-467b-b942-140175e8e762-S0 at slave(481)@172.17.0.1:60353
> (eb7794503d5d) disconnected
> I0713 23:08:16.741636 29226 master.cpp:2910] Disconnecting agent
> 9c60155f-827f-467b-b942-140175e8e762-S0 at slave(481)@172.17.0.1:60353
> (eb7794503d5d)
> I0713 23:08:16.743294 29226 master.cpp:2929] Deactivating agent
> 9c60155f-827f-467b-b942-140175e8e762-S0 at slave(481)@172.17.0.1:60353
> (eb7794503d5d)
> I0713 23:08:16.744006 29220 hierarchical.cpp:571] Agent
> 9c60155f-827f-467b-b942-140175e8e762-S0 deactivated
> I0713 23:08:16.761878 29194 master.cpp:1218] Master terminating
> I0713 23:08:16.762353 29213 hierarchical.cpp:510] Removed agent
> 9c60155f-827f-467b-b942-140175e8e762-S0
> [ FAILED ] ContentType/AgentAPITest.GetState/0, where GetParam() =
> application/x-protobuf (1686 ms)
> {code}