Here's the pull request: https://github.com/mesos/mesos-go/pull/258
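
For anyone hitting the same issue: the change amounts to attaching a Docker ContainerInfo to each TaskInfo so the agent's Docker containerizer (rather than the example framework's custom executor) runs the task. The snippet below is only a rough sketch of that idea, not the diff from the PR; it assumes mesos-go's v0 bindings (the mesosproto and mesosutil packages on master at the time), and the image name and resource sizes are placeholders.

package example

import (
	"github.com/gogo/protobuf/proto"
	mesos "github.com/mesos/mesos-go/mesosproto"
	util "github.com/mesos/mesos-go/mesosutil"
)

// dockerTask builds a TaskInfo that asks the agent's Docker containerizer to
// run an image, instead of launching the example framework's custom executor.
func dockerTask(id string, offer *mesos.Offer) *mesos.TaskInfo {
	return &mesos.TaskInfo{
		Name:    proto.String("go-task-" + id),
		TaskId:  util.NewTaskID(id),
		SlaveId: offer.SlaveId,
		// No ExecutorInfo here: the agent runs the task with its built-in
		// command/docker executor.
		Command: &mesos.CommandInfo{
			Shell: proto.Bool(false), // run the image's own entrypoint
		},
		Container: &mesos.ContainerInfo{
			Type: mesos.ContainerInfo_DOCKER.Enum(),
			Docker: &mesos.ContainerInfo_DockerInfo{
				Image: proto.String("busybox"), // placeholder image
			},
		},
		Resources: []*mesos.Resource{
			util.NewScalarResource("cpus", 0.1), // placeholder sizes
			util.NewScalarResource("mem", 32),
		},
	}
}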
________________________________
From: Jörg Schad <[email protected]>
Sent: August 22, 2016 19:11:52
To: user
Cc: James DeFelice
Subject: Re: Re: Re: mesos-go example scheduler doesn't work

Thanks! Could you maybe send a PR with those changes? I have also cc'd James, who at least looks at mesos-go from time to time.

Joerg

On Mon, Aug 22, 2016 at 12:03 PM, 志昌 余 <[email protected]> wrote:

Sorry, I didn't back up my log. I figured out that the root cause is that my Mesos slave supports Docker only (it is started with "MESOS_CONTAINERIZERS=docker" in its environment), while mesos-go:master doesn't support Docker yet and, according to the README.md (https://github.com/mesos/mesos-go/blob/master/README.md), "won't see any major development". So I had to add that myself. I changed mesos-go:master a bit and I am now able to run Docker images with the example scheduler.

________________________________
From: haosdent <[email protected]>
Sent: August 22, 2016 2:55:56
To: user
Subject: Re: Re: mesos-go example scheduler doesn't work

Hi, could you show the associated logs from the Mesos agent?

On Aug 19, 2016 5:49 PM, "志昌 余" <[email protected]> wrote:

I ran the scheduler against a cluster of three Mesos masters. The "127.0.0.1" in the Mesos log looks strange (I guess mesos-go reported a wrong IP to Mesos?). So I changed my environment to use only one Mesos master and made sure the scheduler runs on the same machine as that master. That worked around the "Deactivated framework" problem, but then I got a different problem: all tasks failed:

I0819 17:39:38.678068 17505 main.go:132] Received Offer < e39b3090-6d8a-4b1d-9a3e-defcfd9fa9c2-O62 > with cpus= 2 mem= 2928
I0819 17:39:38.678135 17505 main.go:164] Prepared task: go-task-63 with offer e39b3090-6d8a-4b1d-9a3e-defcfd9fa9c2-O62 for launch
I0819 17:39:38.678209 17505 main.go:170] Launching 1 tasks for offer e39b3090-6d8a-4b1d-9a3e-defcfd9fa9c2-O62
E0819 17:39:38.691142 17505 main.go:205] executor "&ExecutorID{Value:*default,XXX_unrecognized:[],}" lost on slave "&SlaveID{Value:*0583e05a-2ddb-4db5-945f-00d1c98d3780-S2,XXX_unrecognized:[],}" code -1
I0819 17:39:38.691603 17505 main.go:176] Status update: task 63 is in state TASK_FAILED
E0819 17:39:38.691662 17505 main.go:205] executor "&ExecutorID{Value:*default,XXX_unrecognized:[],}" lost on slave "&SlaveID{Value:*4f24314d-e38d-457c-82e3-0f5535315007-S1,XXX_unrecognized:[],}" code -1
E0819 17:39:38.692593 17505 main.go:205] executor "&ExecutorID{Value:*default,XXX_unrecognized:[],}" lost on slave "&SlaveID{Value:*d14a8781-f524-4531-bd16-2217506fa594-S0,XXX_unrecognized:[],}" code -1
I0819 17:39:38.694184 17505 main.go:176] Status update: task 62 is in state TASK_FAILED
I0819 17:39:38.694818 17505 main.go:176] Status update: task 61 is in state TASK_FAILED

________________________________
From: 志昌 余 <[email protected]>
Sent: August 19, 2016 17:31:10
To: [email protected]
Subject: Re: mesos-go example scheduler doesn't work

I also tried running it as described in the README.md:

[cannon@yzc-mesos1 examples]$ ./_output/scheduler -master=10.18.6.57:5050 -executor="$EXECUTOR_BIN" -logtostderr=true
I0819 17:10:39.068475 17278 main.go:215] Initializing the Example Scheduler...
I0819 17:10:39.076339 17278 scheduler.go:334] Initializing mesos scheduler driver
I0819 17:10:39.077385 17278 scheduler.go:833] Starting the scheduler driver...
I0819 17:10:39.077616 17278 http_transporter.go:383] listening on 127.0.0.1 port 52671
I0819 17:10:39.078865 17278 scheduler.go:850] Mesos scheduler driver started with PID=scheduler(1)@127.0.0.1:52671
I0819 17:10:39.080876 17278 scheduler.go:1053] Scheduler driver running. Waiting to be stopped.
I0819 17:10:39.391125 17278 scheduler.go:419] New master [email protected]:5050 detected
I0819 17:10:39.391610 17278 scheduler.go:483] No credentials were provided. Attempting to register scheduler without authentication.

________________________________
From: 志昌 余 <[email protected]>
Sent: August 19, 2016 17:28
To: [email protected]
Subject: mesos-go example scheduler doesn't work

Hi all,

I'm trying https://github.com/mesos/mesos-go (master branch) with mesos-master:0.28.0-2.0.16.ubuntu1404 (run with Docker). The scheduler doesn't run any tasks. Here's the output; it gets stuck there forever:

[cannon@yzc-mesos1 scheduler]$ ./scheduler -master 10.18.6.57:5050 -logtostderr=true
I0819 16:50:11.850130 16619 main.go:215] Initializing the Example Scheduler...
I0819 16:50:11.854875 16619 scheduler.go:334] Initializing mesos scheduler driver
I0819 16:50:11.855194 16619 scheduler.go:833] Starting the scheduler driver...
I0819 16:50:11.855344 16619 http_transporter.go:383] listening on 127.0.0.1 port 36443
I0819 16:50:11.855444 16619 scheduler.go:850] Mesos scheduler driver started with PID=scheduler(1)@127.0.0.1:36443
I0819 16:50:11.855500 16619 scheduler.go:1053] Scheduler driver running. Waiting to be stopped.
I0819 16:50:11.946344 16619 scheduler.go:419] New master [email protected]:5050 detected
I0819 16:50:11.946375 16619 scheduler.go:483] No credentials were provided. Attempting to register scheduler without authentication.
The Mesos master log indicates that it deactivates that scheduler again and again:

I0819 17:25:41.866633 9 master.cpp:2231] Received SUBSCRIBE call for framework 'Test Framework (Go)' at scheduler(1)@127.0.0.1:52671
I0819 17:25:41.866986 9 master.cpp:2302] Subscribing framework Test Framework (Go) with checkpointing disabled and capabilities [ ]
E0819 17:25:41.869329 13 process.cpp:1958] Failed to shutdown socket with fd 15: Transport endpoint is not connected
I0819 17:25:41.869443 7 hierarchical.cpp:265] Added framework 0583e05a-2ddb-4db5-945f-00d1c98d3780-0083
I0819 17:25:41.870447 9 master.cpp:1212] Framework 0583e05a-2ddb-4db5-945f-00d1c98d3780-0083 (Test Framework (Go)) at scheduler(1)@127.0.0.1:52671 disconnected
E0819 17:25:41.870484 13 process.cpp:1958] Failed to shutdown socket with fd 15: Transport endpoint is not connected
I0819 17:25:41.870501 9 master.cpp:2527] Disconnecting framework 0583e05a-2ddb-4db5-945f-00d1c98d3780-0083 (Test Framework (Go)) at scheduler(1)@127.0.0.1:52671
I0819 17:25:41.870671 9 master.cpp:2551] Deactivating framework 0583e05a-2ddb-4db5-945f-00d1c98d3780-0083 (Test Framework (Go)) at scheduler(1)@127.0.0.1:52671
I0819 17:25:41.870734 9 master.cpp:1236] Giving framework 0583e05a-2ddb-4db5-945f-00d1c98d3780-0083 (Test Framework (Go)) at scheduler(1)@127.0.0.1:52671 0ns to failover
W0819 17:25:41.870820 9 master.cpp:5189] Master returning resources offered to framework 0583e05a-2ddb-4db5-945f-00d1c98d3780-0083 because the framework has terminated or is inactive
I0819 17:25:41.870942 9 hierarchical.cpp:375] Deactivated framework 0583e05a-2ddb-4db5-945f-00d1c98d3780-0083
I0819 17:25:41.871987 7 master.cpp:5176] Framework failover timeout, removing framework 0583e05a-2ddb-4db5-945f-00d1c98d3780-0083 (Test Framework (Go)) at scheduler(1)@127.0.0.1:52671
I0819 17:25:41.872043 7 master.cpp:5909] Removing framework 0583e05a-2ddb-4db5-945f-00d1c98d3780-0083 (Test Framework (Go)) at scheduler(1)@127.0.0.1:52671
I0819 17:25:41.872268 9 hierarchical.cpp:326] Removed framework 0583e05a-2ddb-4db5-945f-00d1c98d3780-0083
I0819 17:25:44.062274 7 http.cpp:312] HTTP GET for /master/state from 10.32.51.161:28655 with User-Agent='Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.97 Safari/537.36'
I0819 17:25:51.068267 12 http.cpp:312] HTTP GET for /master/state from 10.32.51.161:28655 with User-Agent='Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.97 Safari/537.36'

Chronos works well in my environment, so what's wrong with mesos-go?

Thanks,
Zhichang Yu
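
A note on the "Deactivating framework ... 0ns to failover" loop in the master log above: the driver advertised itself as scheduler(1)@127.0.0.1:52671, so the master could not connect back to the scheduler and dropped the subscription immediately (hence the "Transport endpoint is not connected" errors). A likely fix, assuming the v0 bindings' sched.DriverConfig still exposes a BindingAddress field as it did on master at the time, is to bind the driver to an address the master can reach; a minimal sketch:

package example

import (
	"net"

	mesos "github.com/mesos/mesos-go/mesosproto"
	sched "github.com/mesos/mesos-go/scheduler"
)

// newDriver wires the scheduler up with an explicit binding address so the
// driver advertises an IP the master can reach, rather than the loopback
// address seen in the scheduler logs above.
func newDriver(s sched.Scheduler, fwinfo *mesos.FrameworkInfo, master string) (*sched.MesosSchedulerDriver, error) {
	return sched.NewMesosSchedulerDriver(sched.DriverConfig{
		Scheduler:      s,
		Framework:      fwinfo,
		Master:         master,                    // e.g. "10.18.6.57:5050"
		BindingAddress: net.ParseIP("10.18.6.57"), // the scheduler host's routable IP, not 127.0.0.1
	})
}

The example scheduler may already expose a command-line flag that feeds this field; check the examples directory of your checkout.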

