Hi, @Srikant
Usually, your task should be killed when it goes over the cgroup limit. Would you enter
the `/sys/fs/cgroup/memory/mesos` folder on the agent?
Then check the values in `${YOUR_CONTAINER_ID}/memory.limit_in_bytes`,
`${YOUR_CONTAINER_ID}/memory.soft_limit_in_bytes` and
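A minimal sketch of that check, assuming the cgroup v1 memory hierarchy mounted at `/sys/fs/cgroup/memory` as in the paths above, and using `YOUR_CONTAINER_ID` as a placeholder for the real Mesos container ID:

```python
# Sketch: print the cgroup memory limits recorded for a Mesos container.
# Assumes the cgroup v1 memory hierarchy from the message above; the
# container ID is a placeholder.
from pathlib import Path

CONTAINER_ID = "YOUR_CONTAINER_ID"  # placeholder: real Mesos container ID
cgroup_dir = Path("/sys/fs/cgroup/memory/mesos") / CONTAINER_ID

for name in ("memory.limit_in_bytes", "memory.soft_limit_in_bytes"):
    path = cgroup_dir / name
    if path.exists():
        print(f"{name}: {path.read_text().strip()}")
    else:
        print(f"{name}: not found (container gone, or a cgroup v2 host?)")
```

Note that in cgroup v1 only the hard limit (`memory.limit_in_bytes`) triggers the OOM killer; the soft limit only guides reclaim under memory pressure.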
Thanks, Haosdent, for your quick response.
I added GLOG_v=1 to the master and agents.
1. The framework is registered. Marathon in this case.
2. I see messages 'Telling agent (...) to kill task (...)'. Why does
this happen? I also see 'Sending explicit reconciliation state
TASK_LOST for task
thanks!
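To dig into why those kill and reconciliation messages appear, one option is to pull the relevant lines for a single task out of the master log. A sketch, where the log path and task ID are placeholders (the glog `mesos-master.INFO` name is an assumption about your setup):

```python
# Sketch: extract the "Telling agent ... to kill task" and explicit
# reconciliation lines for one task from a Mesos master log.
# Both the log path and the task ID below are placeholders.
TASK_ID = "my-task.abc123"                 # placeholder task ID
LOG = "/var/log/mesos/mesos-master.INFO"   # placeholder log path

with open(LOG) as f:
    for line in f:
        if TASK_ID in line and ("kill task" in line or "reconciliation" in line):
            print(line.rstrip())
```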
On 04.10.2016 11:14, haosdent wrote:
hi, @Hendrik You could specify the --attributes flag when starting
the Mesos agent. For example, use --attributes=docker:false. Then you
could get it in the `Offer` in your framework. Another way is to query
the /flags endpoint of the agent in your
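A sketch of that second way, reading the flag back from the agent's /flags endpoint; the agent address below is a placeholder:

```python
# Sketch: read the agent's --attributes flag from its /flags endpoint.
# The agent address is a placeholder; the response is expected to carry
# the startup flags under a top-level "flags" key.
import json
from urllib.request import urlopen

AGENT = "http://mesos-agent.example.com:5051"  # placeholder address

flags = json.load(urlopen(f"{AGENT}/flags")).get("flags", {})
print(flags.get("attributes"))  # e.g. "docker:false" if set at startup
```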
staging is the initial status of the task. I think you may check your logs via
these steps (a sketch for step 1 follows the list):
1. Did your framework register successfully with the master?
2. Did the master send resource offers to your framework, and did your
framework accept them?
3. Did your agents receive the RunTaskMessage from the master to
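A sketch for step 1, listing the frameworks the master currently considers registered via its /state endpoint; the master address is a placeholder, and matching on the name "marathon" is only an example:

```python
# Sketch: verify step 1 (framework registration) against the Mesos
# master's /state endpoint. The master address is a placeholder.
import json
from urllib.request import urlopen

MASTER = "http://mesos-master.example.com:5050"  # placeholder address

state = json.load(urlopen(f"{MASTER}/state"))
names = [f.get("name") or "" for f in state.get("frameworks", [])]
print("registered frameworks:", names)
print("marathon registered:", any("marathon" in n.lower() for n in names))
```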
Hi all,
I am looking for some ways to troubleshoot or debug tasks that are
stuck in the 'staging' state. Typically they have no logs in the
sandbox.
Are there any endpoints or things to look for in the logs to identify
a root cause?
Is there a troubleshooting guide for Mesos to solve problems