> "hostname": "l-bu128g5-10k10.ops.cn2.qunar.com",
> "id": "20151230-034049-3282655242-5050-1802-S7",
> "pid": "slave(1)@10.90.5.19:5051",
> "registered_time": 1452094227.39161,
> "reregistered_time": 1452831994.32924,
Hi folks,
I hit a very strange issue when I migrated two nodes from one cluster to
another about a week ago.
Two nodes:
l-bu128g3-10k10.ops.cn2
l-bu128g5-10k10.ops.cn2
I did not clean the Mesos data dir before they joined the other cluster,
and then I found the nodes alive in both clusters at the same time.
Here is a hacky way to fix it in the current version: back up the
boot_id file (it should exist in your $work_dir/meta/boot_id) when the Mesos
agent (or slave) starts, and restore it from the backup when the agent/slave
restarts; the slave ID will not change. It works fine for our cluster.
I hope it could help.
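The workaround above can be sketched as a pair of shell helpers (a minimal sketch; the `.bak` suffix and the helper names are my assumptions, not something Mesos provides — Mesos itself only knows about meta/boot_id):

```shell
# Sketch of the boot_id backup/restore workaround described above.
# The boot_id.bak filename is an assumption for illustration.

backup_boot_id() {
    # $1: the agent's --work_dir (e.g. /var/lib/mesos).
    # Run this right after the agent starts and has written boot_id.
    cp "$1/meta/boot_id" "$1/meta/boot_id.bak"
}

restore_boot_id() {
    # Run this before restarting the agent so the recovered slave ID
    # matches and the agent does not register as a brand-new slave.
    cp "$1/meta/boot_id.bak" "$1/meta/boot_id"
}
```

You would wire these into whatever wrapper or init script starts the agent, so the restore always happens before recovery runs.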
>
> On Mon, Nov 21, 2016 at 2:30 PM, haosdent <haosd...@gmail.com> wrote:
>
>> Not sure if it is related to this issue:
>> https://github.com/mesosphere/docker-containers/issues/9
>>
>> On Mon, Nov 21, 2016 at 12:27 PM, X Brick <ngdoc...@gmail
Hi,
I hit a problem when running mesos-slave in Docker. There are some
zombie processes like this:
```
root 10547 19464  0 Oct25 ?  00:00:00 [docker]
root 14505 19464  0 Oct25 ?  00:00:00 [docker]
root 16069 19464  0 Oct25 ?  00:00:00 [docker]
root 19962
```
You can use the REST API to set the log level at runtime:
```
curl -v -X POST \
  http://slave_ip:5051/logging/toggle\?level\=1\&duration\=15mins
```
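If you script this often, a small helper can build the URL without the shell escaping (a sketch; `slave_ip`, port 5051, and the `toggle_url`/`SLAVE_IP` names are placeholders and assumptions, not part of Mesos):

```shell
# Sketch: build the /logging/toggle URL for a given level and duration.
# SLAVE_IP defaults to the "slave_ip" placeholder used above.
toggle_url() {
    # $1: glog verbosity level, $2: duration (e.g. 15mins)
    printf 'http://%s:5051/logging/toggle?level=%s&duration=%s\n' \
        "${SLAVE_IP:-slave_ip}" "$1" "$2"
}

# Usage: curl -v -X POST "$(toggle_url 1 15mins)"
```

After the duration expires, the agent drops back to its original verbosity on its own, so no second call is needed.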
### USAGE ###
>/logging/toggle
### TL;DR; ###
Sets the logging verbosity level for a specified duration.
### DESCRIPTION ###
The libprocess library uses