That's certainly not an issue.
Every container is running on the same host with the --net=host flag, so the
master is actually reachable on 127.0.0.1.

This is the command I'm running the master with:
docker run --name mesos-master --net=host gregory90/mesos-master \
  --ip=127.0.0.1 --zk=zk://localhost:2181/mesos --work_dir=/var/lib/mesos \
  --quorum=1 --log_dir=/var/log

The mesos-master and mesos-slave containers, as well as ZooKeeper, can reach
each other on localhost.
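
For reference, this is roughly how I check reachability from the host (a
quick sketch that assumes the default master port 5050, the state.json
endpoint, and ZooKeeper's four-letter commands being enabled):

# the master's advertised pid should contain the --ip address (127.0.0.1)
curl -s http://127.0.0.1:5050/master/state.json | grep -o '"pid":"[^"]*"'
# ZooKeeper should answer imok
echo ruok | nc localhost 2181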

To be sure, I've created a 2-node CoreOS cluster running
mesos-master + ZooKeeper on one node and mesos-slave on the second node - the
same problem appears.
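
For completeness: with the /sys workaround Tim mentions below applied, the
slave container is started roughly like this (same image and flags as in my
first mail, with only the -v /sys:/sys bind mount added):

docker run --name mesos-slave --privileged --net=host \
  -v /sys:/sys \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/docker:/var/lib/docker \
  -v /usr/local/bin/docker:/usr/local/bin/docker \
  gregory90/mesos-slave \
  --containerizers=docker,mesos --master=zk://localhost:2181/mesos \
  --ip=127.0.0.1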

On 23 September 2014 12:53, Dick Davies <[email protected]> wrote:

> The master is advertising itself as being on 127.0.0.1 - try running
> it with an --ip flag.
>
>
> On 23 September 2014 11:10, Grzegorz Graczyk <[email protected]> wrote:
> > Thanks for your response!
> >
> > Mounting /sys did the job, cgroups are working, but now mesos-slave is
> > just crashing right after detecting the new master (there's nothing
> > useful in the logs - is there a way to make them more verbose?)
> >
> > Last lines of logs from mesos-slave:
> > I0923 10:03:24.079859 10 detector.cpp:426] A new leading master ([email protected]:5050) is detected
> > I0923 10:03:26.076053     9 slave.cpp:3195] Finished recovery
> > I0923 10:03:26.076505     9 slave.cpp:589] New master detected at [email protected]:5050
> > I0923 10:03:26.076732     9 slave.cpp:625] No credentials provided. Attempting to register without authentication
> > I0923 10:03:26.076812     9 slave.cpp:636] Detecting new master
> > I0923 10:03:26.076864     9 status_update_manager.cpp:167] New master detected at [email protected]:5050
> >
> > There's no problem in running mesos-master in the container (at least
> > there wasn't any in my case, for now)
> >
> >
> >
> >
> > On 23 September 2014 09:41, Tim Chen <[email protected]> wrote:
> >>
> >> Hi Grzegorz,
> >>
> >> Running the Mesos master/slave in a Docker container is not
> >> straightforward because we utilize kernel features, so you need to
> >> explicitly test out the features you'd like to use with Mesos when the
> >> slave/master runs in Docker.
> >>
> >> Gabriel got the master and slave running in Docker containers during
> >> the Mesosphere hackathon, and he can probably share his Dockerfile and
> >> run command.
> >>
> >> I believe one workaround to get cgroups working with docker run is to
> >> mount /sys into the container (-v /sys:/sys).
> >>
> >> Gabriel, do you still have the command you used to run the slave/master
> >> with Docker?
> >>
> >> Tim
> >>
> >>
> >>
> >> On Tue, Sep 23, 2014 at 12:24 AM, Grzegorz Graczyk <[email protected]>
> >> wrote:
> >>>
> >>> I'm trying to run mesos-slave inside a Docker container, but it can't
> >>> start due to a problem with mounting cgroups.
> >>>
> >>> I'm using:
> >>> Kernel Version: 3.13.0-32-generic
> >>> Operating System: Ubuntu 14.04.1 LTS
> >>> Docker: 1.2.0 (commit fa7b24f)
> >>> Mesos: 0.20.0
> >>>
> >>> The following error appears:
> >>> I0923 07:11:20.921475    19 main.cpp:126] Build: 2014-08-22 05:04:26 by root
> >>> I0923 07:11:20.921608    19 main.cpp:128] Version: 0.20.0
> >>> I0923 07:11:20.921620    19 main.cpp:131] Git tag: 0.20.0
> >>> I0923 07:11:20.921628    19 main.cpp:135] Git SHA: f421ffdf8d32a8834b3a6ee483b5b59f65956497
> >>> Failed to create a containerizer: Could not create DockerContainerizer: Failed to find a mounted cgroups hierarchy for the 'cpu' subsystem; you probably need to mount cgroups manually!
> >>>
> >>> I'm running the docker container with this command:
> >>> docker run --name mesos-slave --privileged --net=host \
> >>>   -v /var/run/docker.sock:/var/run/docker.sock \
> >>>   -v /var/lib/docker:/var/lib/docker \
> >>>   -v /usr/local/bin/docker:/usr/local/bin/docker \
> >>>   gregory90/mesos-slave \
> >>>   --containerizers=docker,mesos --master=zk://localhost:2181/mesos \
> >>>   --ip=127.0.0.1
> >>>
> >>> Everything is running on a single machine.
> >>> Everything works as expected when mesos-slave is run outside a Docker
> >>> container.
> >>>
> >>> I'd appreciate some help.
> >>
> >>
> >
>
