Hmmm… that doesn’t seem to work for me. What version of Mesos does this work
in? I am running 0.27.1.
When using this approach, I still get the following error when the
kafka-mesos framework is starting up:
"Scheduler driver bound to loopback interface! Cannot communicate with remote
master(s)."
My 2 cents: is there a possibility of old data in /var/lib/mesos? Can you
try deleting the folder /var/lib/mesos on all 3 systems and then try
bringing it up again?
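A minimal sketch of that cleanup, assuming the masters were started by hand as shown later in this thread (if you run Mesos under systemd/upstart, stop the service instead; the path matches the --work_dir used elsewhere in the thread):

```shell
# Run on each of the 3 nodes. Stop the master first so it does not
# recreate replicated-log state while the work dir is being removed.
sudo pkill -f mesos-master || true
sudo rm -rf /var/lib/mesos
# Then restart the master with the same flags as before.
```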
On Sat, Jun 4, 2016 at 9:04 PM, Qian Zhang wrote:
I am using the latest Mesos code from git (master branch). However, I also
tried the official 0.28.1 release, but no luck either.
Thanks,
Qian Zhang
On Sun, Jun 5, 2016 at 8:04 AM, Jie Yu wrote:
Which version are you using?
- Jie
On Sat, Jun 4, 2016 at 4:34 PM, Qian Zhang wrote:
Thanks Vinod and Dick.
I think my 3 ZK servers have formed a quorum, each of them has the
following config:
$ cat conf/zoo.cfg
server.1=192.168.122.132:2888:3888
server.2=192.168.122.225:2888:3888
server.3=192.168.122.171:2888:3888
autopurge.purgeInterval=6
autopurge.snapRe
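One way to verify the ensemble actually formed a quorum is ZooKeeper's four-letter-word `stat` command; this sketch assumes `nc` is installed and the client port is the default 2181 (the IPs are the ones from the zoo.cfg above):

```shell
# A healthy 3-node ensemble reports one "Mode: leader" and
# two "Mode: follower"; a node outside the quorum returns an error.
for host in 192.168.122.132 192.168.122.225 192.168.122.171; do
  echo "stat" | nc "$host" 2181 | grep Mode
done
```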
Hi Rinaldo,
Mac OS X has a variety of mechanisms designed to improve energy efficiency,
and many of these affect timer behavior. I suspect that this is what is
affecting you. There is a whitepaper here with more details:
https://www.apple.com/media/us/osx/2013/docs/OSX_Power_Efficiency_
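If timer coalescing specifically is suspected, it can be switched off temporarily for an experiment; this is my own suggestion, and the sysctl below is an assumption based on the OS X versions of that era:

```shell
# Disable timer coalescing for a test run (resets on reboot;
# re-enable with =1). Requires root.
sudo sysctl -w kern.timer.coalescing_enabled=0
```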
You told the master it needed a quorum of 2 and it's the only one
online, so it's bombing out.
That's the expected behaviour.
You need to start at least 2 ZooKeepers before they will form a functional
group; the same goes for the masters.
You haven't mentioned how you set up your ZooKeeper cluster, so I'm
assu
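For reference, the majority arithmetic behind that (not Mesos code, just an illustration):

```shell
# A majority quorum for an ensemble of n nodes is floor(n/2) + 1,
# so a 3-node cluster needs 2 live members; 1 alone cannot proceed.
n=3
quorum=$(( n / 2 + 1 ))
echo "$quorum"
```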
Hi all,
The vote for Mesos 0.28.2 (rc1) has passed with the
following votes.
+1 (Binding)
--
Vinod Kone
Till Toenshoff
Kapil Arya
+1 (Non-binding)
--
Guangya Liu
haosdent
There were no 0 or -1 votes.
Please find the release at:
https://di
You need to start all 3 masters simultaneously so that they can reach a
quorum. Also, it looks like each master is talking only to its local ZK
server; are you sure the 3 ZK servers are forming a quorum?
On Sat, Jun 4, 2016 at 9:42 AM, Qian Zhang wrote:
Hi Folks,
I am trying to set up a Mesos HA env with 3 nodes; each node has a
ZooKeeper running, so they form a ZooKeeper cluster. Then, when I
started the first Mesos master on one node with:
sudo ./bin/mesos-master.sh --zk=zk://127.0.0.1:2181/mesos --quorum=2
--work_dir=/var/lib/mesos/
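Note that the `--zk` flag above points at 127.0.0.1 only. The replies in this thread suggest each master should be given the whole ensemble in the connection string; a sketch using the IPs from the zoo.cfg shown earlier (other flags copied from the original command):

```shell
# All three ZK servers in one connection string, so each master sees
# the same /mesos znode and can ride out a single ZK node failure.
ZK="zk://192.168.122.132:2181,192.168.122.225:2181,192.168.122.171:2181/mesos"
echo "$ZK"
# On each of the 3 nodes:
#   sudo ./bin/mesos-master.sh --zk=$ZK --quorum=2 --work_dir=/var/lib/mesos/
```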
Hi,
Can you try adding && between the LIBPROCESS_HOST variable and the actual
command? We have been using this for some time now.
"cmd": "LIBPROCESS_HOST=$HOST && ./kafka-mesos.sh ..
Thanks,
./Siva.
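As a sketch, a full Marathon app definition using that pattern might look like the following; the id, resources, and the `scheduler` argument are my assumptions, not from the thread. Note that a plain `VAR=x && cmd` sets a shell variable without exporting it, so the `export` variant below is the safer way to make the value visible to the child process:

```json
{
  "id": "kafka-mesos-scheduler",
  "cmd": "export LIBPROCESS_HOST=$HOST && ./kafka-mesos.sh scheduler",
  "cpus": 0.5,
  "mem": 512,
  "instances": 1
}
```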
On Sat, Jun 4, 2016 at 8:34 AM, Eli Jordan wrote:
Hi @haosdent,
Based on my testing, this is not the case.
I ran a task (from Marathon) without using a Docker container that just printed
out all environment variables, i.e. while [ true ]; do env; sleep 2; done
I then ran a task that executed the same command inside an alpine Docker image.
When
Hi, @Jordan. I think no matter whether you use the MesosContainerizer or the
DockerContainerizer, LIBPROCESS_IP will always be set if you launch your
Agent with the `--ip` flag.
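A sketch of what that agent launch could look like on the 0.28-era binaries; the IPs reuse the ones from this thread, and `mesos-slave.sh` as the binary name is my assumption given the versions mentioned (it was renamed to mesos-agent around 1.0):

```shell
# With --ip set, libprocess binds to that address and tasks see
# LIBPROCESS_IP pointing at it.
sudo ./bin/mesos-slave.sh \
  --master=zk://192.168.122.132:2181,192.168.122.225:2181,192.168.122.171:2181/mesos \
  --ip=192.168.122.225 --work_dir=/var/lib/mesos
```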
On Fri, Jun 3, 2016 at 8:23 PM, Eli Jordan wrote:
>
> The reason I need to set LIBPROCESS_IP is because the slaves have 2
> network inter
Hi, Rinaldo. I tested your problem in my local Mesos (running on my Mac). It
looks normal on my side. I started it with
```
mesos-execute --master="localhost:5050" --name="test-sleep" --command="cd
/tmp && java SleepLatency"
```
and the output was
```
Registered executor on localhost
Starting task test-sleep
sh -c 'cd /tmp
```