>>> Maybe it would be a good idea if Mesos was setting env variables named 
>>> AGENT_IP_0, AGENT_IP_1 and so on for every IP interface on the agent, maybe 
>>> AGENT_BIND_IP if bind IP is different than 0.0.0.0

That said, it'd be tricky to always be sure that AGENT_IP_0 was the one you wanted. 
If the different interfaces are in distinct network subnets, have you considered 
wrapping your framework (e.g. if you're relying on the ENTRYPOINT of the docker 
image) in a script that simply plucks out the right IP address by looking at the 
interfaces and grepping for the right-looking range?
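
A rough sketch of such a wrapper, purely illustrative (the 192.168.3.0/24 range 
and the kafka-mesos command are borrowed from later in the thread, and it assumes 
the `ip` tool is present in the image):

    #!/bin/sh
    # Pick the first IPv4 address on the agent that falls in the expected range.
    LIBPROCESS_IP=$(ip -4 -o addr show | awk '{print $4}' | cut -d/ -f1 | grep '^192\.168\.3\.' | head -n 1)
    export LIBPROCESS_IP
    exec ./kafka-mesos.sh scheduler "$@"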

We did some experiments the other week pertaining to your issue, to see if we 
could find a way of exposing the LIBPROCESS_IP variable that the mesos agent 
provides to the executor (in this case, the docker executor) with some fun env 
var hacks, but it doesn't look like any shell expansion happens along the way (for 
good reason, really), so we couldn't find a way.

Given that you're using host networking, I'd suggest trying to detect the right 
interface to bind to yourself, on the executor side, and setting LIBPROCESS_IP to 
the result of that logic before spawning the framework. Alternatively, you could 
ensure the "public" bind interface of the agent is announced via a DNS record 
(allowing you to do a simple `host $HOST`).
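
On the executor side that detection could be as simple as something like this (a 
sketch; it assumes $HOST resolves to the routable address and that the `host` 
utility is available in the image):

    # Forward-resolve the agent hostname Marathon passes in as $HOST.
    LIBPROCESS_IP=$(host "$HOST" | awk '/has address/ {print $4; exit}')
    export LIBPROCESS_IP
    # ...then exec the framework as usual.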

Not sure if this helps; I'm a little late to the thread. We essentially have the 
same problem when allowing user devices to connect to the cluster to run 
frameworks via a VPN: their machines have multiple IPs, but only one is routable 
correctly from the cluster. A similar grep plus LIBPROCESS_IP does the trick 
there.

> On 7 Jun 2016, at 15:43, Eli Jordan <[email protected]> wrote:
> 
> Currently I have it configured to use host networking
> 
> Thanks
> Eli
> 
> On 7 Jun 2016, at 11:25, Radoslaw Gruchalski <[email protected] 
> <mailto:[email protected]>> wrote:
> 
>> Yes, because that runs in the host network. This leads to a question: is your 
>> docker task using bridge or host networking?
>> 
>> -- 
>> Best regards,
>> Rad
>> 
>> 
>> 
>> 
>> On Tue, Jun 7, 2016 at 3:21 AM +0200, "Eli Jordan" <[email protected] 
>> <mailto:[email protected]>> wrote:
>> 
>> It's important to note that if you run a task with the command executor 
>> (i.e. not using docker), LIBPROCESS_IP is defined, along with several other 
>> variables that are not defined in docker.
>> 
>> Thanks
>> Eli
>> 
>> On 7 Jun 2016, at 10:05, Radoslaw Gruchalski <[email protected] 
>> <mailto:[email protected]>> wrote:
>> 
>>> I think the problem is that it is not known which agent the task is running 
>>> on until the task is in the running state.
>>> Hence the master can’t pass that as an env variable to the task.
>>> However, I see your point. There is an agent host name available in the 
>>> task as $HOST. Maybe it would be a good idea if Mesos was setting env 
>>> variables named AGENT_IP_0, AGENT_IP_1 and so on for every IP interface on 
>>> the agent, maybe AGENT_BIND_IP if bind IP is different than 0.0.0.0. OTOH, 
>>> I can see how this could be considered a security issue. I am not 
>>> sure what the implications could be.
>>> 
>>> Anybody else care to comment?
>>> 
>>> – 
>>> Best regards,
>>> 
>>> Radek Gruchalski
>>> 
>>> [email protected] <mailto:[email protected]>
>>> de.linkedin.com/in/radgruchalski <http://de.linkedin.com/in/radgruchalski>
>>> 
>>> On June 7, 2016 at 1:42:46 AM, Eli Jordan ([email protected] 
>>> <mailto:[email protected]>) wrote:
>>> 
>>>> Thanks Radoslaw. I'm not really set on using host names, I just want a 
>>>> reliable way to start the framework. In the meantime I have gone with a 
>>>> solution similar to what you suggested. We use the /etc/default/mesos file 
>>>> to configure mesos, and it has the IP defined, so I just mounted that into 
>>>> the container and read the IP.
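>>>>
>>>> Inside the container that boils down to something like the following (a rough 
>>>> sketch; it assumes the file defines the address on an IP=... line, so the 
>>>> pattern would need adjusting to whatever /etc/default/mesos actually contains):
>>>>
>>>>     # /etc/default/mesos is the agent config mounted read-only into the container
>>>>     export LIBPROCESS_IP=$(grep '^IP=' /etc/default/mesos | cut -d= -f2)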
>>>> 
>>>> I would like to avoid having a dependency on the file system of the 
>>>> agents, though. I'm not sure why I can't have the docker executor set the 
>>>> LIBPROCESS_IP variable in the same way the command executor does.
>>>> 
>>>> Thanks
>>>> Eli
>>>> 
>>>> On 6 Jun 2016, at 21:44, Radoslaw Gruchalski <[email protected] 
>>>> <mailto:[email protected]>> wrote:
>>>> 
>>>>> Out of curiosity, why are you insisting on using host names?
>>>>> Say you have 1 master and 2 agents with these IPs:
>>>>> 
>>>>> - mesos-master-0: 10.100.1.10
>>>>> - mesos-agent-0: 10.100.1.11
>>>>> - mesos-agent-1: 10.100.1.12
>>>>> 
>>>>> Your problem is that you have no way to obtain the IP address of the agent 
>>>>> inside the container. Correct?
>>>>> One way to overcome this is to create a shell file, say /etc/mesos-agent.sh, 
>>>>> with contents like:
>>>>> 
>>>>> ...
>>>>> AGENT_IP=10.100.1.11
>>>>> ...
>>>>> 
>>>>> If you’re using Marathon, you can copy that file to the sandbox using 
>>>>> docker volumes:
>>>>> 
>>>>>             {
>>>>>                 "containerPath": "/etc/mesos-agent.sh",
>>>>>                 "hostPath": "/etc/mesos-agent.sh",
>>>>>                 "mode": "RO"
>>>>>             }
>>>>> 
>>>>> You can now source that in the container to set the 
>>>>> LIBPROCESS_ADVERTISE_IP.
>>>>> Other applications simply use the mesos-agent-X host name. That’s without 
>>>>> mesos-dns.
>>>>> Things are easier with mesos-dns or consul service catalog (I prefer the 
>>>>> latter).
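>>>>>
>>>>> In the container that would look roughly like this (a sketch; the last line 
>>>>> stands in for whatever command normally starts your framework):
>>>>>
>>>>>     . /etc/mesos-agent.sh                        # defines AGENT_IP
>>>>>     export LIBPROCESS_ADVERTISE_IP="$AGENT_IP"
>>>>>     exec ./kafka-mesos.sh scheduler ...          # your usual framework command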
>>>>> 
>>>>> – 
>>>>> Best regards,
>>>>> 
>>>>> Radek Gruchalski
>>>>> 
>>>>> [email protected] <mailto:[email protected]>
>>>>> de.linkedin.com/in/radgruchalski <http://de.linkedin.com/in/radgruchalski>
>>>>> 
>>>>> On June 6, 2016 at 1:16:07 PM, Eli Jordan ([email protected] 
>>>>> <mailto:[email protected]>) wrote:
>>>>> 
>>>>>> The issue refers to LIBPROCESS_IP not LIBPROCESS_HOST. I haven’t been 
>>>>>> able to find the LIBPROCESS_HOST variable documented anywhere.
>>>>>> 
>>>>>> My understanding is that the scheduler uses LIBPROCESS_IP to determine 
>>>>>> which network interface to bind to, and also which IP to advertise to 
>>>>>> the master so that the master can send offers. There is also another 
>>>>>> variable, LIBPROCESS_ADVERTISE_IP. If this is defined, then LIBPROCESS_IP 
>>>>>> is used to determine which network interface to bind to, and 
>>>>>> LIBPROCESS_ADVERTISE_IP is used to determine which IP to advertise to 
>>>>>> the master.
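>>>>>>
>>>>>> In other words, something like this (example values only; 192.168.3.16 is 
>>>>>> just one of the agent IPs in my cluster):
>>>>>>
>>>>>>     export LIBPROCESS_IP=0.0.0.0                 # interface to bind to
>>>>>>     export LIBPROCESS_ADVERTISE_IP=192.168.3.16  # IP advertised to the master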
>>>>>> 
>>>>>> It would be great if there were a LIBPROCESS_ADVERTISE_HOST variable; 
>>>>>> then I could just use the $HOST variable to define it.
>>>>>> 
>>>>>>> On 5 Jun 2016, at 10:41 pm, Sivaram Kannan <[email protected] 
>>>>>>> <mailto:[email protected]>> wrote:
>>>>>>> 
>>>>>>> 
>>>>>>> I have been using this approach from 0.23.0 to 0.28.0, and it has 
>>>>>>> definitely been working (although for a different framework). Inside the 
>>>>>>> docker container, can you see the $HOST variable defined?
>>>>>>> 
>>>>>>> The ticket you referred to says that the app definition needs to define 
>>>>>>> LIBPROCESS_HOST=$HOST to make the framework take the proper IP - you 
>>>>>>> are describing a different problem.
>>>>>>> 
>>>>>>> Thanks,
>>>>>>> ./Siva.
>>>>>>> 
>>>>>>> On Sun, Jun 5, 2016 at 4:30 AM, Eli Jordan <[email protected] 
>>>>>>> <mailto:[email protected]>> wrote:
>>>>>>> I found this issue on the mesos jira that describes the exact issue I 
>>>>>>> am hitting.
>>>>>>> 
>>>>>>> https://issues.apache.org/jira/browse/MESOS-3740 
>>>>>>> <https://issues.apache.org/jira/browse/MESOS-3740>
>>>>>>> 
>>>>>>> It doesn't appear to be resolved. 
>>>>>>> 
>>>>>>> Thanks
>>>>>>> Eli
>>>>>>> 
>>>>>>> On 5 Jun 2016, at 16:46, Eli Jordan <[email protected] 
>>>>>>> <mailto:[email protected]>> wrote:
>>>>>>> 
>>>>>>>> Hmmm… that doesn’t seem to work for me. What version of mesos does 
>>>>>>>> this work in? I am running 0.27.1.
>>>>>>>> 
>>>>>>>> When using this approach, I still get the following error when the 
>>>>>>>> kafka mesos framework is starting up.
>>>>>>>> 
>>>>>>>> "Scheduler driver bound to loopback interface! Cannot communicate with 
>>>>>>>> remote master(s). You might want to set 'LIBPROCESS_IP' environment 
>>>>>>>> variable to use a routable IP address.”
>>>>>>>> 
>>>>>>>> I tried setting LIBPROCESS_IP to '0.0.0.0' and 
>>>>>>>> LIBPROCESS_ADVERTISE_IP to the public IP, and this works. But the host 
>>>>>>>> variations don't seem to work (i.e. setting LIBPROCESS_IP=0.0.0.0 and 
>>>>>>>> LIBPROCESS_ADVERTISE_HOST=$HOST).
>>>>>>>> 
>>>>>>>> It seems libprocess doesn't support using host names.
>>>>>>>> 
>>>>>>>> I think I might have to run the framework outside of docker, but I 
>>>>>>>> would really like to avoid this. 
>>>>>>>> 
>>>>>>>> This problem would be solved if the docker executor was able to set 
>>>>>>>> the same environment variables as the command executor. Is there a way 
>>>>>>>> to make this happen?
>>>>>>>> 
>>>>>>>> I saw that mesos can be extended with a Hook 'module' to set extra 
>>>>>>>> environment variables in docker containers. This might be a solution, 
>>>>>>>> but it seems overwrought for a simple problem.
>>>>>>>> 
>>>>>>>> 
>>>>>>>>> On 5 Jun 2016, at 12:50 am, Sivaram Kannan <[email protected] 
>>>>>>>>> <mailto:[email protected]>> wrote:
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> Hi,
>>>>>>>>> 
>>>>>>>>> Can you try adding && between the LIBPROCESS_HOST variable and the 
>>>>>>>>> actual command? We have been using this for some time now.
>>>>>>>>> 
>>>>>>>>> "cmd": "LIBPROCESS_HOST=$HOST && ./kafka-mesos.sh ..
>>>>>>>>> 
>>>>>>>>> Thanks,
>>>>>>>>> ./Siva.
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> On Sat, Jun 4, 2016 at 8:34 AM, Eli Jordan <[email protected] 
>>>>>>>>> <mailto:[email protected]>> wrote:
>>>>>>>>> Hi @haosdent
>>>>>>>>> 
>>>>>>>>> Based on my testing, this is not the case.
>>>>>>>>> 
>>>>>>>>> I ran a task (from marathon) without using a docker container that 
>>>>>>>>> just printed out all environment variables. i.e. while [ true ]; do 
>>>>>>>>> env; sleep 2; done
>>>>>>>>> 
>>>>>>>>> I then ran a task that executed the same command inside an alpine 
>>>>>>>>> docker image.
>>>>>>>>> 
>>>>>>>>> When running without a docker image LIBPROCESS_IP was defined along 
>>>>>>>>> with many other variables. 
>>>>>>>>> 
>>>>>>>>> Sample output when running without docker (note LIBPROCESS_IP is 
>>>>>>>>> defined):
>>>>>>>>> 
>>>>>>>>> Registered executor on mesos-slave0
>>>>>>>>> Starting task plain-test.5e5b00cc-2645-11e6-a3dd-080027aa149e
>>>>>>>>> sh -c 'while [ true ]; do env; sleep 2; done'
>>>>>>>>> Forked command at 16571
>>>>>>>>> LIBPROCESS_IP=192.168.3.16
>>>>>>>>> MESOS_AGENT_ENDPOINT=192.168.3.16:5051
>>>>>>>>> MESOS_EXECUTOR_REGISTRATION_TIMEOUT=5mins
>>>>>>>>> HOST=mesos-slave0
>>>>>>>>> SHELL=/bin/sh
>>>>>>>>> MESOS_DIRECTORY=/var/mesos/slaves/7ad17efe-0f9e-4703-9d2e-7fb9ee03f64c-S0/frameworks/aae929c7-24a5-4463-9ae0-bc7b044973c5-0000/executors/plain-test.5e5b00cc-2645-11e6-a3dd-080027aa149e/runs/c9b6ef86-b37d-4e3c-b1ca-bd680aed779f
>>>>>>>>> PORT0=31082
>>>>>>>>> PORT_10001=31082
>>>>>>>>> LC_ALL=en_US.UTF-8
>>>>>>>>> … more
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> Sample output when running with docker (note LIBPROCESS_IP is not 
>>>>>>>>> defined):
>>>>>>>>> 
>>>>>>>>> --container="mesos-7ad17efe-0f9e-4703-9d2e-7fb9ee03f64c-S0.f3a94ab4-dfff-4e97-b806-f1cc501ecf42"
>>>>>>>>>  --docker="docker" --docker_socket="/var/run/docker.sock" 
>>>>>>>>> --help="false" --initialize_driver_logging="true" 
>>>>>>>>> --launcher_dir="/usr/libexec/mesos" --logbufsecs="0" 
>>>>>>>>> --logging_level="INFO" --mapped_directory="/mnt/mesos/sandbox" 
>>>>>>>>> --quiet="false" 
>>>>>>>>> --sandbox_directory="/var/mesos/slaves/7ad17efe-0f9e-4703-9d2e-7fb9ee03f64c-S0/frameworks/aae929c7-24a5-4463-9ae0-bc7b044973c5-0000/executors/alpine-test.77d5a3d9-2644-11e6-a3dd-080027aa149e/runs/f3a94ab4-dfff-4e97-b806-f1cc501ecf42"
>>>>>>>>>  --stop_timeout="0ns"
>>>>>>>>> Registered docker executor on mesos-slave0
>>>>>>>>> Starting task alpine-test.77d5a3d9-2644-11e6-a3dd-080027aa149e
>>>>>>>>> HOSTNAME=984809b0b720
>>>>>>>>> SHLVL=1
>>>>>>>>> HOME=/root
>>>>>>>>> PORT=31295
>>>>>>>>> MESOS_CONTAINER_NAME=mesos-7ad17efe-0f9e-4703-9d2e-7fb9ee03f64c-S0.f3a94ab4-dfff-4e97-b806-f1cc501ecf42
>>>>>>>>> MARATHON_APP_ID=/alpine-test
>>>>>>>>> PORTS=31295
>>>>>>>>> PORT0=31295
>>>>>>>>> MARATHON_APP_DOCKER_IMAGE=alpine
>>>>>>>>> PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
>>>>>>>>> MESOS_SANDBOX=/mnt/mesos/sandbox
>>>>>>>>> MARATHON_APP_RESOURCE_DISK=0.0
>>>>>>>>> MARATHON_APP_RESOURCE_MEM=128.0
>>>>>>>>> HOST=mesos-slave0
>>>>>>>>> PORT_10000=31295
>>>>>>>>> MARATHON_APP_VERSION=2016-05-30T08:56:59.065Z
>>>>>>>>> MARATHON_APP_LABELS=
>>>>>>>>> PWD=/
>>>>>>>>> MESOS_TASK_ID=alpine-test.77d5a3d9-2644-11e6-a3dd-080027aa149e
>>>>>>>>> MARATHON_APP_RESOURCE_CPUS=1.0
>>>>>>>>> 
>>>>>>>>> Is there some other config I need to do for the docker containerizer? 
>>>>>>>>> Any help greatly appreciated.
>>>>>>>>> 
>>>>>>>>>> On 4 Jun 2016, at 7:28 pm, haosdent <[email protected] 
>>>>>>>>>> <mailto:[email protected]>> wrote:
>>>>>>>>>> 
>>>>>>>>>> Hi, @Jordan. I think no matter whether you use the MesosContainerizer 
>>>>>>>>>> or the DockerContainerizer, LIBPROCESS_IP would always be set if you 
>>>>>>>>>> launch your Agent with the `--ip` flag.
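>>>>>>>>>>
>>>>>>>>>> For example (the values here are only illustrative):
>>>>>>>>>>
>>>>>>>>>>     mesos-slave --master=mesos-master:5050 --ip=192.168.3.16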
>>>>>>>>>> 
>>>>>>>>>> On Fri, Jun 3, 2016 at 8:23 PM, Eli Jordan <[email protected] 
>>>>>>>>>> <mailto:[email protected]>> wrote:
>>>>>>>>>> 
>>>>>>>>>> The reason I need to set LIBPROCESS_IP is because the slaves have 2 
>>>>>>>>>> network interfaces, and the docker container is running in host 
>>>>>>>>>> networking mode. So libmesos doesn’t know which IP to advertise. The 
>>>>>>>>>> hostnames of the slaves are all resolvable.
>>>>>>>>>> 
>>>>>>>>>> I have noticed that if I run a marathon app that doesn't use docker, 
>>>>>>>>>> e.g. while [ true ]; do env; sleep 2; done, LIBPROCESS_IP is 
>>>>>>>>>> defined in the environment. However, when running a docker image 
>>>>>>>>>> this variable is not defined. 
>>>>>>>>>> 
>>>>>>>>>> Is there a way to have marathon pass along all environment variables 
>>>>>>>>>> defined by mesos?
>>>>>>>>>> Thanks
>>>>>>>>>> Eli
>>>>>>>>>> 
>>>>>>>>>> On 4 Apr 2016, at 14:12, Eli Jordan <[email protected] 
>>>>>>>>>> <mailto:[email protected]>> wrote:
>>>>>>>>>> 
>>>>>>>>>>> @haosdent I'm not sure how this works internally, but it seems the 
>>>>>>>>>>> mesos master needs to send requests to the framework for resource 
>>>>>>>>>>> offers, and therefore needs to know the external IP (i.e. the host 
>>>>>>>>>>> IP).
>>>>>>>>>>> 
>>>>>>>>>>> @craig w
>>>>>>>>>>> Would I need to do this in the cmd portion of the marathon JSON? I 
>>>>>>>>>>> currently have the config below
>>>>>>>>>>> {
>>>>>>>>>>>     "container": ...,
>>>>>>>>>>>     "id":"kafka-mesos-scheduler",
>>>>>>>>>>>     "cpus": 0.5,
>>>>>>>>>>>     "mem": 256,
>>>>>>>>>>>     "ports": [9999],
>>>>>>>>>>>     "cmd": "./kafka-mesos.sh scheduler --master=mesos-master:5050 
>>>>>>>>>>> --zk=mesos-master:2181 --api=http://mesos-slave0:9999 
>>>>>>>>>>> <http://mesos-slave0:9999/> --storage=zk:/kafka-mesos",
>>>>>>>>>>>     "instances": 1,
>>>>>>>>>>>     "constraints": [["hostname", "LIKE", "mesos-slave0"]],
>>>>>>>>>>>     "env": {
>>>>>>>>>>>         "LIBPROCESS_IP": "192.168.3.16"
>>>>>>>>>>>     }
>>>>>>>>>>> }
>>>>>>>>>>> 
>>>>>>>>>>> @Chris Baker Currently we don't have mesos-dns set up, but if this 
>>>>>>>>>>> works it would seem to be a nice solution. However, I did try 
>>>>>>>>>>> setting LIBPROCESS_IP to the slave hostname and it seems to produce 
>>>>>>>>>>> a parse error. So I think it needs to be an actual IP address.
>>>>>>>>>>> 
>>>>>>>>>>> I was hoping there would be a configuration for the slave that 
>>>>>>>>>>> would automatically populate this env variable when starting the 
>>>>>>>>>>> docker container. So I don’t need to complicate the marathon file, 
>>>>>>>>>>> and can reuse it in different clusters.
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>>> On 4 Apr 2016, at 11:25 am, Chris Baker <[email protected] 
>>>>>>>>>>>> <mailto:[email protected]>> wrote:
>>>>>>>>>>>> 
>>>>>>>>>>>> Alternatively, because the $HOST value is indirect (it would 
>>>>>>>>>>>> require a runtime step to "export LIBPROCESS_IP=$HOST"), you could 
>>>>>>>>>>>> fall back on Mesos-DNS, if that's part of the cluster deployment, 
>>>>>>>>>>>> setting LIBPROCESS_IP to the (a priori) Mesos-DNS entry 
>>>>>>>>>>>> corresponding to the service.
>>>>>>>>>>>> 
>>>>>>>>>>>> On Sun, Apr 3, 2016 at 5:06 PM craig w <[email protected] 
>>>>>>>>>>>> <mailto:[email protected]>> wrote:
>>>>>>>>>>>> Hi, marathon sets the HOST env var. If it's not the IP address, you 
>>>>>>>>>>>> can use getent with the value from HOST to figure it out.
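>>>>>>>>>>>>
>>>>>>>>>>>> For example, something like this (a sketch; it assumes getent is 
>>>>>>>>>>>> available in the image):
>>>>>>>>>>>>
>>>>>>>>>>>>     LIBPROCESS_IP=$(getent hosts "$HOST" | awk '{print $1; exit}')
>>>>>>>>>>>>     export LIBPROCESS_IP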
>>>>>>>>>>>> 
>>>>>>>>>>>> >However, in order for the frameworks to receive resource offers I 
>>>>>>>>>>>> >need to set the LIBPROCESS_IP environment variable to the hosts 
>>>>>>>>>>>> >IP address for the docker container running the frameworks. 
>>>>>>>>>>>> 
>>>>>>>>>>>> Hi, @Gmail. Could you provide more details about this?
>>>>>>>>>>>> 
>>>>>>>>>>>> On Sun, Apr 3, 2016 at 10:40 PM, Rad Gruchalski 
>>>>>>>>>>>> <[email protected] <mailto:[email protected]>> wrote:
>>>>>>>>>>>> Hi Gmail,
>>>>>>>>>>>> 
>>>>>>>>>>>> AFAIK not. The only way to do so is setting up the env variable as 
>>>>>>>>>>>> you do now.
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> Kind regards,
>>>>>>>>>>>> 
>>>>>>>>>>>> Radek Gruchalski
>>>>>>>>>>>> 
>>>>>>>>>>>> [email protected] <mailto:[email protected]>
>>>>>>>>>>>> de.linkedin.com/in/radgruchalski/ 
>>>>>>>>>>>> <http://de.linkedin.com/in/radgruchalski/>
>>>>>>>>>>>> 
>>>>>>>>>>>> On Sunday, 3 April 2016 at 16:09, Gmail wrote:
>>>>>>>>>>>> 
>>>>>>>>>>>>> I'm pretty new to mesos and marathon, and I'm running a couple of 
>>>>>>>>>>>>> frameworks with marathon (Kafka and Elasticsearch). However, in 
>>>>>>>>>>>>> order for the frameworks to receive resource offers I need to set 
>>>>>>>>>>>>> the LIBPROCESS_IP environment variable to the host's IP address 
>>>>>>>>>>>>> for the docker container running the frameworks. Currently I am 
>>>>>>>>>>>>> working around this by using a constraint to hard-wire the slave 
>>>>>>>>>>>>> that the framework gets launched on, so that I can put the 
>>>>>>>>>>>>> slave's IP in the marathon json file.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Obviously this is not ideal. Is there a better way to define the 
>>>>>>>>>>>>> host IP inside the docker container?
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Sent from my iPad
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> --
>>>>>>>>>>>> Best Regards,
>>>>>>>>>>>> Haosdent Huang
>>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> --
>>>>>>>>>> Best Regards,
>>>>>>>>>> Haosdent Huang
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> --
>>>>>>>>> ever tried. ever failed. no matter.
>>>>>>>>> try again. fail again. fail better.
>>>>>>>>>         -- Samuel Beckett
>>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> --
>>>>>>> ever tried. ever failed. no matter.
>>>>>>> try again. fail again. fail better.
>>>>>>>         -- Samuel Beckett
