Stefano,

Try inspecting the DNS directly. For example, here is an nslookup query that
finds the port and slave node hosting a running Docker container started by
Marathon, followed by a curl command hitting that node on the port specified
in the SRV record. I am not sure your expectation of test.marathon.mesos
being a valid DNS A record is correct.

june@cluster:~$ nslookup -type=SRV _tomsflask._tcp.marathon.mesos

_tomsflask._tcp.marathon.mesos service = 0 0 31427 tomsflask-5p8ho-s83.marathon.slave.mesos.

june@cluster:~$ curl http://tomsflask-5p8ho-s83.marathon.slave.mesos:31427

Hello World from Flask (default)
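If you want to script this lookup instead of reading the nslookup output by
hand, here is a minimal sketch. The srv_to_url helper is hypothetical (not
part of Mesos-DNS); it just parses an SRV answer line in the
"priority weight port target." format shown above and builds the curl URL.
In practice you would feed it the output of something like
`dig +short SRV _tomsflask._tcp.marathon.mesos`.

```shell
#!/bin/sh
# Hypothetical helper: turn one SRV answer line ("prio weight port target.")
# into an http URL pointing at the slave node and service port.
srv_to_url() {
  # Field 3 is the port, field 4 is the target host; strip its trailing dot.
  echo "$1" | awk '{ sub(/\.$/, "", $4); printf "http://%s:%s\n", $4, $3 }'
}

# Using the SRV answer from the nslookup output above:
srv_to_url "0 0 31427 tomsflask-5p8ho-s83.marathon.slave.mesos."
# -> http://tomsflask-5p8ho-s83.marathon.slave.mesos:31427
```

You could then pass that URL straight to curl, as in the example above.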


Thanks,
June Taylor
System Administrator, Minnesota Population Center
University of Minnesota

On Thu, Apr 14, 2016 at 2:33 AM, Stefano Bianchi <[email protected]>
wrote:

> I'm correctly running mesos-dns from Marathon and it seems to work.
> But when I launch:
>
> http://test.marathon.mesos
>
> (where test is a running task on Marathon)
>
> I get:
>
> curl: (7) Failed connect to test.marathon.mesos:80; Connection refused
>
> Where am I wrong?
> On 13 Apr 2016 at 17:46, "June Taylor" <[email protected]> wrote:
>
>> We are running pyspark against our cluster in coarse-grained mode by
>> specifying the --master mesos://host:5050 flag, which properly creates
>> one task on each node.
>>
>> However, if the driver is shut down, it appears that these executors
>> become orphaned_tasks, still consuming resources on the slave, but no
>> longer being represented in the master's understanding of available
>> resources.
>>
>> Examining the stdout/stderr shows it exited:
>>
>> Registered executor on node4
>> Starting task 0
>> sh -c 'cd spark-1*;  ./bin/spark-class
>> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://
>> [email protected]:41563 --executor-id
>> aa1337b6-43b0-4236-b445-c8ccbfb60506-S2/0 --hostname node4 --cores 31
>> --app-id aa1337b6-43b0-4236-b445-c8ccbfb60506-0097'
>> Forked command at 117620
>> Command exited with status 1 (pid: 117620)
>>
>> But, these executors are remaining on all the slaves.
>>
>> What can we do to clear them out? Stopping mesos-slave and removing the
>> full work-dir is successful, but also destroys our other tasks.
>>
>> Thanks,
>> June Taylor
>> System Administrator, Minnesota Population Center
>> University of Minnesota
>>
>
