shakeel

The dig command works perfectly; I see the correct address of the host
where I have mesos-dns running.
Unfortunately, there is no DNS server in the OpenStack environment where
I am working, and its absence is causing me a lot of issues with other
things as well.

2016-04-14 15:22 GMT+02:00 shakeel <[email protected]>:

> Hi,
>
> Once you have mesos-dns running from marathon, test that it's working
> properly with dig.
>
> (You might want to add your main DNS servers as resolvers within the
> mesos-dns config and allow recursion.)
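
A sketch of what that config tweak might look like. This is a fragment of
a hypothetical mesos-dns config.json; the ZooKeeper address and resolver
IPs are placeholders, not values from this thread:

```shell
# Hypothetical /etc/mesos-dns/config.json -- IPs are placeholders.
# "resolvers" lists the upstream DNS servers mesos-dns forwards to;
# "RecurseOn" lets it recurse for names outside the .mesos domain.
cat > /etc/mesos-dns/config.json <<'EOF'
{
  "zk": "zk://10.0.0.1:2181/mesos",
  "domain": "mesos",
  "port": 53,
  "resolvers": ["8.8.8.8", "8.8.4.4"],
  "RecurseOn": true
}
EOF
```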
>
> Otherwise, configure your slaves to use mesos-dns as their DNS server.
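
On each slave that might mean something like the following resolv.conf
(a sketch only; 10.0.0.5 stands in for whatever host runs mesos-dns, and
the second entry is the fallback resolver):

```shell
# /etc/resolv.conf on each slave -- addresses are placeholders.
# mesos-dns must be listed first so .mesos names are tried before
# falling back to the ordinary resolver.
nameserver 10.0.0.5
nameserver 8.8.8.8
```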
>
> I created a subdomain on my main DNS server for the mesos-dns domain,
> which points to the slave where mesos-dns is running.
>
> This way, when you try to access a mesos-dns URL from your browser, your
> main DNS server will know where to forward the request.
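
If the main DNS server happens to be BIND, the delegation Shakeel
describes could be sketched as a forward zone (the zone name and address
are assumptions, not taken from this thread):

```shell
# Fragment of named.conf on the main DNS server: forward all queries
# for the .mesos zone to the slave running mesos-dns (placeholder IP).
zone "mesos" {
    type forward;
    forward only;
    forwarders { 10.0.0.5; };
};
```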
>
> HTH.
>
> Kind Regards
> Shakeel Suffee
>
> On 14/04/16 14:06, Chris Baker wrote:
> > Also, make sure that the machine you're trying to launch from has
> > Mesos-DNS as its DNS server :)
> >
> > On Thu, Apr 14, 2016 at 3:33 AM Stefano Bianchi <[email protected]
> > <mailto:[email protected]>> wrote:
> >
> >     I'm correctly running mesos-dns from Marathon and it seems to work.
> >     But when I launch:
> >
> >     http://test.marathon.mesos
> >
> >     (where test is a running task on Marathon)
> >
> >     I get:
> >
> >     curl: (7) Failed connect to test.marathon.mesos:80; Connection
> refused
> >
> >     Where am I going wrong?
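
One way to narrow this down (a sketch; the port is a placeholder):
"Connection refused", as opposed to a resolution failure, suggests the
name resolved but nothing is listening on port 80. Marathon tasks are
usually assigned a random host port, which mesos-dns publishes as an SRV
record:

```shell
# Separate DNS resolution from port reachability.
dig +short test.marathon.mesos             # does the A record resolve?
dig +short _test._tcp.marathon.mesos SRV   # which host port did the task get?
curl http://test.marathon.mesos:31000/     # 31000 is a placeholder port
```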
> >
> >     Il 13/apr/2016 17:46, "June Taylor" <[email protected]
> >     <mailto:[email protected]>> ha scritto:
> >
> >         We are running pyspark against our cluster in coarse-grained
> >         mode by specifying the --master mesos://host:5050 flag, which
> >         properly creates one task on each node.
> >
> >         However, if the driver is shut down, it appears that these
> >         executors become orphaned_tasks, still consuming resources on
> >         the slave, but no longer being represented in the master's
> >         understanding of available resources.
> >
> >         Examining the stdout/stderr shows it exited:
> >
> >         Registered executor on node4
> >         Starting task 0
> >         sh -c 'cd spark-1*;  ./bin/spark-class
> >         org.apache.spark.executor.CoarseGrainedExecutorBackend
> >         --driver-url
> >         spark://[email protected]:41563
> >         <http://[email protected]:41563>
> >         --executor-id aa1337b6-43b0-4236-b445-c8ccbfb60506-S2/0
> >         --hostname node4 --cores 31 --app-id
> >         aa1337b6-43b0-4236-b445-c8ccbfb60506-0097'
> >         Forked command at 117620
> >         Command exited with status 1 (pid: 117620)
> >
> >         But, these executors are remaining on all the slaves.
> >
> >         What can we do to clear them out? Stopping mesos-slave and
> >         removing the full work-dir is successful, but also destroys our
> >         other tasks.
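
One less destructive option than wiping the work-dir might be to tear
down the lingering Spark framework via the master's teardown endpoint, so
the master reaps its executors. This is a sketch: host:5050 is a
placeholder, and the framework ID (taken from the app-id in the log
above) must match the orphaned framework shown in the master UI or
/master/state:

```shell
# Ask the Mesos master to tear down the dead driver's framework;
# this kills its remaining executors and frees their resources.
curl -XPOST http://host:5050/master/teardown \
     -d 'frameworkId=aa1337b6-43b0-4236-b445-c8ccbfb60506-0097'
```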
> >
> >         Thanks,
> >         June Taylor
> >         System Administrator, Minnesota Population Center
> >         University of Minnesota
> >
>
