If it _needs_ to be there always, then I'd roll it out with whatever
automation you use to deploy the Mesos workers; depending on the scale
you're running at, launching it as a Mesos task is likely to be less
reliable due to outages etc.

(I understand the 'maybe all hosts' constraint, but if it's 'up to one
per host', it sounds like a CM (configuration management) issue to me.)
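As a sketch of what that could look like: a minimal systemd unit that your CM tool installs and enables on every Mesos worker host. The paths, the `storm` user, and the install location are assumptions here, not anything Storm or Mesos mandates; adjust for your environment.

```ini
# /etc/systemd/system/storm-logviewer.service
# Hypothetical unit -- assumes Storm is installed under /opt/storm
# and runs as a dedicated 'storm' user.
[Unit]
Description=Storm logviewer
After=network.target

[Service]
User=storm
ExecStart=/opt/storm/bin/storm logviewer
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Then the CM run (Ansible/Puppet/Chef/whatever) just drops this file and runs `systemctl enable --now storm-logviewer` on every host in the worker group, and systemd keeps it alive across crashes and reboots.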

On 30 June 2017 at 23:58, Erik Weathers <[email protected]> wrote:
> hi Mesos folks!
>
> My team is largely responsible for maintaining the Storm-on-Mesos framework.
> It suffers from a problem related to log retrieval: Storm has a process
> called the "logviewer" that is assumed to exist on every host, and the Storm
> UI provides links to contact this process to download logs (and other
> debugging artifacts). Our team manually runs this process on each Mesos
> host, but it would be nice to launch it automatically onto any Mesos host
> where Storm work gets allocated. [0]
>
> I have read that Mesos has added support for Kubernetes-esque "pods" as of
> version 1.1.0, but from my naive understanding that feature seems
> insufficient for implementing our desired behavior.  Specifically, Storm
> only supports connecting to 1 logviewer per host, so unless pods can have
> separate containers inside each pod [1], and can also dynamically change
> the set of executors and tasks inside the pod [2], I don't see how we'd be
> able to use them.
>
> Is there any existing feature in Mesos that might help us accomplish our
> goal?  Or any upcoming features?
>
> Thanks!!
>
> - Erik
>
> [0] Thus the "all" in quotes in the subject of this email, because it
> *might* be all hosts, but it definitely would be all hosts where Storm gets
> work assigned.
>
> [1] The Storm-on-Mesos framework leverages separate containers for each
> topology's Supervisor and Worker processes, to provide isolation between
> topologies.
>
> [2] The assignment of Storm Supervisors (a Mesos Executor) + Storm Workers
> (a Mesos Task) onto hosts is ever-changing in a given instance of a
> Storm-on-Mesos framework.  I.e., as topologies get launched and die, or have
> their worker processes die, the processes are dynamically redistributed
> across the various Mesos worker hosts.  So existing containers often have
> tasks added to them (thus growing their footprint) or removed from them
> (thus shrinking it).
