Thanks for the info, Kevin.  Seems there are no JIRAs or design docs floating
around yet for "admin tasks" or "daemon sets".

Just FYI, this is the ticket in Storm for the problem I've been mentioning:

https://issues.apache.org/jira/browse/STORM-1342

I'll update it with the info you've provided below; for now we'll rely
on manually deploying logviewers.
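
For anyone else hitting this before daemon-set support lands, here's a minimal
sketch of the manual workaround (the install path and user are assumptions;
adjust for your own Storm install):

```ini
# /etc/systemd/system/storm-logviewer.service
# Hypothetical unit; assumes Storm lives under /opt/storm and a "storm" user exists.
[Unit]
Description=Storm logviewer (serves worker logs to the Storm UI)
After=network.target

[Service]
User=storm
ExecStart=/opt/storm/bin/storm logviewer
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then enable it on every Mesos agent host via your config-management tooling,
e.g. `systemctl enable --now storm-logviewer`.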

Thanks!

- Erik

On Sat, Jul 1, 2017 at 10:09 AM Kevin Klues <[email protected]> wrote:

> What you are describing is a feature we call 'admin tasks' or 'daemon
> sets'.
>
> Unfortunately, there is no direct support for these yet, but we do have
> plans in the (relatively) near future to start working on it.
>
> One of our use cases is exactly what you describe with the logging
> service. On DC/OS we currently run our logging service as a systemd unit
> outside of Mesos, since we can't guarantee it gets launched everywhere (the
> same is true for a bunch of other services as well, namely metrics).
>
> We don't have an exact timeline for when we will build this support yet,
> but we will certainly announce it once we start actively working on it.
>
>
> Erik Weathers <[email protected]> wrote on Sat., 1 July 2017 at
> 09:45:
>
>> That works for our particular use case, and is effectively what *we* do,
>> but it renders Storm a "strange bird" among Mesos frameworks.  Is there no
>> trickery that could be played with Mesos roles and/or reservations?
>>
>> - Erik
>>
>> On Sat, Jul 1, 2017 at 3:57 AM Dick Davies <[email protected]>
>> wrote:
>>
>>> If it _needs_ to be there always, then I'd roll it out with whatever
>>> automation you use to deploy the Mesos workers; depending on the scale
>>> you're running at, launching it as a task is likely to be less reliable
>>> due to outages, etc.
>>>
>>> (I understand the 'maybe all hosts' constraint, but if it's 'up to one
>>> per host', it sounds like a CM issue to me.)
>>>
>>> On 30 June 2017 at 23:58, Erik Weathers <[email protected]> wrote:
>>> > hi Mesos folks!
>>> >
>>> > My team is largely responsible for maintaining the Storm-on-Mesos
>>> > framework.  It suffers from a problem related to log retrieval:  Storm
>>> > has a process called the "logviewer" that is assumed to exist on every
>>> > host, and the Storm UI provides links to contact this process to
>>> > download logs (and other debugging artifacts).  Our team manually runs
>>> > this process on each Mesos host, but it would be nice to launch it
>>> > automatically onto any Mesos host where Storm work gets allocated. [0]
>>> >
>>> > I have read that Mesos has added support for Kubernetes-esque "pods"
>>> > as of version 1.1.0, but from my naive understanding that feature
>>> > seems somewhat insufficient for implementing our desired behavior.
>>> > Specifically, Storm only has support for connecting to 1 logviewer per
>>> > host, so unless pods can have separate containers inside each pod [1],
>>> > and also dynamically change the set of executors and tasks inside the
>>> > pod [2], I don't see how we'd be able to use them.
>>> >
>>> > Is there any existing feature in Mesos that might help us accomplish
>>> > our goal?  Or any upcoming features?
>>> >
>>> > Thanks!!
>>> >
>>> > - Erik
>>> >
>>> > [0] Thus the "all" in quotes in the subject of this email, because it
>>> > *might* be all hosts, but it definitely would be all hosts where Storm
>>> > gets work assigned.
>>> >
>>> > [1] The Storm-on-Mesos framework leverages separate containers for
>>> > each topology's Supervisor and Worker processes, to provide isolation
>>> > between topologies.
>>> >
>>> > [2] The assignment of Storm Supervisors (a Mesos Executor) + Storm
>>> > Workers (a Mesos Task) onto hosts is ever-changing in a given instance
>>> > of a Storm-on-Mesos framework.  I.e., as topologies get launched and
>>> > die, or have their worker processes die, the processes are dynamically
>>> > distributed to the various Mesos worker hosts.  So existing containers
>>> > often have more tasks assigned to them (thus growing their footprint)
>>> > or removed from them (thus shrinking the footprint).
>>>
>> --
> ~Kevin
>
