On Thu, Nov 2, 2017 at 5:05 PM, Ivan Necas <ine...@redhat.com> wrote:

> I lean towards the push model here. The main reason is the simpler way
> to publish the instrumentation data from whatever process we want to
> track. Also, my understanding is that we don't care only whether the
> service is up or down (readiness and liveness) but also about trends
> during processing.
>

In reading about push vs. pull, my biggest issue with the push model is
that the application has to have knowledge of where it's pushing. The
pull model, by contrast, allows an application to say "I have metrics
here," and anything that knows how to scrape and interpret those metrics
can grab them at its leisure. This provides nicer decoupling and
potentially more choice if there is a standard-ish data format used to
expose the metrics.
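
For illustration, here is a minimal sketch of the pull side, assuming a
hypothetical counter name and an arbitrary port; the app just serves its
current values in the Prometheus text exposition format and has no idea
who, if anyone, scrapes them:

    # Sketch only: expose a counter on /metrics for any scraper to pull.
    require 'webrick'

    requests_total = 0
    server = WEBrick::HTTPServer.new(Port: 9394) # port is arbitrary

    server.mount_proc '/metrics' do |_req, res|
      requests_total += 1 # stand-in for real instrumentation
      res['Content-Type'] = 'text/plain'
      res.body = <<~METRICS
        # HELP app_requests_total Total requests served.
        # TYPE app_requests_total counter
        app_requests_total #{requests_total}
      METRICS
    end

    trap('INT') { server.shutdown }
    server.start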


>
> Eric: could you describe in more detail the 5 web applications
> requiring 5 monitoring containers?
> I might be missing where this implication came from?
>

This comes from the sidecar method of running something like a statsd
process. A sidecar is essentially just running two containers in a pod,
where one container is considered the main application and the other an
add-on that provides some additional value or performs some actions
without being baked into your main application container. If you picture
this idea and then think about pod scaling (in Kube), for every scale-up
you'd also be adding another statsd container. This might be fine in
practice, but could in theory be overkill.
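
To make the scaling concern concrete, here is a hedged sketch of the
sidecar layout (the image names are placeholders, not real images); in a
Deployment, every extra replica duplicates both containers:

    # Hypothetical pod spec: the statsd sidecar rides along with the app.
    apiVersion: v1
    kind: Pod
    metadata:
      name: webapp
    spec:
      containers:
      - name: app
        image: example/webapp:latest    # placeholder application image
        env:
        - name: STATSD_HOST
          value: "127.0.0.1"            # sidecar shares the pod's network
      - name: statsd
        image: example/statsd:latest    # placeholder statsd-speaking image
        ports:
        - containerPort: 8125           # statsd's conventional port
          protocol: UDP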

Another method with statsd is simply to run it on the host itself and
have the containers send data to it, but from what I understand this
raises some security concerns.
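
For reference, pushing to statsd is cheap from the application's side,
since the line protocol is plain text over UDP. A sketch with made-up
metric names, assuming the daemon listens on the conventional port 8125:

    require 'socket'

    sock = UDPSocket.new
    # "name:value|c" increments a counter; "name:value|ms" records a timing.
    sock.send('foreman.requests:1|c', 0, '127.0.0.1', 8125)
    sock.send('foreman.request_time:42|ms', 0, '127.0.0.1', 8125)
    sock.close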

The biggest limiting factor, and probably the one that constrains us the
most, appears to be how forking web servers are handled. Lukas, have you
seen anything that separates defining what the metrics are from the
mechanism that publishes them? My thinking is: if we started with statsd
and wrote code within the application that generates statsd metrics,
could we at a later point simply say "now publish this via an HTTP
endpoint in the Prometheus data format for scraping"?
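
Something like the following sketch, with all names hypothetical: the
application records against a small Metrics facade, and only the
injected backend knows whether that means pushing to statsd now or
rendering a Prometheus-style page later.

    require 'socket'

    # Pushes each increment to a statsd daemon as it happens.
    class StatsdBackend
      def initialize(host: '127.0.0.1', port: 8125)
        @socket = UDPSocket.new
        @host = host
        @port = port
      end

      def increment(name)
        @socket.send("#{name}:1|c", 0, @host, @port)
      end
    end

    # Accumulates in-process; an HTTP endpoint would render this on scrape.
    class PrometheusBackend
      def initialize
        @counters = Hash.new(0)
      end

      def increment(name)
        @counters[name] += 1
      end

      def render
        @counters.map { |name, value| "#{name} #{value}" }.join("\n")
      end
    end

    # The facade the application talks to; swapping the backend does not
    # touch any instrumentation call sites.
    class Metrics
      def initialize(backend)
        @backend = backend
      end

      def increment(name)
        @backend.increment(name)
      end
    end

    metrics = Metrics.new(StatsdBackend.new)
    metrics.increment('foreman.requests')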

Eric


>
> -- Ivan
>
> On Wed, Nov 1, 2017 at 4:54 PM, Lukas Zapletal <l...@redhat.com> wrote:
> >> Does Prometheus only not work in a multi-process Rails web server?
> >> Does it work for a single-process multi-threaded web server? This is
> >> an interesting roadblock given you'd expect this to affect lots of
> >> web servers across multiple languages out there.
> >
> > Any Rails app that has multiple processes currently needs to figure
> > out how to deliver data to the HTTP endpoint, e.g. store it in a
> > database or something, which is not the best approach.
> >
> > Absolutely, it lacks quite an important feature right there. That
> > stems from its pull-based design.
> >
> >> Yes, standard practice is to think about one container per pod (in a
> >> Kubernetes environment). However, there are patterns for things like
> >> log aggregation and monitoring, such as using a sidecar container
> >> that ensures co-location. The part I don't entirely get with sidecars
> >> is that if I scale the pod to say 5, I get 5 web applications and 5
> >> monitoring containers, and that seems odd. Which is why I think the
> >> tendency is towards models where your single process/application is
> >> the endpoint for your metrics to be scraped by an outside agent or
> >> service.
> >>
> >> I agree you want the collector to be separate, but if your web
> >> application is down, what value would a monitoring endpoint being
> >> alive provide? The application would be down, thus no metrics to
> >> serve up. The other exporters, such as the one exporting metrics
> >> about the underlying system, would be responsible for giving system
> >> metrics. In the Kube world, this is handled by readiness and liveness
> >> probes for Kubernetes to re-spin the container if it stops
> >> responding.
> >
> > In the container world, monitoring agents run on hosts, not in
> > containers themselves. And collector agents can be 1:1 or 1:N (e.g.
> > one for each container host). I am not sure I follow you here. Why
> > don't you see added value again? A monitoring agent without any apps
> > connected is as useful as an SSH daemon waiting for connections.
> >
> > Let me put it this way: the push approach seems more appropriate for
> > a multi-process Ruby application than the pull approach. That's what
> > we are discussing here, unless there are better protocols/agents I am
> > not aware of.
> >
> > Honestly, the pull approach via a simple HTTP REST API seems cleaner,
> > but it is just not a good fit, and it also puts extra unnecessary
> > responsibility on the app itself. You are working on containerizing
> > Foreman, and this actually works against that effort as well.
> >
> > Anyway, let me throw in another integration. Collectd has an agent
> > (or plugin) that opens a local socket which can be used to receive
> > data from other applications. I wrote a Ruby client library the other
> > day (https://github.com/lzap/collectd-uxsock), but I believe this is
> > no different from statsd - you still need a local process to gather
> > the data.
> >
> > --
> > Later,
> >   Lukas @lzap Zapletal
> >



-- 
Eric D. Helms
Red Hat Engineering
