On Wed, 3 Mar 2021 at 14:15, Stuart Clark <[email protected]> wrote:

> On 03/03/2021 13:00, John Dexter wrote:
> > While I understand the technical definitions, I would appreciate some
> > real-world advice on the best way to structure my configuration file
> > for scalability and maintenance.
> >
> > Our system includes many instrumented modules running on an
> > application server on distinct ports, e.g. on APPSERVER001 modules [A,
> > B, C, D, E, ..., Z] expose endpoints
> > APPSERVER001:[1001, 1002, 1003, 1004, 1005, ..., 1026]/metrics.
> >
> > However, we deploy multiple systems, so we actually have APPSERVER001,
> > 002, ..., each with the same modules exposing the same endpoints. I
> > might want a dashboard (Grafana) for each system, but I might also
> > want a dashboard displaying data across systems (e.g. showing the
> > status of module B on all systems).
> > Initially we only want to monitor one system as we get this all up and
> > running, before adding other systems over time.
> > Can anyone suggest how they would approach this in terms of job/scrape
> > config? File-based discovery seems like a great option, but I'm
> > struggling with how to avoid a lot of copy-paste and duplicated
> > information.
> >
> How are your applications deployed and managed? Kubernetes, Docker,
> Ansible, Puppet, etc.?
>
> --
> Stuart Clark
>
>
Stuart, this is a totally bespoke C++ Windows system, none of those :) File-based
discovery seems able to pull in my targets, but I can't find a good
example of how to structure my files. I suppose each module A, B, C, ...
should be its own job, but I'd far rather not have to list every one of my
targets/servers once per job when it's all the same. A rough sketch of what
I'm hoping is possible follows below.
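Just to show what I mean, here is roughly the shape I was hoping for: a single
job using file_sd_configs, plus one small target file per system carrying
"system" and "module" labels that I could then filter on in Grafana. The file
names and label names below are just my guesses, not anything I've seen
documented:

# prometheus.yml (scrape config sketch)
scrape_configs:
  - job_name: 'appserver_modules'
    file_sd_configs:
      - files:
          - 'targets/*.yml'   # one file per system; Prometheus reloads these on change

# targets/appserver001.yml (one target group per module; label names are made up)
- targets: ['APPSERVER001:1001']
  labels:
    system: 'APPSERVER001'
    module: 'A'
- targets: ['APPSERVER001:1002']
  labels:
    system: 'APPSERVER001'
    module: 'B'

Adding a second system would then, if I've understood file discovery correctly,
just mean dropping in a targets/appserver002.yml with the same structure,
without touching prometheus.yml at all.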

Thanks.
