[prometheus-users] Re: Best practice: "job_name" in prometheus agent? Same job_name allowed?

2024-03-15 Thread 'Brian Candler' via Prometheus Users
> What would you recommend in a situation with several hundred or 
> thousand servers or systems within a Kubernetes cluster which should 
> have node_exporter installed?

I would just scrape them normally, using service discovery to identify the 
nodes to be scraped.  Implicitly you're saying you can't, or don't want to, 
do that.
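
In Kubernetes, for example, a single job on the central server can 
discover and scrape every node. A minimal sketch, assuming node_exporter 
listens on port 9100 on each node (e.g. as a host-network DaemonSet):

    scrape_configs:
      - job_name: node_exporter
        kubernetes_sd_configs:
          - role: node
        relabel_configs:
          # Node discovery returns the kubelet address; rewrite the
          # port to node_exporter's 9100 instead.
          - source_labels: [__address__]
            regex: '(.*):\d+'
            replacement: '${1}:9100'
            target_label: __address__

The instance label is then derived from each node's address, so every 
target is automatically unique.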

> then remote_writes the results to a central Prometheus server or a 
> load balancer which distributes to different Prometheus servers.

Definitely don't remote_write to a load balancer; it will be 
non-deterministic which node receives each data point. If you want to load 
share, then statically configure groups of agents to point at different 
Prometheus instances. If one server goes down then fix it, and 
remote_write should buffer in the meantime.
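
For example, half the agents could be given one remote_write URL and 
the other half another. A sketch, with hypothetical hostnames:

    # Agents in group A; group B would use prometheus-b instead.
    remote_write:
      - url: http://prometheus-a.example.com:9090/api/v1/write

While prometheus-a is unreachable, the agent keeps samples in its local 
WAL and retries, so a short outage loses nothing (up to the agent's 
buffering limits).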

If you can't stand the idea of losing access to metrics for a short period 
of time, then you could remote_write to multiple servers, and use promxy to 
merge them when querying. But really I think you're adding a lot of cost 
and complexity for little gain.
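
If you do go that route, a rough sketch of a promxy configuration, 
assuming both servers receive the same remote-written data (hostnames 
are illustrative):

    promxy:
      server_groups:
        # Both servers hold replicas of the same data; promxy
        # merges them at query time, filling gaps from either side.
        - static_configs:
            - targets:
              - prometheus-a:9090
              - prometheus-b:9090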
 
> However, I think I will have a problem, because if I use "127.0.0.1:9100" 
> as the target to scrape, then all instances are equal.

The instance label does not necessarily have to be the same as the 
"__address__" that you scrape. If you've set the instance label explicitly, 
then Prometheus won't change it. But you would have to ensure that each 
host knows its unique name and puts it into the instance label.
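
Concretely, each agent could carry a config like this sketch, with the 
hostname filled in per machine (e.g. by your configuration management):

    scrape_configs:
      - job_name: node_exporter
        static_configs:
          - targets: ['127.0.0.1:9100']
            labels:
              # Because instance is set explicitly, Prometheus will
              # not overwrite it with the __address__ (127.0.0.1:9100).
              instance: 'node-01.example.com'  # unique per host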

> Is there any possibility to use a variable in the scrape_config which 
> reflects an environment variable from the Linux system, or any other 
> mechanism to make the instance unique?

I've never had to do this, but you could try 
--enable-feature=expand-external-labels. See 
https://prometheus.io/docs/prometheus/latest/feature_flags/#expand-environment-variables-in-external-labels
Then you could leave instance="127.0.0.1:9100" but add another label which 
identifies the node.
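
With that flag enabled, something like the following sketch should work, 
assuming the HOSTNAME environment variable is set for the agent process 
(that depends on how it is started):

    global:
      external_labels:
        # ${HOSTNAME} is expanded from the environment at config load
        # when --enable-feature=expand-external-labels is enabled.
        node: ${HOSTNAME}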

On Thursday 14 March 2024 at 21:59:52 UTC Alexander Wilke wrote:

> Thanks for your response.
>
> What would you recommend in a situation with several hundred or 
> thousand servers or systems within a Kubernetes cluster which should 
> have node_exporter installed?
> My idea was to install node_exporter + the Prometheus agent. The agent 
> scrapes the local node_exporter and then remote_writes the results to a 
> central Prometheus server or a load balancer which distributes to 
> different Prometheus servers.
> My idea was to use the same config for all node_exporter + Prometheus 
> agents. For that reason they all have the same job name, which would be OK.
>
> However, I think I will have a problem, because if I use "127.0.0.1:9100" 
> as the target to scrape, then all instances are equal.
>
> Is there any possibility to use a variable in the scrape_config which 
> reflects an environment variable from the Linux system, or any other 
> mechanism to make the instance unique?
>
>
> Brian Candler wrote on Thursday, 14 March 2024 at 13:04:07 UTC+1:
>
>> As long as all the time series have distinct label sets (in particular, 
>> different "instance" labels), and you're not mixing scraping with 
>> remote-writing for the same targets, then I don't see any problem with all 
>> the agents using the same "job" label when remote-writing.
>>
>> On Tuesday 12 March 2024 at 22:30:22 UTC Alexander Wilke wrote:
>>
>>> At the moment I am running a job with the name
>>> "node_exporter" which has 20 different targets (instances).
>>> With this configuration there should not be any conflict.
>>>
>>> My idea is to install the Prometheus agent on the nodes themselves.
>>> Technically, it looks like it works if I use the same job_name on the 
>>> agent and the central Prometheus, as long as the targets/instances are 
>>> different.
>>>
>>> In general I avoid conflicting job_names, but in this situation it may 
>>> be OK from my point of view.
>>>
>>> What do you think / recommend in this specific scenario?
>>>
>>



[prometheus-users] Re: Best practice: "job_name" in prometheus agent? Same job_name allowed?

2024-03-14 Thread Alexander Wilke
Thanks for your response.

What would you recommend in a situation with several hundred or thousand 
servers or systems within a Kubernetes cluster which should have 
node_exporter installed?
My idea was to install node_exporter + the Prometheus agent. The agent 
scrapes the local node_exporter and then remote_writes the results to a 
central Prometheus server or a load balancer which distributes to 
different Prometheus servers.
My idea was to use the same config for all node_exporter + Prometheus 
agents. For that reason they all have the same job name, which would be OK.

However, I think I will have a problem, because if I use "127.0.0.1:9100" 
as the target to scrape, then all instances are equal.

Is there any possibility to use a variable in the scrape_config which 
reflects an environment variable from the Linux system, or any other 
mechanism to make the instance unique?


Brian Candler wrote on Thursday, 14 March 2024 at 13:04:07 UTC+1:

> As long as all the time series have distinct label sets (in particular, 
> different "instance" labels), and you're not mixing scraping with 
> remote-writing for the same targets, then I don't see any problem with all 
> the agents using the same "job" label when remote-writing.
>
> On Tuesday 12 March 2024 at 22:30:22 UTC Alexander Wilke wrote:
>
>> At the moment I am running a job with the name
>> "node_exporter" which has 20 different targets (instances).
>> With this configuration there should not be any conflict.
>>
>> My idea is to install the Prometheus agent on the nodes themselves.
>> Technically, it looks like it works if I use the same job_name on the 
>> agent and the central Prometheus, as long as the targets/instances are 
>> different.
>>
>> In general I avoid conflicting job_names, but in this situation it may 
>> be OK from my point of view.
>>
>> What do you think / recommend in this specific scenario?
>>
>



[prometheus-users] Re: Best practice: "job_name" in prometheus agent? Same job_name allowed?

2024-03-14 Thread 'Brian Candler' via Prometheus Users
As long as all the time series have distinct label sets (in particular, 
different "instance" labels), and you're not mixing scraping with 
remote-writing for the same targets, then I don't see any problem with all 
the agents using the same "job" label when remote-writing.
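
Concretely, every agent could ship the same config apart from the 
instance value. A sketch, with illustrative URL and hostname:

    # Run with: prometheus --enable-feature=agent --config.file=agent.yml
    scrape_configs:
      - job_name: node_exporter            # same job label on every agent
        static_configs:
          - targets: ['127.0.0.1:9100']
            labels:
              instance: 'node-01.example.com'  # must differ per agent
    remote_write:
      # The central server needs --web.enable-remote-write-receiver.
      - url: http://central-prometheus.example.com:9090/api/v1/write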

On Tuesday 12 March 2024 at 22:30:22 UTC Alexander Wilke wrote:

> At the moment I am running a job with the name
> "node_exporter" which has 20 different targets (instances).
> With this configuration there should not be any conflict.
>
> My idea is to install the Prometheus agent on the nodes themselves.
> Technically, it looks like it works if I use the same job_name on the 
> agent and the central Prometheus, as long as the targets/instances are 
> different.
>
> In general I avoid conflicting job_names, but in this situation it may 
> be OK from my point of view.
>
> What do you think / recommend in this specific scenario?
>
