As I said, the scheme and metrics path are already exposed to service 
discovery via relabelling.
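
For example, a minimal file-SD entry plus relabelling sketch (the "scheme" 
and "metrics_path" label names here are just illustrative):

# targets.yml (file SD)
- targets: ["x.x.x.x:9100"]
  labels:
    scheme: https
    metrics_path: /custom/metrics

# in the scrape job
relabel_configs:
  - source_labels: [scheme]
    regex: (.+)                  # only rewrite when the label is set
    target_label: __scheme__
  - source_labels: [metrics_path]
    regex: (.+)
    target_label: __metrics_path__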

However, the idea of putting "secret" values in labels has been rejected:
https://groups.google.com/g/prometheus-users/c/EbQ10HDRHso/m/OuCHKWa0AgAJ

On Sunday, 12 September 2021 at 23:57:49 UTC+1 [email protected] wrote:

> Thanks for the info, Brian.
>
> It's a shame that auth is the main blocker here. I wonder if there's an 
> opportunity to let the service discovery mechanism expose more of these 
> details, such as auth, scheme, and metrics path. Searching back through 
> the mailing list, it appears I'm not the first person to encounter this; 
> while the scrape proxy is a great idea, it would be nice to have this 
> directly accessible via service discovery.
>
> Cheers,
> Ben
>
> On Friday, September 10, 2021 at 5:45:24 PM UTC+10 Brian Candler wrote:
>
>> You can select http/https using the special label __scheme__, and the URL 
>> path with __metrics_path__. Relabelling rules can be made conditional by 
>> matching on extra labels. (Relabelling rules can have a source with 
>> multiple labels; the values are joined together, by default with a 
>> semicolon, but you can choose a different separator. You then match a 
>> regex against the whole combined string.)
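>>
>> For instance, a sketch of a conditional rule over joined labels (the 
>> "env" and "role" labels are made up for illustration):
>>
>> relabel_configs:
>>   - source_labels: [env, role]
>>     separator: ";"             # the default
>>     regex: "prod;db"           # matched against the joined string
>>     target_label: __metrics_path__
>>     replacement: /db/metrics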
>>
>> Unfortunately, authentication can only be set at the scrape job level, 
>> and that will be your stumbling block.
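>>
>> For reference, that job-level setting looks like this, and it applies to 
>> every target the job scrapes:
>>
>> scrape_configs:
>>   - job_name: example
>>     basic_auth:
>>       username: foo
>>       password: bar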
>>
>> You might think about writing an HTTP proxy for the scrapes, one that 
>> takes parameters like target=http%3a%2f%2fx.x.x.x%2fmetrics&auth=secret1. 
>> "auth" could then be a key that looks up the credentials in a separate 
>> YAML file, e.g.:
>>
>> secret1:
>>   basic_auth:
>>     username: foo
>>     password: bar
>>
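>> A rough sketch of such a proxy in Python (untested; it assumes Flask, 
>> requests, and PyYAML, and a real version should also restrict which 
>> targets it is willing to fetch):
>>
>> import requests
>> import yaml
>> from flask import Flask, Response, request
>>
>> app = Flask(__name__)
>>
>> # credentials.yml holds entries like the "secret1" example above
>> with open("credentials.yml") as f:
>>     CREDS = yaml.safe_load(f)
>>
>> @app.route("/proxy")
>> def proxy():
>>     target = request.args["target"]      # e.g. http://x.x.x.x/metrics
>>     auth_key = request.args.get("auth")
>>     kwargs = {"timeout": 10}
>>     if auth_key:
>>         ba = CREDS[auth_key]["basic_auth"]
>>         kwargs["auth"] = (ba["username"], ba["password"])
>>     upstream = requests.get(target, **kwargs)
>>     return Response(upstream.content,
>>                     status=upstream.status_code,
>>                     content_type=upstream.headers.get("Content-Type",
>>                                                       "text/plain"))
>>
>> if __name__ == "__main__":
>>     app.run(host="0.0.0.0", port=9999)
>>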
>> You'd use relabelling to send all the scrapes to the proxy, as you'd do 
>> with blackbox_exporter or snmp_exporter.  The target and auth parameters 
>> can be set from labels in the file SD.
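>>
>> e.g. something like this in the scrape job ("proxy-host:9999" and the 
>> /proxy path match the sketch above; "target" and "auth" would be labels 
>> carried in the file SD):
>>
>> relabel_configs:
>>   - source_labels: [target]    # the full scrape URL
>>     target_label: __param_target
>>   - source_labels: [auth]
>>     target_label: __param_auth
>>   - target_label: __metrics_path__
>>     replacement: /proxy
>>   - target_label: __address__
>>     replacement: proxy-host:9999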
>>
>> On Friday, 10 September 2021 at 05:58:23 UTC+1 [email protected] wrote:
>>
>>>
>>> Hi all, 
>>>
>>> I'm currently working on a custom service discovery integration for an 
>>> in-house database that contains lots of info about potential scrape 
>>> targets. So far I've been using file-based service discovery, 
>>> generating the target file with a Python script.
>>>
>>> The main issue I'm having is that many of these scrape targets require 
>>> basic authentication with unique credentials, or differ between http 
>>> and https, and some of them need specific relabel rules to achieve the 
>>> desired outcome. Most of this information is set at the 'scrape_config' 
>>> level, but service discovery works at the 'static_configs' level, 
>>> allowing only targets and labels to be discovered.
>>>
>>> Does anyone have a pattern for dynamically providing Prometheus with 
>>> things like auth credentials, relabel rules, etc. alongside targets? My 
>>> only thought so far is to manage the entire Prometheus config file from 
>>> this script and periodically update it with new information from the 
>>> database. I'd like to avoid that, though, because the file is already 
>>> managed by an existing configuration management pipeline.
>>>
>>> Cheers,
>>> Ben
>>>
>>
