"scheme" is already implemented. This works today:
{
  'targets': [<targets>],
  'labels': {<labels>, '__scheme__': 'https', '__metrics_path__': '/metrics'}
}
Prometheus 2.30.0 adds __scrape_interval__ and __scrape_timeout__.
Therefore, I'd expect any further scrape config control to be done via
labels, not via some change to the service discovery API.
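
For illustration, a job that consumes such a file needs nothing beyond a
file_sd_configs block (file names here are just examples):

  scrape_configs:
    - job_name: 'inventory'
      file_sd_configs:
        - files: ['/etc/prometheus/file_sd/*.json']

From 2.30.0 onwards, the same labels map can also carry per-target
'__scrape_interval__' and '__scrape_timeout__' values.
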
Storing "secret" values directly in labels has already been rejected. In
principle, I don't see why a label couldn't reference a config block or
external file containing scrape config stanzas (e.g. basic_auth,
authorization, oauth2, tls_config), but you'd have to argue your use case
with the Prometheus maintainers.
On Wednesday, 15 September 2021 at 02:06:34 UTC+1 [email protected] wrote:
> I was envisioning expanding what can be defined in a 'static_config',
> rather than strictly relying on relabel rules. For example, exposing
> something like:
>
> {
>   'targets': [<targets>],
>   'labels': {<labels>},
>   'auth': <basic, bearer etc>,
>   'scheme': <http, https>
> }
>
> This is getting close to allowing service discovery mechanisms to expose
> scrape configs rather than static_configs. I wonder if this would be a
> useful addition to the service discovery API?
>
> Cheers,
> Ben
>
> On Mon, Sep 13, 2021 at 4:57 PM Brian Candler <[email protected]> wrote:
>
>> As I said, scheme and metrics path are already exposed to service
>> discovery, via relabelling.
>>
>> However, the idea of putting "secret" values in labels has been rejected:
>> https://groups.google.com/g/prometheus-users/c/EbQ10HDRHso/m/OuCHKWa0AgAJ
>>
>> On Sunday, 12 September 2021 at 23:57:49 UTC+1 [email protected] wrote:
>>
>>> Thanks for the info Brian.
>>>
>>> It's a shame that auth is the main blocker here. I wonder if there's an
>>> opportunity to let the service discovery mechanism expose more of these
>>> details, such as auth, scheme, and metrics path. Searching back through the
>>> mailing list, it appears I'm not the first person to run into this, and
>>> while the scrape proxy is a great idea, it'd be nice to have this directly
>>> accessible via service discovery.
>>>
>>> Cheers,
>>> Ben
>>>
>>> On Friday, September 10, 2021 at 5:45:24 PM UTC+10 Brian Candler wrote:
>>>
>>>> You can select http/https using the special label __scheme__, and the URL
>>>> path with __metrics_path__. Relabelling rules can be made conditional by
>>>> matching on extra labels. (A relabel rule can take multiple source labels;
>>>> their values are joined together, by default with a semicolon, though you
>>>> can choose another separator. You then match a regex against the whole
>>>> combined string.)
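>>>>
>>>> For example, a rule along these lines (the team/env label names are purely
>>>> illustrative) switches a target to HTTPS only when both labels match:
>>>>
>>>>   relabel_configs:
>>>>     - source_labels: [team, env]
>>>>       separator: ';'
>>>>       regex: 'web;prod'
>>>>       target_label: __scheme__
>>>>       replacement: https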
>>>>
>>>> Unfortunately, authentication can only be set at the scrape job level,
>>>> and that will be your stumbling block.
>>>>
>>>> You might think about writing an HTTP proxy for the scrapes, which takes
>>>> parameters like target=http%3a%2f%2fx.x.x.x%2fmetrics&auth=secret1.
>>>> "auth" could then be a key which looks up the credentials in a separate
>>>> YAML file, e.g.
>>>>
>>>> secret1:
>>>>   basic_auth:
>>>>     username: foo
>>>>     password: bar
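>>>>
>>>> A very rough sketch of such a proxy in Python (file name and port are
>>>> illustrative, and it handles only basic auth and the happy path):
>>>>
>>>> import base64
>>>> import urllib.parse
>>>> import urllib.request
>>>> from http.server import BaseHTTPRequestHandler, HTTPServer
>>>>
>>>> import yaml  # pip install pyyaml
>>>>
>>>> with open("credentials.yml") as f:
>>>>     CREDENTIALS = yaml.safe_load(f)  # e.g. {'secret1': {'basic_auth': {...}}}
>>>>
>>>> class ScrapeProxy(BaseHTTPRequestHandler):
>>>>     def do_GET(self):
>>>>         # Pull ?target=...&auth=... out of the request from Prometheus.
>>>>         qs = urllib.parse.parse_qs(urllib.parse.urlparse(self.path).query)
>>>>         target = qs["target"][0]
>>>>         creds = CREDENTIALS[qs["auth"][0]]["basic_auth"]
>>>>         token = base64.b64encode(
>>>>             f"{creds['username']}:{creds['password']}".encode()).decode()
>>>>         # Fetch the real target with the looked-up credentials attached.
>>>>         req = urllib.request.Request(target)
>>>>         req.add_header("Authorization", "Basic " + token)
>>>>         with urllib.request.urlopen(req) as resp:
>>>>             body = resp.read()
>>>>             ctype = resp.headers.get("Content-Type", "text/plain")
>>>>         self.send_response(200)
>>>>         self.send_header("Content-Type", ctype)
>>>>         self.end_headers()
>>>>         self.wfile.write(body)
>>>>
>>>> HTTPServer(("", 8080), ScrapeProxy).serve_forever()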
>>>>
>>>> You'd use relabelling to send all the scrapes to the proxy, as you'd do
>>>> with blackbox_exporter or snmp_exporter. The target and auth parameters
>>>> can be set from labels in the file SD.
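>>>>
>>>> As a sketch, assuming the SD file puts the full scrape URL in "targets" and
>>>> an "auth" label next to it, the relabelling would look roughly like:
>>>>
>>>>   relabel_configs:
>>>>     - source_labels: [__address__]
>>>>       target_label: __param_target
>>>>     - source_labels: [auth]
>>>>       target_label: __param_auth
>>>>     - source_labels: [__param_target]
>>>>       target_label: instance
>>>>     - target_label: __address__
>>>>       replacement: 127.0.0.1:8080    # wherever the proxy listens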
>>>>
>>>> On Friday, 10 September 2021 at 05:58:23 UTC+1 [email protected]
>>>> wrote:
>>>>
>>>>>
>>>>> Hi all,
>>>>>
>>>>> I'm currently working on a custom service discovery integration for an
>>>>> in-house database that contains lots of info about potential scrape
>>>>> targets. So far I've been using file-based service discovery, generating
>>>>> the target file with a Python script.
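>>>>>
>>>>> For reference, the script essentially just writes JSON in the file_sd
>>>>> format, roughly like this (the rows here are stand-ins for whatever the
>>>>> database actually returns):
>>>>>
>>>>> import json
>>>>>
>>>>> rows = [  # in reality these come from the in-house database
>>>>>     {"host": "db1.example.com", "port": 9100, "env": "prod"},
>>>>>     {"host": "db2.example.com", "port": 9100, "env": "test"},
>>>>> ]
>>>>>
>>>>> entries = [
>>>>>     {"targets": [f"{row['host']}:{row['port']}"],
>>>>>      "labels": {"env": row["env"]}}
>>>>>     for row in rows
>>>>> ]
>>>>>
>>>>> with open("/etc/prometheus/file_sd/inventory.json", "w") as f:
>>>>>     json.dump(entries, f, indent=2)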
>>>>>
>>>>> The main issue I'm having is that lots of these scrape targets require
>>>>> basic authentication using unique credentials, or differ on http/https,
>>>>> and some of them require specific relabel rules to achieve the desired
>>>>> outcome. Most of this information is set at the 'scrape_config' level,
>>>>> but service discovery works at the 'static_configs' level, allowing only
>>>>> targets and labels to be discovered.
>>>>>
>>>>> Does anyone have a pattern for dynamically providing Prometheus with
>>>>> things like auth credentials, relabel rules, etc. alongside targets? My
>>>>> only thought so far is to manage the entire Prometheus config file from
>>>>> this script and periodically update it with new information from the
>>>>> database. I'd like to avoid that, though, because the config file is
>>>>> already managed by an existing configuration management pipeline.
>>>>>
>>>>> Cheers,
>>>>> Ben
>>>>>
>