I see, ok. But the needed parsing functionality would be similar to what
strictly enforced multi-tenancy requires, unless you want users to insert
$filter_labels placeholders everywhere in their expressions where there is
a selector, like you can do with template variables in Grafana.
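
For example (reusing the query shape from Brian's examples below), every
selector would have to carry the placeholder itself:

sum(up{job="node_exporter", $filter_labels}) / sum(up{$filter_labels})

and the dashboard variable would then be substituted into each occurrence
before the query is sent.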

On Thu, Apr 23, 2020 at 5:00 PM Severyn Lisovskyi <[email protected]>
wrote:

> Not really, I'm not talking about ensuring that tenants don't see metrics
> they are not supposed to, but about giving Prometheus users the ability to
> write common labels only once in an expression for all the metrics, no
> matter how deeply they are buried.
>
> For simple queries it's not a big win, but at larger scale any external
> application could wrap PromQL queries in a with_labels() function before
> submitting them to Prometheus. E.g. the Prometheus Operator could wrap all
> the queries (only if a specific field is set in the PrometheusRuleSpec) into
> with_labels(namespace="a", app="b")(expression). I can also think of cases
> where label filters are set for a whole Grafana dashboard.
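>
> To illustrate the proposed semantics (with_labels() is not existing
> PromQL, only the suggestion), wrapping
>
> with_labels(namespace="a", app="b")(sum(up) / count(up))
>
> would be equivalent to writing the matchers into every selector by hand:
>
> sum(up{namespace="a", app="b"}) / count(up{namespace="a", app="b"})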
>
>
>
> On Thursday, April 23, 2020 at 4:05:19 PM UTC+2, Julius Volz wrote:
>>
>> So basically, you want to take incoming user queries and ensure that any
>> series selectors have certain user-specific tenant labels set, no matter
>> how deep they are buried in an expression. That is indeed not easily
>> solvable by a simple templating solution. You're right, you need to
>> properly parse PromQL for that. There are some people who have built
>> gateways that do the parsing and ensuring of labels, like for example
>> https://github.com/kfdm/promql-guard (I don't know it well, I've only
>> seen it and can't vouch for it, but the general approach makes sense).
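>>
>> A rough sketch of how such a gateway could do it with the upstream parser
>> packages (injectLabels is just a name I'm inventing here; this is the
>> general approach, not promql-guard's actual code):
>>
>> package main
>>
>> import (
>>     "fmt"
>>
>>     "github.com/prometheus/prometheus/pkg/labels"
>>     "github.com/prometheus/prometheus/promql/parser"
>> )
>>
>> // injectLabels appends the given matchers to every vector selector in
>> // the expression, no matter how deeply it is nested.
>> func injectLabels(query string, extra []*labels.Matcher) (string, error) {
>>     expr, err := parser.ParseExpr(query)
>>     if err != nil {
>>         return "", err
>>     }
>>     parser.Inspect(expr, func(node parser.Node, _ []parser.Node) error {
>>         if vs, ok := node.(*parser.VectorSelector); ok {
>>             vs.LabelMatchers = append(vs.LabelMatchers, extra...)
>>         }
>>         return nil
>>     })
>>     return expr.String(), nil
>> }
>>
>> func main() {
>>     tenant := []*labels.Matcher{
>>         labels.MustNewMatcher(labels.MatchEqual, "namespace", "foo"),
>>     }
>>     out, err := injectLabels(`sum(up{job="node_exporter"}) / count(up)`, tenant)
>>     if err != nil {
>>         panic(err)
>>     }
>>     // Prints the query with namespace="foo" added to both selectors.
>>     fmt.Println(out)
>> }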
>>
>> Prometheus itself was never meant to be multi-tenant, and introducing
>> such a feature would only solve a small part of multi-tenancy in a specific
>> way. For example, it doesn't address anything on the data ingestion side, or
>> performance/resource isolation. Thus I assume most of the team would
>> currently not choose to add it to Prometheus itself, but would ask people to
>> implement it externally, like with the gateway mentioned above.
>>
>> On Thu, Apr 23, 2020 at 3:20 PM Severyn Lisovskyi <[email protected]>
>> wrote:
>>
>>> In my opinion we should not rely on PromQL engine optimisations where
>>> the same effect could be achieved by a better-written expression. Sorry
>>> for being picky :)
>>>
>>> You answered my question in this topic, and I agree with you, but we
>>> drifted a bit from the original problem in the GitHub feature request.
>>> Let me rephrase it a bit to make it clearer:
>>>
>>> *I want to be able to provide my k8s tenants the same level of isolation
>>> by default in Prometheus (using the namespace label) as they have in
>>> Kubernetes (by namespace).*
>>>
>>> The proposed with_labels parameter in Prometheus rules and the
>>> with_labels PromQL function are only one suggested option to resolve this
>>> issue. I know Prometheus is not limited to the k8s world, but this feature
>>> would still be useful for other use cases.
>>>
>>> I looked into solutions like jsonnet, but I don't see any way to do this
>>> other than by parsing PromQL.
>>>
>>>
>>> On Thursday, April 23, 2020 at 2:35:16 PM UTC+2, Brian Candler wrote:
>>>>
>>>> Sure, both LHS and RHS there return an unnamed timeseries with no
>>>> labels.
>>>>
>>>> Are you aiming to do something like this?
>>>> sum(up{job="node_exporter", fqdn=~"n.*", namespace="foo"}) /
>>>> sum(up{job="node_exporter", namespace="foo"})
>>>>
>>>> You can get sum() to return results with labels, using sum by:
>>>>
>>>> sum by (namespace,job) (up{job="node_exporter", namespace="foo",
>>>> fqdn=~"n.*"}) / sum by (namespace, job) (up)
>>>>
>>>> I agree it's not shorter, and whether it's equally efficient depends on
>>>> PromQL engine optimisations, but it does at least move the logical
>>>> conditions into one place.
>>>>
>>>> It does let you get the results for all namespaces at once if you want:
>>>>
>>>> sum by (namespace,job) (up{job="node_exporter", fqdn=~"n.*"}) / sum by
>>>> (namespace, job) (up)
>>>>