Perform the two halves of the query separately, i.e.
max_over_time(node_filesystem_avail_bytes{...}[1h])
max_over_time(node_filesystem_size_bytes{...}[1h])

and then you'll see why they divide to give 48% instead of 97%.

I expect node_filesystem_size_bytes doesn't change much, so max_over_time 
doesn't do much for that. But max_over_time(node_filesystem_avail_bytes) 
will show the *largest* available space over that 1-hour window, and 
therefore you'll get the value for when the disk was *least* full. If you 
want to know the value for when it was *most* full, then it would be 
min_over_time(node_filesystem_avail_bytes).
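So a sketch of a "peak usage" query over the last hour might look like this 
(substitute your own label matchers for {...}; untested, but it shows the idea):

```promql
# Percentage available at the *most-full* moment in the 1-hour window:
# take the minimum available space seen in that window, divided by the
# (essentially constant) filesystem size.
min_over_time(node_filesystem_avail_bytes{...}[1h])
  / max_over_time(node_filesystem_size_bytes{...}[1h]) * 100
```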

Note that you showed a graph, rather than a table. When you're graphing, 
you're repeating the same query at different evaluation times. So where the 
time axis shows 04:00, the data point on the graph is for the 1-hour period 
from 03:00 to 04:00. Where the time axis shows 04:45, the result of your 
query covers the 1 hour from 03:45 to 04:45.

Aside: in general, I'd advise keeping percentage queries simple by removing 
the factor of 100, so you get a fraction between 0 and 1 instead. This can 
be represented as a human-friendly percentage when rendered (e.g. Grafana 
can quite happily render 0-1 as 0-100%).
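For example, the fraction form of the query above (again a sketch, with your 
label matchers in place of {...}):

```promql
# Fraction of the filesystem available (0-1) at the most-full moment;
# leave the scaling to the dashboard's unit setting (percent 0.0-1.0).
min_over_time(node_filesystem_avail_bytes{...}[1h])
  / max_over_time(node_filesystem_size_bytes{...}[1h])
```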

On Friday 27 September 2024 at 06:01:16 UTC+1 mohan garden wrote:

> Sorry for the double posting; the image was corrupted, so I'm reposting. 
>
> Thank you for the response Brian,
>
> I removed the $__ variables and tried viewing disk usage metrics from the 
> past 1 hour in the Prometheus UI - 
> I tried the query in the Prometheus UI, and I was expecting a value of ~97% 
> with the following query for the past 1 hour of metrics, but the table view 
> reports 48%. 
>
> [image: max_over_time.png]
> I am not sure if I missed out on something within the query.
>
> I am under the impression that the max function works with multiple series, 
> and _over_time will generate stats from the values within the series.
> Please advise.
>
> On Friday, September 27, 2024 at 10:29:22 AM UTC+5:30 mohan garden wrote:
>
>> Thank you for the response Brian,
>>
>> I removed the $__ variables and tried viewing disk usage metrics from the 
>> past 1 hour in the Prometheus UI - 
>>
>> I tried the query in the Prometheus UI, and I was expecting a value of ~97% 
>> for the past 1 hour of metrics, but the table view reports 48%. 
>>
>> [image: max_over_time.png]
>> I am not sure if I missed out on something within the query.
>>
>> I am under the impression that the max function works with multiple series, 
>> and _over_time will generate stats from the values within the series.
>> Please advise.
>>
>>
>>
>> On Tuesday, September 24, 2024 at 8:12:29 PM UTC+5:30 Brian Candler wrote:
>>
>>> $__rate_interval is (roughly speaking) the interval between 2 adjacent 
>>> points in the graph, with a minimum of 4 times the configured scrape 
>>> interval. It's not the entire period over which Grafana is drawing the 
>>> graph. You probably want $__range or $__range_s. See:
>>>
>>> https://grafana.com/docs/grafana/latest/datasources/prometheus/template-variables/#use-__rate_interval
>>>
>>> https://grafana.com/docs/grafana/latest/dashboards/variables/add-template-variables/#global-variables
>>>
>>> However, questions about Grafana would be better off asked in the 
>>> Grafana community. Prometheus is not Grafana, and those variables are 
>>> Grafana-specific.
>>>
>>> > so you can see that avg|min|max_over_time functions return identical 
>>> values which don't make much sense
>>>
>>> It makes sense when you realise that the time period you're querying 
>>> over is very small; hence for a value that doesn't change rapidly, the 
>>> min/max/average over such a short time range will all be roughly the same.
>>>
>>> On Tuesday 24 September 2024 at 15:10:33 UTC+1 mohan garden wrote:
>>>
>>>> Hi , 
>>>> seems images in my previous post did not show up as expected.
>>>> Sorry for the spam , reposting again  - 
>>>>
>>>>
>>>> Hi , 
>>>>
>>>> I am trying to analyse memory usage of a server for 2 specific months 
>>>> using Grafana and Prometheus, but it seems the _over_time functions are 
>>>> returning unexpected results.
>>>>
>>>> Here is the data for the duration
>>>>
>>>> [image: one.png]
>>>>
>>>> Now, the summary table shows expected values
>>>> [image: one.png]
>>>>
>>>>
>>>> query -
>>>> (( node_memory_MemAvailable_bytes{instance="$node",job="$job"}[$__rate_interval]) * 100 )
>>>>   / node_memory_MemTotal_bytes{instance="$node",job="$job"}[$__rate_interval]
>>>>
>>>>
>>>> Issue - when I try to create similar stats using PromQL at my end, 
>>>> I am facing issues. I fail to get the same stats when I use the 
>>>> following PromQL, for example -
>>>>
>>>> [image: two.png]
>>>>
>>>> ( avg_over_time(node_memory_MemAvailable_bytes{instance="$node",job="$job"}[$__rate_interval])
>>>>   / avg_over_time(node_memory_MemTotal_bytes{instance="$node",job="$job"}[$__rate_interval]) ) * 100
>>>>
>>>> ( min_over_time(node_memory_MemAvailable_bytes{instance="$node",job="$job"}[$__rate_interval])
>>>>   / min_over_time(node_memory_MemTotal_bytes{instance="$node",job="$job"}[$__rate_interval]) ) * 100
>>>>
>>>> ( max_over_time(node_memory_MemAvailable_bytes{instance="$node",job="$job"}[$__rate_interval])
>>>>   / max_over_time(node_memory_MemTotal_bytes{instance="$node",job="$job"}[$__rate_interval]) ) * 100
>>>>
>>>> so you can see that the avg|min|max_over_time functions return identical 
>>>> values, which doesn't make much sense. I was using the following setting
>>>>
>>>> [image: one.png]
>>>>
>>>> I tried changing from range -> instant; I see similar values
>>>> [image: two.png]
>>>>
>>>> Where do I need to make modifications in the PromQL so I can get the 
>>>> correct min/max/avg values in the gauges, as correctly reported by the 
>>>> summary table -
>>>> [image: one.png]
>>>>
>>>> for a specific duration , say - 
>>>>
>>>> [image: one.png]
>>>>
>>>> please advise
>>>>
>>>> On Tuesday, September 24, 2024 at 7:25:00 PM UTC+5:30 mohan garden 
>>>> wrote:
>>>>
>>>>> I am trying to analyse memory usage of a server for 2 specific months 
>>>>> using Grafana and Prometheus, but it seems the _over_time functions are 
>>>>> returning unexpected results.
>>>>>
>>>>> Here is the data for the duration
>>>>> [image: image] 
>>>>> the summary table shows expected values
>>>>> [image: image] 
>>>>> query -
>>>>> (( node_memory_MemAvailable_bytes{instance="$node",job="$job"}[$__rate_interval]) * 100 )
>>>>>   / node_memory_MemTotal_bytes{instance="$node",job="$job"}[$__rate_interval]
>>>>>
>>>>>
>>>>> Issue - when I try to create similar stats using PromQL at my 
>>>>> end, I am facing issues. I fail to get the same values when I use 
>>>>> PromQL, for example -
>>>>>
>>>>> [image: image] 
>>>>>
>>>>> ( avg_over_time(node_memory_MemAvailable_bytes{instance="$node",job="$job"}[$__rate_interval])
>>>>>   / avg_over_time(node_memory_MemTotal_bytes{instance="$node",job="$job"}[$__rate_interval]) ) * 100
>>>>>
>>>>> ( min_over_time(node_memory_MemAvailable_bytes{instance="$node",job="$job"}[$__rate_interval])
>>>>>   / min_over_time(node_memory_MemTotal_bytes{instance="$node",job="$job"}[$__rate_interval]) ) * 100
>>>>>
>>>>> ( max_over_time(node_memory_MemAvailable_bytes{instance="$node",job="$job"}[$__rate_interval])
>>>>>   / max_over_time(node_memory_MemTotal_bytes{instance="$node",job="$job"}[$__rate_interval]) ) * 100
>>>>>
>>>>> so you can see that the avg|min|max_over_time functions return identical 
>>>>> values with the following setting
>>>>> [image: image] 
>>>>>
>>>>> when I change from range -> instant, I see similar values
>>>>> [image: image] 
>>>>>
>>>>> Where do I need to make modifications in the PromQL so I can get the 
>>>>> correct min/max/avg values in the gauges, as reported by the summary table -
>>>>> [image: image] 
>>>>> for a specific duration, say - 
>>>>> [image: image] 
>>>>>
>>>>> please advise
>>>>>
>>>>>
>>>>>

-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to prometheus-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/105535b0-cde9-46cd-81b3-28ba246c99c9n%40googlegroups.com.
