Is it possible that Prometheus is ignoring the counter as long as it's 
equal to 1?
I can't be sure, but it seems the counter only started showing up once 
some of its series went above 1.
This would be consistent with my setup: my system increments the value 
every 30 minutes, so if the condition is met twice, it can take up to an 
hour for a series to reach 2.
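
If that is what's happening, I think it would actually match how 
increase() works (as far as I understand it): increase() only measures 
growth between the samples inside the range window, and it cannot see a 
new series' initial jump from nonexistent to 1. So a counter series 
sitting at a constant 1 would legitimately give 0 over every 5m window. 
A rough way to check this, using one of the series from my earlier 
message (just a sketch, same labels as below):

# Raw samples: should plot a flat line at 1 if Prometheus ingests the series
appbackend_alerts{metric="TEMP", type="TOO_HIGH"}

# Windowed increase: stays at 0 unless the counter grows inside the window
sum(increase(appbackend_alerts{metric="TEMP", type="TOO_HIGH"}[5m]))

If the raw query shows the series flat at 1 while the increase() query 
shows 0, then nothing is being ignored; the counter just isn't 
increasing within the window.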

On Thursday, February 25, 2021 at 11:44:42 AM UTC+1 Constantin Clauzel 
wrote:

> And it just started appearing again.
>
> It took approximately the same time to show up again as last time, around 
> 1h10.
>
>
> On Thursday, February 25, 2021 at 11:37:36 AM UTC+1 Constantin Clauzel 
> wrote:
>
>> The file attached to my previous message was broken:
>>
>> On Thursday, February 25, 2021 at 11:32:52 AM UTC+1 Constantin Clauzel 
>> wrote:
>>
>>> Hey,
>>>
>>> Since this morning I've been experiencing some very weird behavior 
>>> with one of my counters.
>>> It randomly stays at zero for about an hour, then shows up again, then 
>>> drops back to zero.
>>>
>>> What is strange is that all the other metrics are showing up, meaning 
>>> Prometheus can reach the endpoint, and when I check the endpoint myself 
>>> it has the missing counter in it.
>>>
>>> Is there anything that could explain why a counter suddenly returns 
>>> only zeros, and then starts working again for no apparent reason?
>>>
>>> Please find attached how the graphs look.
>>>
>>> The queries that return all zeros:
>>> sum(increase(appbackend_alerts{metric="TEMP", type="TOO_HIGH"}[5m]))
>>> sum(increase(appbackend_alerts{metric="HUMI", type="TOO_HIGH"}[5m]))
>>>
>>> The Prometheus endpoint returns all of these appbackend_alerts lines:
>>>
>>> appbackend_alerts{boxID="0",controllerID="xxxxxxxxx",metric="HUMI",type="TOO_HIGH"}
>>>  
>>> 1
>>> appbackend_alerts{boxID="0",controllerID="xxxxxxxxx",metric="HUMI",type="TOO_LOW"}
>>>  
>>> 1
>>> appbackend_alerts{boxID="0",controllerID="xxxxxxxxx",metric="TEMP",type="TOO_LOW"}
>>>  
>>> 1
>>> appbackend_alerts{boxID="0",controllerID="xxxxxxxxx",metric="HUMI",type="TOO_LOW"}
>>>  
>>> 1
>>> appbackend_alerts{boxID="0",controllerID="xxxxxxxxx",metric="HUMI",type="TOO_HIGH"}
>>>  
>>> 1
>>> appbackend_alerts{boxID="0",controllerID="xxxxxxxxx",metric="HUMI",type="TOO_LOW"}
>>>  
>>> 1
>>> appbackend_alerts{boxID="0",controllerID="xxxxxxxxx",metric="HUMI",type="TOO_LOW"}
>>>  
>>> 1
>>> appbackend_alerts{boxID="0",controllerID="xxxxxxxxx",metric="HUMI",type="TOO_HIGH"}
>>>  
>>> 1
>>> [ ... and many more ]
>>>
>>> Thanks,
>>> Constantin
>>>
>>
