Hi Sean, unfortunately since I rebooted the server, the results of that query 
now look normal.
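
For reference, that's the COUNT query you suggested in your message below:

SELECT COUNT(value) FROM ac54edda_6a34_4b8b_99d3_a949fb3c8994.retention_1d."router.rpm" 
WHERE time > now() - 15m GROUP BY time(1m)

and these are the results it returns now: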

2016-08-31T00:01:00Z  562
2016-08-31T00:02:00Z  535
2016-08-31T00:03:00Z  490
2016-08-31T00:04:00Z  543
2016-08-31T00:05:00Z  552
2016-08-31T00:06:00Z  555
2016-08-31T00:07:00Z  628
2016-08-31T00:08:00Z  632
2016-08-31T00:09:00Z  623
2016-08-31T00:10:00Z  581
2016-08-31T00:11:00Z  584
2016-08-31T00:12:00Z  576
2016-08-31T00:13:00Z  625
2016-08-31T00:14:00Z  643
2016-08-31T00:15:00Z  595

I ran a similar query yesterday, however, and the results for the last minute 
were consistently in the 100-200 range instead of the 500-600 range where they 
should have been. By querying recent results in a sliding 1-minute window 
(e.g. 70-10 seconds ago, 75-15 seconds ago, 80-20 seconds ago, etc.) I was 
able to narrow down that it was taking ~50s for all of the raw results to be 
inserted (i.e. 'time > now() - 110s and time <= now() - 50s' returned a 
correct-looking result). Rebooting the server resolved this problem (killing 
and restarting InfluxDB without a reboot didn't), and raw results are now 
appearing again without lag.
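
For reference, the sliding-window queries I was running were along these lines 
(same raw measurement as the CQ, shifting the window back in 5-second steps):

SELECT COUNT(value) FROM ac54edda_6a34_4b8b_99d3_a949fb3c8994.retention_1d."router.rpm" 
WHERE time > now() - 110s AND time <= now() - 50s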

So the issue appears to be not that the CQs were lagging, but that the raw 
inserts themselves were. Now that the server has been rebooted the problem is 
resolved, so I guess all I can do is keep an eye on it and get back to you if 
it happens again, unless you have any other suggestions?

Thanks


On Wednesday, August 31, 2016 at 1:36:14 AM UTC+9, Sean Beckett wrote:
>
> I'm not sure why the CQs would have started lagging, but in case it starts 
> to happen again, you can set the CQs to recalculate prior intervals, too. 
> That will help with the backfill:
>
> CREATE CONTINUOUS QUERY router_rpm_1m_sum ON ac54edda_6a34_4b8b_99d3_a949fb3c8994
> RESAMPLE FOR 5m
> BEGIN
> SELECT sum(value) INTO ac54edda_6a34_4b8b_99d3_a949fb3c8994.retention_4w."router.rpm.1m.sum"
> FROM ac54edda_6a34_4b8b_99d3_a949fb3c8994.retention_1d."router.rpm"
> GROUP BY time(1m) END
>
> That will cause the CQ to recalculate the 1 minute buckets for the prior 
> five minutes each time it runs. 
>
> However, if the CQs are lagging because they can't execute in time, that 
> will just make the issue worse.
>
> What are the results of "SELECT COUNT(value) FROM 
> ac54edda_6a34_4b8b_99d3_a949fb3c8994.retention_1d."router.rpm" 
> WHERE time > now() - 15m GROUP BY time(1m)"?
>
>
> On Tue, Aug 30, 2016 at 5:23 AM, <[email protected]> wrote:
>
>> After further investigation, it seems that more than half of the inserts 
>> were lagging by up to 1min for some reason. Since continuous queries don't 
>> backfill, the continuous query sums were low, but checking the raw data 
>> showed the correct numbers since the data had appeared there later.
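>>
>> To check, I essentially re-summed the raw series in 1-minute buckets and 
>> compared it against what the CQ had written, roughly like this (time range 
>> just as an example):
>>
>> SELECT SUM(value) FROM ac54edda_6a34_4b8b_99d3_a949fb3c8994.retention_1d."router.rpm" 
>> WHERE time > now() - 6h GROUP BY time(1m)
>>
>> SELECT value FROM ac54edda_6a34_4b8b_99d3_a949fb3c8994.retention_4w."router.rpm.1m.sum" 
>> WHERE time > now() - 6h
>>
>> The sums computed from the raw data looked correct, while the CQ's stored 
>> values were low.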
>>
>> Rebooting the InfluxDB server has corrected the issue and the numbers are 
>> now correct again, but I'm curious what would cause this kind of insert lag.
>>
>> On Tuesday, August 30, 2016 at 11:23:23 AM UTC+9, dave wrote:
>> > Hi, I've noticed recently that at least one of my continuous queries 
>> doesn't contain all of the data that the raw series contains. See 
>> http://imgur.com/a/R76G6 for an example comparison of the raw data vs. 
>> the continuous query data. The continuous query is defined as follows:
>> >
>> > CREATE CONTINUOUS QUERY router_rpm_1m_sum ON ac54edda_6a34_4b8b_99d3_a949fb3c8994
>> > BEGIN
>> > SELECT sum(value) INTO ac54edda_6a34_4b8b_99d3_a949fb3c8994.retention_4w."router.rpm.1m.sum"
>> > FROM ac54edda_6a34_4b8b_99d3_a949fb3c8994.retention_1d."router.rpm"
>> > GROUP BY time(1m) END
>> >
>> > This wasn't always the case - up until several weeks ago it was 
>> recording the correct data. I noticed a dip in our traffic graph, and 
>> assumed that traffic had decreased, but recently checking the raw 
>> (non-continuous) data I discovered that this was not the case. 
>> Unfortunately the correct data has been deleted due to the retention 
>> policy, so I can no longer compare it.
>> >
>> > Server load seems low (load average 0.09, CPU ~2%, 28% memory used, 13% 
>> disk space used), and restarting InfluxDB hasn't helped.
>> >
>> > I'm running v0.13.0 on Ubuntu 14.04.4 LTS 
>> > (https://s3.amazonaws.com/dl.influxdata.com/influxdb/releases/influxdb_0.13.0_amd64.deb).
>> >
>> > Any idea what is going on here, or what my next steps might be to 
>> diagnose and fix the issue?
>> >
>> > Thanks
>>
>
>
>
> -- 
> Sean Beckett
> Director of Support and Professional Services
> InfluxDB
>

-- 
Remember to include the InfluxDB version number with all issue reports
--- 
You received this message because you are subscribed to the Google Groups 
"InfluxDB" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/influxdb.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/influxdb/9b4b7dc3-0066-4dda-891b-06c098e8bec9%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
