Hi, is there any way to cancel a query?

Jan
On Wednesday, 13 January 2016 at 20:11:30 UTC+1, Sean Beckett wrote:

The influx CLI is basically a wrapper around curl. Killing it does not cancel any submitted queries. Currently the only way to cancel a query is to restart the process.

--
Sean Beckett
Director of Support and Professional Services
InfluxDB
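To illustrate the point: the CLI just issues an HTTP request against the /query endpoint, so a client-side timeout (or killing the client) only abandons the response while the server keeps working. A minimal Python sketch, assuming a local server on the default port and a hypothetical database "mydb":

    import requests

    try:
        resp = requests.get(
            "http://localhost:8086/query",
            params={"db": "mydb", "q": "SELECT SUM(field1) FROM m"},
            timeout=10,  # client-side timeout only
        )
        print(resp.json())
    except requests.exceptions.Timeout:
        # Giving up here (or killing the process) only abandons the HTTP
        # response; the server keeps executing the query to completion.
        print("client timed out; the query is still running on the server")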
On Mon, Jan 11, 2016 at 4:11 PM, Sarath Kamisetty wrote:

Hi,

I tried a worst-case query like "select SUM(field1) from <measurement>" (which is probably not realistic given that I have a billion+ data points); this query didn't return anything for several minutes and I had to kill the influx client. After this, even simple queries like "show stats" take forever: even after waiting more than 10 minutes I see no output, and the logs show no activity. This is with 0.10.0-nightly-d9ed54c. I also see that after the above-mentioned query, VM usage went up quite a bit, although I am not sure by exactly how much.

Thanks,
Sarath
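One way to keep such an aggregate from turning into an uncancellable long-running query is to bound it by time and accumulate the partial sums on the client. This is not something the thread's participants propose, just a sketch of a possible workaround under the same assumptions as above (local server, hypothetical database "mydb" and measurement "m", timestamps in UTC):

    from datetime import datetime, timedelta
    import requests

    def bounded_sum(start, end, step=timedelta(hours=6)):
        """Sum field1 over [start, end) in time-bounded chunks."""
        fmt = "%Y-%m-%dT%H:%M:%SZ"
        total, t = 0.0, start
        while t < end:
            q = ("SELECT SUM(field1) FROM m WHERE time >= '{}' AND time < '{}'"
                 .format(t.strftime(fmt), min(t + step, end).strftime(fmt)))
            resp = requests.get("http://localhost:8086/query",
                                params={"db": "mydb", "q": q}, timeout=30)
            series = resp.json()["results"][0].get("series", [])
            if series:
                total += series[0]["values"][0][1]  # columns are [time, sum]
            t += step
        return total

    # Usage over a hypothetical range:
    # bounded_sum(datetime(2016, 1, 1), datetime(2016, 1, 10))

Each individual request then stays short enough that a client-side timeout remains meaningful.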
On Sun, Jan 10, 2016 at 6:25 PM, Jon Seymour wrote:

> On Monday, 11 January 2016 at 12:44:58 UTC+11, Todd Persen wrote:
>
> Jon,
>
> I'd recommend upgrading to a newer version of InfluxDB. A number of the
> issues you're describing, particularly with deadlocks and long-running
> queries, have been fixed since v0.9.2, which is now about six months old.
>
> Let me know if that helps!
>
> Thanks,
> Todd

I am planning to test the upgrade this afternoon.

FWIW: I am reasonably sure that this was an instance of livelock, rather than deadlock.

jon.

On Sun, Jan 10, 2016 at 4:00 PM, Jon Seymour wrote:

I have been having problems recently with influxd (v0.9.2) 'locking up', dropping writes and causing queries to hang.

Having looked at the issue in some depth, it is now apparent that the reason it occurred is that some Grafana-originated queries were taking longer than 5 minutes to run and were being (automatically) refreshed every 5 minutes. These queries piled up inside influxd so that, at the point I killed the server, there were 30 active queries being processed, some of which had been running for 67 minutes. The queries concerned ran in acceptable timeframes across several days of data, but not with longer time ranges (for example, 30 days).

By analysing the HTTP entries in the influxd log, I was able to show that once the minimum number of concurrently active long-running queries in each 5-minute period rose above zero at around 22:50, it never dropped to zero again. See https://goo.gl/5GZ5Fd . (A sketch of this counting approach follows at the end of this message.)

Note: the graph does show the minimum number of active queries falling, but that is largely an artefact of the log-processing technique, which can't count requests that have not yet finished and hence haven't been logged. Analysis of the stack trace captured at the end of the graphed period shows that there were still 30 requests active at the time the server was killed.

The poor query performance is one thing, but the real problem is the dropped writes. The reason these occurred is that, 14 minutes prior to the server being killed, Bolt DB needed to acquire the mmaplock on a shard, an action which was blocked by the readlocks held by long-running queries. The attempt to acquire the write lock then blocked subsequent queries from obtaining the readlock, a condition that persisted for 14 minutes until the server was restarted. During these 14 minutes, influx was unresponsive for both queries and writes.

Now, one could argue that I should arrange things so that long-running queries can never occur. However, this is a difficult constraint to enforce from the Grafana front end (particularly for aggregates). The best I can do is to remove the definition of the expensive query from the definition of the dashboard, but then I lose the benefit of the query for the shorter time frames where its run times are acceptable.

The other thing is that most of the intervening HTTP infrastructure, whether browsers or reverse proxies, times out in less than 2 minutes, so even if long-running queries eventually complete, the results they produce can't be used anyway.

It seems to me that it would be advantageous if the server could impose a hard upper bound on the running time of long-running queries, which would in turn impose an upper limit on the length of time that an influxd process is unavailable for reads or writes.

Comments?

jon.
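The counting technique referred to above can be sketched as follows. Parsing of the influxd log lines themselves is omitted (their format varies by version), so the sketch simply assumes a list of (start, end) timestamps recovered from the logged requests:

    def concurrency_profile(intervals):
        """Return (time, active-query count) pairs from (start, end) intervals."""
        events = []
        for start, end in intervals:
            events.append((start, 1))   # a query became active
            events.append((end, -1))    # a query finished
        events.sort()
        active, profile = 0, []
        for t, delta in events:
            active += delta
            profile.append((t, active))
        return profile

    # Example with hypothetical epoch-second timestamps:
    # concurrency_profile([(0, 300), (60, 4020), (120, 180)])

As noted above, this can only see requests that finished and were therefore logged, which is why the graphed minimum appears to fall near the end of the period.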
