Re: [influxdb] Internal server error, timeout and unusable server after large imports

2016-10-12 Thread Sean Beckett
Tanya, when you write the data in ms but don't specify the precision, the
database interprets those millisecond timestamps as nanoseconds, and all
the data is written to a single shard covering Jan 1, 1970.


> insert msns value=42 1476336190000
> select * from msns
name: msns
--
time value
1476336190000 42

> precision rfc3339
> select * from msns
name: msns
--
time value
1970-01-01T00:24:36.33619Z 42

That's why everything is fast, because all the data is in one shard.
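
For comparison, here is a rough sketch of writing the same point with the
millisecond precision declared on the write, then reading it back. The
database name mydb is an assumption for illustration, and since auth-enabled
is true in the config quoted below, add -u with your credentials:

# declare ms precision so the timestamp is interpreted as milliseconds
curl -XPOST 'http://localhost:8086/write?db=mydb&precision=ms' \
  --data-binary 'msns value=42 1476336190000'

# read it back; with no epoch parameter the timestamp comes back in RFC3339,
# so the point should show an October 2016 time rather than Jan 1, 1970
curl -G 'http://localhost:8086/query' \
  --data-urlencode 'db=mydb' \
  --data-urlencode 'q=SELECT * FROM msns'

With the precision declared, the point lands in the shard group covering
October 2016 instead of the 1970 shard.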

On Wed, Oct 12, 2016 at 9:50 PM, Tanya Unterberger <
tanya.unterber...@gmail.com> wrote:

> Hi Sean,
>
> I can reproduce all the CPU issues, slowness, etc. if I try to import the
> data that I have in milliseconds, specifying precision as milliseconds.
>
> If I insert the same data without specifying any precision and query
> without specifying any precision, the database is lightning fast. The same
> data.
>
> The reason I was adding precision=ms is that I thought it was the right
> thing to do. The manual advises that Influx stores the data in nanoseconds
> but to use the lowest precision to insert. So at some stage I even used
> hours, inserting the data with precision=h. When Influx tried to convert
> that data to nanoseconds, index it, etc., it had a hissy fit.
>
> Is this a bug, or should the manual state that if you query the data at the
> same precision you used to insert it, you can go with the lowest precision
> and not specify the precision when inserting?
>
> Thanks,
> Tanya
>
> On 13 October 2016 at 10:26, Tanya Unterberger <
> tanya.unterber...@gmail.com> wrote:
>
>> Hi Sean,
>>
>> The data is from 1838 to 2016, daily (sparse at times). We need to retain
>> it, therefore the default policy.
>>
>> Thanks,
>> Tanya
>>
>> On 13 October 2016 at 06:26, Sean Beckett  wrote:
>>
>>> Tanya, what range of time does your data cover? What are the retention
>>> policies on the database?
>>>
>>> On Tue, Oct 11, 2016 at 11:14 PM, Tanya Unterberger <
>>> tanya.unterber...@gmail.com> wrote:
>>>
 Hi Sean,

 1. Initially I killed the process
 2. At some point I restarted influxdb service
 3. Error logs show no errors
 4. I rebuilt the server, installed the latest rpm. Reimported the data
 via scripts. Data goes in, but the server is unusable. Looks like indexing
 might be stuffed. The size of the data in that database is 38M. Total size
 of /var/lib/influxdb/data/ 273M
 5. CPU went berserk and doesn't come down
 6. A query like select count(blah) to the measurement that was batch
 inserted (10k records at a time) is unusable and times out
 7. I need to import around 15 million records. How should I throttle
 that?

 At the moment I am pulling my hair out (not a pretty sight)

 Thanks a lot!
 Tanya

 On 12 October 2016 at 06:11, Sean Beckett  wrote:

>
>
> On Tue, Oct 11, 2016 at 12:11 AM,  wrote:
>
>> Hi,
>>
>> It seems that the old issue might have surfaced again (#3349) in v1.0.
>>
>> I tried to insert a large number of records (3913595) via a script,
>> inserting 10,000 rows at a time.
>>
>> After a while I received
>>
>> HTTP/1.1 500 Internal Server Error
>> Content-Type: application/json
>> Request-Id: ac8ebbbe-8f70-11e6-8ce7-
>> X-Influxdb-Version: 1.0.0
>> Date: Tue, 11 Oct 2016 05:12:02 GMT
>> Content-Length: 20
>>
>> {"error":"timeout"}
>> HTTP/1.1 100 Continue
>>
>> I killed the process, after which the whole box became pretty much
>> unresponsive.
>>
>
> Killed the InfluxDB process, or the batch writing script process?
>
>
>>
>> There is nothing in the logs (i.e. sudo ls /var/log/influxdb/ gives
>> me nothing) although the setting for http logging is true:
>>
>
> systemd OSes put the logs in a new place (yay!?). See
> http://docs.influxdata.com/influxdb/v1.0/administration/logs/#systemd
> for how to read the logs.
>
>
>>
>> [http]
>>   enabled = true
>>   bind-address = ":8086"
>>   auth-enabled = true
>>   log-enabled = true
>>
>> I tried to restart influx, but got the following error:
>>
>> Failed to connect to http://localhost:8086
>> Please check your connection settings and ensure 'influxd' is running.
>>
>
> The `influx` console is just a fancy wrapper on the API. That error
> doesn't mean much except that the HTTP listener in InfluxDB is not yet up
> and running.
>
>
>>
>> Although I can see that influxd is up and running:
>>
>> > systemctl | grep influx
>> influxdb.service
>> loaded active running   InfluxDB is an open-source,
>> distributed, time series database
>>
>> What do I do now?
>>
>
> Check the logs as referenced above.
>
> 

Re: [influxdb] Internal server error, timeout and unusable server after large imports

2016-10-12 Thread Sean Beckett
That's the entire source of the issue. The system is creating 1 week shards
from 1838 to now. That's a bit over 9000 shard groups, each of which only
has a few hundred points. The shard files are incredibly sparse, and the
overhead for each one is fixed.

Use shard durations of 10 years or more. That way each shard will have >
200k points and there will only be 18 or fewer shards.
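
For example, a minimal sketch of what that change could look like from the
influx CLI, assuming the data sits in a database named mydb under the default
autogen retention policy (both names are assumptions, not from this thread)
and that your version supports the SHARD DURATION clause:

> ALTER RETENTION POLICY "autogen" ON "mydb" SHARD DURATION 520w
> SHOW RETENTION POLICIES ON "mydb"

520 weeks is roughly 10 years. Note that a new shard group duration only
applies to shard groups created after the change, so the history would need to
be dropped and re-imported for the existing shards to be consolidated.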

Eliminate the duplicate points if you can, but with longer shard durations
the system should be much more performant, and the overwrites may not be an
issue.

On Wed, Oct 12, 2016 at 5:26 PM, Tanya Unterberger <
tanya.unterber...@gmail.com> wrote:

> Hi Sean,
>
> The data is from 1838 to 2016, daily (sparse at times). We need to retain
> it, therefore the default policy.
>
> Thanks,
> Tanya
>
> On 13 October 2016 at 06:26, Sean Beckett  wrote:
>
>> Tanya, what range of time does your data cover? What are the retention
>> policies on the database?
>>
>> On Tue, Oct 11, 2016 at 11:14 PM, Tanya Unterberger <
>> tanya.unterber...@gmail.com> wrote:
>>
>>> Hi Sean,
>>>
>>> 1. Initially I killed the process
>>> 2. At some point I restarted influxdb service
>>> 3. Error logs show no errors
>>> 4. I rebuilt the server, installed the latest rpm. Reimported the data
>>> via scripts. Data goes in, but the server is unusable. Looks like indexing
>>> might be stuffed. The size of the data in that database is 38M. Total size
>>> of /var/lib/influxdb/data/ 273M
>>> 5. CPU went berserk and doesn't come down
>>> 6. A query like select count(blah) to the measurement that was batch
>>> inserted (10k records at a time) is unusable and times out
>>> 7. I need to import around 15 million records. How should I throttle
>>> that?
>>>
>>> At the moment I am pulling my hair out (not a pretty sight)
>>>
>>> Thanks a lot!
>>> Tanya
>>>
>>> On 12 October 2016 at 06:11, Sean Beckett  wrote:
>>>


 On Tue, Oct 11, 2016 at 12:11 AM,  wrote:

> Hi,
>
> It seems that the old issue might have surfaced again (#3349) in v1.0.
>
> I tried to insert a large number of records (3913595) via a script,
> inserting 10,000 rows at a time.
>
> After a while I received
>
> HTTP/1.1 500 Internal Server Error
> Content-Type: application/json
> Request-Id: ac8ebbbe-8f70-11e6-8ce7-
> X-Influxdb-Version: 1.0.0
> Date: Tue, 11 Oct 2016 05:12:02 GMT
> Content-Length: 20
>
> {"error":"timeout"}
> HTTP/1.1 100 Continue
>
> I killed the process, after which the whole box became pretty much
> unresponsive.
>

 Killed the InfluxDB process, or the batch writing script process?


>
> There is nothing in the logs (i.e. sudo ls /var/log/influxdb/ gives me
> nothing) although the setting for http logging is true:
>

 systemd OSes put the logs in a new place (yay!?). See
 http://docs.influxdata.com/influxdb/v1.0/administration/logs/#systemd
 for how to read the logs.


>
> [http]
>   enabled = true
>   bind-address = ":8086"
>   auth-enabled = true
>   log-enabled = true
>
> I tried to restart influx, but got the following error:
>
> Failed to connect to http://localhost:8086
> Please check your connection settings and ensure 'influxd' is running.
>

 The `influx` console is just a fancy wrapper on the API. That error
 doesn't mean much except that the HTTP listener in InfluxDB is not yet up
 and running.


>
> Although I can see that influxd is up and running:
>
> > systemctl | grep influx
> influxdb.service
> loaded active running   InfluxDB is an open-source,
> distributed, time series database
>
> What do I do now?
>

 Check the logs as referenced above.

 The non-responsiveness on startup isn't surprising. It sounds like the
 system was overwhelmed with writes, which means that the WAL would have
 many points cached, waiting to be flushed to disk. On restart, InfluxDB
 won't accept new writes or queries until the cached ones in the WAL have
 persisted. For this reason, the HTTP listener is off until the WAL is
 flushed.


>
> I tried the same import over the weekend, then the script timeout
> happened eventually but the result was the same unresponsive, unusable
> server. We rebuilt the box and started again.
>

 It sounds like the box is just overwhelmed. Did you get backoff
 messages from the writes before the crash? What are the machine specs?



>
> Perhaps it is worthwhile mentioning that the same measurement already
> contained about 9 million records. Some of these records had the same
> timestamp as the ones I tried to import, i.e. they should have been 
> merged.
>

 Overwriting points is much much 

Re: [influxdb] Internal server error, timeout and unusable server after large imports

2016-10-12 Thread Tanya Unterberger
Hi Sean,

I can reproduce all the CPU issues, slowness, etc. if I try to import the
data that I have in milliseconds, specifying precision as milliseconds.

If I insert the same data without specifying any precision and query
without specifying any precision, the database is lightning fast. The same
data.

The reason I was adding precision=ms is that I thought it was the right
thing to do. The manual advises that Influx stores the data in nanoseconds
but to use the lowest precision to insert. So at some stage I even used
hours, inserting the data with precision=h. When Influx tried to convert
that data to nanoseconds, index it, etc., it had a hissy fit.

Is this a bug, or should the manual state that if you query the data at the
same precision you used to insert it, you can go with the lowest precision
and not specify the precision when inserting?

Thanks,
Tanya

On 13 October 2016 at 10:26, Tanya Unterberger 
wrote:

> Hi Sean,
>
> The data is from 1838 to 2016, daily (sparse at times). We need to retain
> it, therefore the default policy.
>
> Thanks,
> Tanya
>
> On 13 October 2016 at 06:26, Sean Beckett  wrote:
>
>> Tanya, what range of time does your data cover? What are the retention
>> policies on the database?
>>
>> On Tue, Oct 11, 2016 at 11:14 PM, Tanya Unterberger <
>> tanya.unterber...@gmail.com> wrote:
>>
>>> Hi Sean,
>>>
>>> 1. Initially I killed the process
>>> 2. At some point I restarted influxdb service
>>> 3. Error logs show no errors
>>> 4. I rebuilt the server, installed the latest rpm. Reimported the data
>>> via scripts. Data goes in, but the server is unusable. Looks like indexing
>>> might be stuffed. The size of the data in that database is 38M. Total size
>>> of /var/lib/influxdb/data/ 273M
>>> 5. CPU went berserk and doesn't come down
>>> 6. A query like select count(blah) to the measurement that was batch
>>> inserted (10k records at a time) is unusable and times out
>>> 7. I need to import around 15 million records. How should I throttle
>>> that?
>>>
>>> At the moment I am pulling my hair out (not a pretty sight)
>>>
>>> Thanks a lot!
>>> Tanya
>>>
>>> On 12 October 2016 at 06:11, Sean Beckett  wrote:
>>>


 On Tue, Oct 11, 2016 at 12:11 AM,  wrote:

> Hi,
>
> It seems that the old issue might have surfaced again (#3349) in v1.0.
>
> I tried to insert a large number of records (3913595) via a script,
> inserting 10,000 rows at a time.
>
> After a while I received
>
> HTTP/1.1 500 Internal Server Error
> Content-Type: application/json
> Request-Id: ac8ebbbe-8f70-11e6-8ce7-
> X-Influxdb-Version: 1.0.0
> Date: Tue, 11 Oct 2016 05:12:02 GMT
> Content-Length: 20
>
> {"error":"timeout"}
> HTTP/1.1 100 Continue
>
> I killed the process, after which the whole box became pretty much
> unresponsive.
>

 Killed the InfluxDB process, or the batch writing script process?


>
> There is nothing in the logs (i.e. sudo ls /var/log/influxdb/ gives me
> nothing) although the setting for http logging is true:
>

 systemd OSes put the logs in a new place (yay!?). See
 http://docs.influxdata.com/influxdb/v1.0/administration/logs/#systemd
 for how to read the logs.


>
> [http]
>   enabled = true
>   bind-address = ":8086"
>   auth-enabled = true
>   log-enabled = true
>
> I tried to restart influx, but got the following error:
>
> Failed to connect to http://localhost:8086
> Please check your connection settings and ensure 'influxd' is running.
>

 The `influx` console is just a fancy wrapper on the API. That error
 doesn't mean much except that the HTTP listener in InfluxDB is not yet up
 and running.


>
> Although I can see that influxd is up and running:
>
> > systemctl | grep influx
> influxdb.service
> loaded active running   InfluxDB is an open-source,
> distributed, time series database
>
> What do I do now?
>

 Check the logs as referenced above.

 The non-responsiveness on startup isn't surprising. It sounds like the
 system was overwhelmed with writes, which means that the WAL would have
 many points cached, waiting to be flushed to disk. On restart, InfluxDB
 won't accept new writes or queries until the cached ones in the WAL have
 persisted. For this reason, the HTTP listener is off until the WAL is
 flushed.


>
> I tried the same import over the weekend, then the script timeout
> happened eventually but the result was the same unresponsive, unusable
> server. We rebuilt the box and started again.
>

 It sounds like the box is just overwhelmed. Did you get backoff
 messages from the writes before the crash? 

Re: [influxdb] Continuous queries with latency?

2016-10-12 Thread Michael C
Brilliant, thank you!  I should have been clever enough to have figured 
that out.  Silly me!

Thank you!

Mike




On Wednesday, October 12, 2016 at 12:23:58 PM UTC-7, Sean Beckett wrote:
>
> You want the FOR clause, which will cause the CQs to recalculate prior 
> intervals and pick up any newly written points in those intervals. 
> https://docs.influxdata.com/influxdb/v1.0/query_language/continuous_queries/#advanced-syntax.
>  
> E.g.
>
> CREATE CONTINUOUS QUERY foo ON db 
> RESAMPLE EVERY 20s FOR 2m
> BEGIN 
>   SELECT median(aa) AS aa INTO foomeas FROM barmeas GROUP BY time(20s),*  
> END
>
> Note that the GROUP BY time offset just moves the beginnings of the 
> buckets; delaying the query execution time is just a side effect. I've 
> removed it from the query.
>
> On Wed, Oct 12, 2016 at 10:43 AM, Michael C  > wrote:
>
>> I was wondering if someone out there could offer some advice on setting 
>> up a CQ for my system.  It would appear CQs might not be powerful enough to 
>> accomplish what I'm after, but I'm still new-ish to influx so maybe I'm wrong?
>>
>>
>> I have a system where multiple nodes out in the world transmit packets of 
>> data once per minute, that contain 60 samples each, 1 for each second.  The 
>> data is received and stored in influxdb.  Note that this packet of 60 
>> samples from a given node could be received at any time over the minute. 
>> Thus, from influx's perspective there could be up to 1 minute of latency 
>> before receiving the next 60 seconds of data with respect to the start of 
>> each new minute.
>>
>> With this set up, I would like to have a CQ perform a 20second median on 
>> the second by second data.
>>
>> However it would seem that the latency of my system poses a problem to CQ 
>> usage. It would seem that CQs need to execute within the vicinity of 
>> now() and for my case would occur on a 20-second clock boundary.  Given my 
>> setup, there would be many cases where the data simply hasn't arrived yet 
>> when the query ran due to my 1 minute latency.  I tried adding an offset in 
>> the GROUP BY time(), but this doesn't seem to work (as expected after 
>> reading the docs, 
>> https://docs.influxdata.com/influxdb/v1.0/query_language/continuous_queries/
>> )
>>
>>
>> Below is an example of one of the many queries I have tried:
>>
>> CREATE CONTINUOUS QUERY foo ON db RESAMPLE EVERY 20s BEGIN SELECT 
>> median(aa) AS aa INTO foomeas FROM barmeas GROUP BY time(20s, -60s),*  END
>>
>>
>> Is there any way to make CQs work with my system given the latency?
>>
>> I suspect it would work if I were computing 5 minute medians as I could 
>> then offset by -1m, but that is not what I'm after.  Or maybe I'm confused.
>>
>> Any advice would be much appreciated!
>>
>> Mike
>>
>> -- 
>> Remember to include the version number!
>> --- 
>> You received this message because you are subscribed to the Google Groups 
>> "InfluxData" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to influxdb+u...@googlegroups.com .
>> To post to this group, send email to infl...@googlegroups.com 
>> .
>> Visit this group at https://groups.google.com/group/influxdb.
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/influxdb/7574ae65-fdcf-4417-b299-d72471d79ff8%40googlegroups.com
>>  
>> 
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>
>
> -- 
> Sean Beckett
> Director of Support and Professional Services
> InfluxDB
>

-- 
Remember to include the version number!
--- 
You received this message because you are subscribed to the Google Groups 
"InfluxData" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to influxdb+unsubscr...@googlegroups.com.
To post to this group, send email to influxdb@googlegroups.com.
Visit this group at https://groups.google.com/group/influxdb.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/influxdb/f2867b4a-9188-4c1d-955b-fa7497408912%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [influxdb] Kapacitor record hung issue.

2016-10-12 Thread Sean Beckett
Perhaps Kapacitor isn't receiving any data from InfluxDB. What is the
output of `show subscriptions` from InfluxDB and `kapacitor stats ingress`
from Kapacitor?

What is the output of `kapacitor show task_name` for your task?
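
For reference, a quick sketch of gathering those diagnostics from a shell,
using the cpu_alert task name from your message:

# confirm InfluxDB has a subscription pointing at Kapacitor
influx -execute 'SHOW SUBSCRIPTIONS'

# confirm Kapacitor is actually receiving points on that subscription
kapacitor stats ingress

# inspect the task definition, its type, and whether it is enabled
kapacitor show cpu_alert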

On Wed, Oct 12, 2016 at 5:08 PM, Nikhil Agrawal 
wrote:

> Hi,
>
> A bit of background.
>
> I have installed InfluxDB and Grafana and my Monitoring Project is working
> fine.
>
> Now I am trying to set up alerts for all InfluxDB measurements, so I have
> installed Kapacitor, but I get no response when I enter the record command.
>
> kapacitor record stream -task cpu_alert -duration 20
>
> No response - Kapacitor is hung.
>
>
> Thanks,
>
> Nikhil
>
> --
>
> Thanks, Nikhil Agrawal | +1 408 505 8443
>
> --
> Remember to include the version number!
> ---
> You received this message because you are subscribed to the Google Groups
> "InfluxData" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to influxdb+unsubscr...@googlegroups.com.
> To post to this group, send email to influxdb@googlegroups.com.
> Visit this group at https://groups.google.com/group/influxdb.
> To view this discussion on the web visit https://groups.google.com/d/ms
> gid/influxdb/CAMtKHpqs6C4_Fx5ti3kf%3D92Tf8xzDqaFRs4tnjQADVUF
> MvRCmA%40mail.gmail.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>



-- 
Sean Beckett
Director of Support and Professional Services
InfluxDB

-- 
Remember to include the version number!
--- 
You received this message because you are subscribed to the Google Groups 
"InfluxData" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to influxdb+unsubscr...@googlegroups.com.
To post to this group, send email to influxdb@googlegroups.com.
Visit this group at https://groups.google.com/group/influxdb.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/influxdb/CALGqCvNK%2BvdqC112KG1F6Lx2bCBr94x_hcU54hpf-OxFQJP2eA%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: [influxdb] Internal server error, timeout and unusable server after large imports

2016-10-12 Thread Tanya Unterberger
Hi Sean,

The data is from 1838 to 2016, daily (sparse at times). We need to retain
it, therefore the default policy.

Thanks,
Tanya

On 13 October 2016 at 06:26, Sean Beckett  wrote:

> Tanya, what range of time does your data cover? What are the retention
> policies on the database?
>
> On Tue, Oct 11, 2016 at 11:14 PM, Tanya Unterberger <
> tanya.unterber...@gmail.com> wrote:
>
>> Hi Sean,
>>
>> 1. Initially I killed the process
>> 2. At some point I restarted influxdb service
>> 3. Error logs show no errors
>> 4. I rebuilt the server, installed the latest rpm. Reimported the data
>> via scripts. Data goes in, but the server is unusable. Looks like indexing
>> might be stuffed. The size of the data in that database is 38M. Total size
>> of /var/lib/influxdb/data/ 273M
>> 5. CPU went berserk and doesn't come down
>> 6. A query like select count(blah) to the measurement that was batch
>> inserted (10k records at a time) is unusable and times out
>> 7. I need to import around 15 million records. How should I throttle that?
>>
>> At the moment I am pulling my hair out (not a pretty sight)
>>
>> Thanks a lot!
>> Tanya
>>
>> On 12 October 2016 at 06:11, Sean Beckett  wrote:
>>
>>>
>>>
>>> On Tue, Oct 11, 2016 at 12:11 AM,  wrote:
>>>
 Hi,

 It seems that the old issue might have surfaced again (#3349) in v1.0.

 I tried to insert a large number of records (3913595) via a script,
 inserting 10,000 rows at a time.

 After a while I received

 HTTP/1.1 500 Internal Server Error
 Content-Type: application/json
 Request-Id: ac8ebbbe-8f70-11e6-8ce7-
 X-Influxdb-Version: 1.0.0
 Date: Tue, 11 Oct 2016 05:12:02 GMT
 Content-Length: 20

 {"error":"timeout"}
 HTTP/1.1 100 Continue

 I killed the process, after which the whole box became pretty much
 unresponsive.

>>>
>>> Killed the InfluxDB process, or the batch writing script process?
>>>
>>>

 There is nothing in the logs (i.e. sudo ls /var/log/influxdb/ gives me
 nothing) although the setting for http logging is true:

>>>
>>> systemd OSes put the logs in a new place (yay!?). See
>>> http://docs.influxdata.com/influxdb/v1.0/administration/logs/#systemd
>>> for how to read the logs.
>>>
>>>

 [http]
   enabled = true
   bind-address = ":8086"
   auth-enabled = true
   log-enabled = true

 I tried to restart influx, but got the following error:

 Failed to connect to http://localhost:8086
 Please check your connection settings and ensure 'influxd' is running.

>>>
>>> The `influx` console is just a fancy wrapper on the API. That error
>>> doesn't mean much except that the HTTP listener in InfluxDB is not yet up
>>> and running.
>>>
>>>

 Although I can see that influxd is up and running:

 > systemctl | grep influx
 influxdb.service
   loaded active running   InfluxDB is an open-source,
 distributed, time series database

 What do I do now?

>>>
>>> Check the logs as referenced above.
>>>
>>> The non-responsiveness on startup isn't surprising. It sounds like the
>>> system was overwhelmed with writes, which means that the WAL would have
>>> many points cached, waiting to be flushed to disk. On restart, InfluxDB
>>> won't accept new writes or queries until the cached ones in the WAL have
>>> persisted. For this reason, the HTTP listener is off until the WAL is
>>> flushed.
>>>
>>>

 I tried the same import over the weekend, then the script timeout
 happened eventually but the result was the same unresponsive, unusable
 server. We rebuilt the box and started again.

>>>
>>> It sounds like the box is just overwhelmed. Did you get backoff messages
>>> from the writes before the crash? What are the machine specs?
>>>
>>>
>>>

 Perhaps it is worthwhile mentioning that the same measurement already
 contained about 9 million records. Some of these records had the same
 timestamp as the ones I tried to import, i.e. they should have been merged.

>>>
>>> Overwriting points is much much more expensive than posting new points.
>>> Each overwritten point triggers a tombstone record which must later be
>>> processed. This can trigger frequent compactions of the TSM files. With a
>>> high write load and frequent compactions, the system would encounter
>>> significant CPU pressure.
>>>
>>>

 Interestingly enough the same amount of data was fine when I forgot to
 add precision in ms, i.e. all records were imported as nanoseconds, but in
 fact they "lacked" 6 zeroes.

>>>
>>> That would mean all points are going to the same shard. It is more
>>> resource intensive to load points across a wide range of time, since more
>>> shard files are involved. InfluxDB does best with sequential
>>> chronologically ordered unique points from the very recent 

[influxdb] Kapacitor record hung issue.

2016-10-12 Thread Nikhil Agrawal
Hi,

A bit of background.

I have installed InfluxDB and Grafana and my Monitoring Project is working
fine.

Now I am trying to set up alerts for all InfluxDB measurements, so I have
installed Kapacitor, but I get no response when I enter the record command.

kapacitor record stream -task cpu_alert -duration 20

No response - Kapacitor is hung.


Thanks,

Nikhil

-- 

Thanks, Nikhil Agrawal | +1 408 505 8443

-- 
Remember to include the version number!
--- 
You received this message because you are subscribed to the Google Groups 
"InfluxData" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to influxdb+unsubscr...@googlegroups.com.
To post to this group, send email to influxdb@googlegroups.com.
Visit this group at https://groups.google.com/group/influxdb.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/influxdb/CAMtKHpqs6C4_Fx5ti3kf%3D92Tf8xzDqaFRs4tnjQADVUFMvRCmA%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: [influxdb] Not able to view new measurement after creating continuous query

2016-10-12 Thread Sean Beckett
Does

SELECT sum("value") as Sumin FROM "data" where metric = 'In_bits' GROUP BY
time(15m)

return the values you expect?

Can you share actual CLI or curl output? This seems like a syntax issue.
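
If it helps, a minimal sketch of running that same check over HTTP so the raw
JSON response can be pasted back (db_name is the database from your CQ; adjust
the name and add credentials if auth is enabled):

# run the aggregation directly against the HTTP API and capture the raw output
curl -G 'http://localhost:8086/query' \
  --data-urlencode 'db=db_name' \
  --data-urlencode "q=SELECT sum(\"value\") AS Sumin FROM \"data\" WHERE metric = 'In_bits' GROUP BY time(15m)"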

On Wed, Oct 12, 2016 at 3:00 PM,  wrote:

> On Wednesday, October 12, 2016 at 4:52:49 PM UTC-4, Sean Beckett wrote:
> > CQs only operate on recent data (by timestamp, not write time). Are you
> continually inserting new values into "data"?
> >
> >
> > On Wed, Oct 12, 2016 at 1:56 PM,   wrote:
> > On Wednesday, October 12, 2016 at 3:41:20 PM UTC-4, Sean Beckett wrote:
> >
> > > To clarify, SELECT * FROM data.copy expands out to "fetch all fields
> from the retention policy named 'data', and the measurement named 'copy'."
> InfluxDB supports [<database>.[<retention_policy>].]<measurement> in the
> FROM clause, so if there's a period, it is separating the DB or RP from the
> measurement.
> >
> > >
> >
> > >
> >
> > > Using double-quotes forces the query interpreter to parse that as a
> single identifier, and thus just the measurement name.
> >
> > >
> >
> > >
> >
> > > On Wed, Oct 12, 2016 at 1:29 PM, Sean Beckett 
> wrote:
> >
> > >
> >
> > > > After creating this, how do I see the results saved in new
> measurement data.copy?
> >
> > >
> >
> > >
> >
> > > SELECT * FROM "data.copy"
> >
> > >
> >
> > >
> >
> > >
> >
> > >
> >
> > > On Wed, Oct 12, 2016 at 8:20 AM,   wrote:
> >
> > > I have a measurement that has the following fields in InfluxDB. The
> measurement name is data.
> >
> > >
> >
> > >
> >
> > >
> >
> > > Measurement name: data
> >
> > >
> >
> > > Time                    Device          Interface    Metric     Value
> > >
> > > 2016-10-11T19:00:00Z    device1_name    int_name     In_bits    10
> > >
> > > 2016-10-11T19:00:00Z    device2_name    int_name     In_bits    5
> >
> > >
> >
> > > .
> >
> > >
> >
> > > .
> >
> > >
> >
> > > .
> >
> > >
> >
> > >
> >
> > >
> >
> > > I have created a continuous query as follows:
> >
> > >
> >
> > >
> >
> > >
> >
> > > CREATE CONTINUOUS QUERY "test_query" ON "db_name" BEGIN SELECT
> sum("value") as Sumin INTO "data.copy" FROM "data" where metric = 'In_bits'
> GROUP BY time(15m), device END
> >
> > >
> >
> > >
> >
> > >
> >
> > > After creating this, how do I see the results saved in new measurement
> data.copy? After creating this query, I am not able to find the new
> measurement that has been created. If I'm doing something wrong, I would
> like to get more input on this matter. Thanks!
> >
> > >
> >
> > >
> >
> > >
> >
> > > --
> >
> > >
> >
> > > Remember to include the version number!
> >
> > >
> >
> > > ---
> >
> > >
> >
> > > You received this message because you are subscribed to the Google
> Groups "InfluxData" group.
> >
> > >
> >
> > > To unsubscribe from this group and stop receiving emails from it, send
> an email to influxdb+u...@googlegroups.com.
> >
> > >
> >
> > > To post to this group, send email to infl...@googlegroups.com.
> >
> > >
> >
> > > Visit this group at https://groups.google.com/group/influxdb.
> >
> > >
> >
> > > To view this discussion on the web visit https://groups.google.com/d/
> msgid/influxdb/8dd70a88-f5ae-43ed-8f21-083b1aedff7d%40googlegroups.com.
> >
> > >
> >
> > > For more options, visit https://groups.google.com/d/optout.
> >
> > >
> >
> > >
> >
> > >
> >
> > >
> >
> > >
> >
> > > --
> >
> > >
> >
> > >
> >
> > > Sean Beckett
> >
> > > Director of Support and Professional Services
> >
> > > InfluxDB
> >
> > >
> >
> > >
> >
> > >
> >
> > >
> >
> > >
> >
> > > --
> >
> > >
> >
> > >
> >
> > > Sean Beckett
> >
> > > Director of Support and Professional Services
> >
> > > InfluxDB
> >
> >
> >
> > Hi,
> >
> > Thank you for that input. The version number is 1.0.0
> >
> > And I have modified the continuous query to put the results into a new
> measurement - "data_copy"
> >
> > If I do a "select * from data_copy" it shows - "success! (no results to
> display)"
> >
> > If I do ""show measurements" I am unable to see "data_copy"
> >
> > If I just run the select statement, it does return results. However, for
> the continuous query, it doesn't put it into a new measurement which is
> what I want it to do.
> >
> >
> >
> > --
> >
> > Remember to include the version number!
> >
> > ---
> >
> > You received this message because you are subscribed to the Google
> Groups "InfluxData" group.
> >
> > To unsubscribe from this group and stop receiving emails from it, send
> an email to influxdb+u...@googlegroups.com.
> >
> > To post to this group, send email to infl...@googlegroups.com.
> >
> > Visit this group at https://groups.google.com/group/influxdb.
> >
> > To view this discussion on the web visit https://groups.google.com/d/
> msgid/influxdb/e58f7c72-f7c2-4a01-a760-520ec13f9e98%40googlegroups.com.
> >
> >
> >
> > For more options, visit https://groups.google.com/d/optout.
> >
> >
> >
> >
> >
> > --
> >
> >
> > Sean Beckett
> > Director of Support and 

Re: [influxdb] Not able to view new measurement after creating continuous query

2016-10-12 Thread tracyann . monteiro
On Wednesday, October 12, 2016 at 4:52:49 PM UTC-4, Sean Beckett wrote:
> CQs only operate on recent data (by timestamp, not write time). Are you 
> continually inserting new values into "data"?
> 
> 
> On Wed, Oct 12, 2016 at 1:56 PM,   wrote:
> On Wednesday, October 12, 2016 at 3:41:20 PM UTC-4, Sean Beckett wrote:
> 
> > To clarify, SELECT * FROM data.copy expands out to "fetch all fields from 
> > the retention policy named 'data', and the measurement named 'copy'." 
> > InfluxDB supports [<database>.[<retention_policy>].]<measurement> in the 
> > FROM clause, so if there's a period, it is separating the DB or RP from the 
> > measurement.
> 
> >
> 
> >
> 
> > Using double-quotes forces the query interpreter to parse that as a single 
> > identifier, and thus just the measurement name.
> 
> >
> 
> >
> 
> > On Wed, Oct 12, 2016 at 1:29 PM, Sean Beckett  wrote:
> 
> >
> 
> > > After creating this, how do I see the results saved in new measurement 
> > >data.copy? 
> 
> >
> 
> >
> 
> > SELECT * FROM "data.copy"
> 
> >
> 
> >
> 
> >
> 
> >
> 
> > On Wed, Oct 12, 2016 at 8:20 AM,   wrote:
> 
> > I have a measurement that has the following fields in InfluxDB. The 
> > measurement name is data.
> 
> >
> 
> >
> 
> >
> 
> > Measurement name: data
> 
> >
> 
> > Time                    Device          Interface    Metric    Value
> >
> > 2016-10-11T19:00:00Z    device1_name    int_name     In_bits   10
> >
> > 2016-10-11T19:00:00Z    device2_name    int_name     In_bits   5
> 
> >
> 
> > .
> 
> >
> 
> > .
> 
> >
> 
> > .
> 
> >
> 
> >
> 
> >
> 
> > I have created a continuous query as follows:
> 
> >
> 
> >
> 
> >
> 
> > CREATE CONTINUOUS QUERY "test_query" ON "db_name" BEGIN SELECT sum("value") 
> > as Sumin INTO "data.copy" FROM "data" where metric = 'In_bits' GROUP BY 
> > time(15m), device END
> 
> >
> 
> >
> 
> >
> 
> > After creating this, how do I see the results saved in new measurement 
> > data.copy? After creating this query, I am not able to find the new 
> > measurement that has been created. If I'm doing something wrong, I would 
> > like to get more input on this matter. Thanks!
> 
> >
> 
> >
> 
> >
> 
> > --
> 
> >
> 
> > Remember to include the version number!
> 
> >
> 
> > ---
> 
> >
> 
> > You received this message because you are subscribed to the Google Groups 
> > "InfluxData" group.
> 
> >
> 
> > To unsubscribe from this group and stop receiving emails from it, send an 
> > email to influxdb+u...@googlegroups.com.
> 
> >
> 
> > To post to this group, send email to infl...@googlegroups.com.
> 
> >
> 
> > Visit this group at https://groups.google.com/group/influxdb.
> 
> >
> 
> > To view this discussion on the web visit 
> > https://groups.google.com/d/msgid/influxdb/8dd70a88-f5ae-43ed-8f21-083b1aedff7d%40googlegroups.com.
> 
> >
> 
> > For more options, visit https://groups.google.com/d/optout.
> 
> >
> 
> >
> 
> >
> 
> >
> 
> >
> 
> > --
> 
> >
> 
> >
> 
> > Sean Beckett
> 
> > Director of Support and Professional Services
> 
> > InfluxDB
> 
> >
> 
> >
> 
> >
> 
> >
> 
> >
> 
> > --
> 
> >
> 
> >
> 
> > Sean Beckett
> 
> > Director of Support and Professional Services
> 
> > InfluxDB
> 
> 
> 
> Hi,
> 
> Thank you for that input. The version number is 1.0.0
> 
> And I have modified the continuous query to put the results into a new 
> measurement - "data_copy"
> 
> If I do a "select * from data_copy" it shows - "success! (no results to 
> display)"
> 
> If I do ""show measurements" I am unable to see "data_copy"
> 
> If I just run the select statement, it does return results. However, for the 
> continuous query, it doesn't put it into a new measurement which is what I 
> want it to do.
> 
> 
> 
> --
> 
> Remember to include the version number!
> 
> ---
> 
> You received this message because you are subscribed to the Google Groups 
> "InfluxData" group.
> 
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to influxdb+u...@googlegroups.com.
> 
> To post to this group, send email to infl...@googlegroups.com.
> 
> Visit this group at https://groups.google.com/group/influxdb.
> 
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/influxdb/e58f7c72-f7c2-4a01-a760-520ec13f9e98%40googlegroups.com.
> 
> 
> 
> For more options, visit https://groups.google.com/d/optout.
> 
> 
> 
> 
> 
> -- 
> 
> 
> Sean Beckett
> Director of Support and Professional Services
> InfluxDB

Yes, I have a cron job that runs the python script that posts new data every 5 
minutes. 

-- 
Remember to include the version number!
--- 
You received this message because you are subscribed to the Google Groups 
"InfluxData" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to influxdb+unsubscr...@googlegroups.com.
To post to this group, send email to influxdb@googlegroups.com.
Visit this group at https://groups.google.com/group/influxdb.
To view this discussion on the web 

Re: [influxdb] Not able to view new measurement after creating continuous query

2016-10-12 Thread Sean Beckett
CQs only operate on recent data (by timestamp, not write time). Are you
continually inserting new values into "data"?
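
For intervals older than the CQ's resample window, one option is a one-off
backfill with an INTO query. A rough sketch using the names from this thread;
the time bounds are placeholders and should be adjusted to cover the data you
want copied:

> SELECT sum("value") AS Sumin INTO "data_copy" FROM "data" WHERE metric = 'In_bits' AND time >= '2016-10-01T00:00:00Z' AND time < now() GROUP BY time(15m), device

The CQ then only needs to keep up with newly arriving points.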

On Wed, Oct 12, 2016 at 1:56 PM,  wrote:

> On Wednesday, October 12, 2016 at 3:41:20 PM UTC-4, Sean Beckett wrote:
> > To clarify, SELECT * FROM data.copy expands out to "fetch all fields
> from the retention policy named 'data', and the measurement named 'copy'."
> InfluxDB supports [<database>.[<retention_policy>].]<measurement> in the
> FROM clause, so if there's a period, it is separating the DB or RP from the
> measurement.
> >
> >
> > Using double-quotes forces the query interpreter to parse that as a
> single identifier, and thus just the measurement name.
> >
> >
> > On Wed, Oct 12, 2016 at 1:29 PM, Sean Beckett 
> wrote:
> >
> > > After creating this, how do I see the results saved in new measurement
> data.copy?
> >
> >
> > SELECT * FROM "data.copy"
> >
> >
> >
> >
> > On Wed, Oct 12, 2016 at 8:20 AM,   wrote:
> > I have a measurement that has the following fields in InfluxDB. The
> measurement name is data.
> >
> >
> >
> > Measurement name: data
> >
> > Time                    Device          Interface    Metric     Value
> >
> > 2016-10-11T19:00:00Z    device1_name    int_name     In_bits    10
> >
> > 2016-10-11T19:00:00Z    device2_name    int_name     In_bits    5
> >
> > .
> >
> > .
> >
> > .
> >
> >
> >
> > I have created a continuous query as follows:
> >
> >
> >
> > CREATE CONTINUOUS QUERY "test_query" ON "db_name" BEGIN SELECT
> sum("value") as Sumin INTO "data.copy" FROM "data" where metric = 'In_bits'
> GROUP BY time(15m), device END
> >
> >
> >
> > After creating this, how do I see the results saved in new measurement
> data.copy? After creating this query, I am not able to find the new
> measurement that has been created. If I'm doing something wrong, I would
> like to get more input on this matter. Thanks!
> >
> >
> >
> > --
> >
> > Remember to include the version number!
> >
> > ---
> >
> > You received this message because you are subscribed to the Google
> Groups "InfluxData" group.
> >
> > To unsubscribe from this group and stop receiving emails from it, send
> an email to influxdb+u...@googlegroups.com.
> >
> > To post to this group, send email to infl...@googlegroups.com.
> >
> > Visit this group at https://groups.google.com/group/influxdb.
> >
> > To view this discussion on the web visit https://groups.google.com/d/
> msgid/influxdb/8dd70a88-f5ae-43ed-8f21-083b1aedff7d%40googlegroups.com.
> >
> > For more options, visit https://groups.google.com/d/optout.
> >
> >
> >
> >
> >
> > --
> >
> >
> > Sean Beckett
> > Director of Support and Professional Services
> > InfluxDB
> >
> >
> >
> >
> >
> > --
> >
> >
> > Sean Beckett
> > Director of Support and Professional Services
> > InfluxDB
>
> Hi,
> Thank you for that input. The version number is 1.0.0
> And I have modified the continuous query to put the results into a new
> measurement - "data_copy"
> If I do a "select * from data_copy" it shows - "success! (no results to
> display)"
> If I do ""show measurements" I am unable to see "data_copy"
> If I just run the select statement, it does return results. However, for
> the continuous query, it doesn't put it into a new measurement which is
> what I want it to do.
>
> --
> Remember to include the version number!
> ---
> You received this message because you are subscribed to the Google Groups
> "InfluxData" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to influxdb+unsubscr...@googlegroups.com.
> To post to this group, send email to influxdb@googlegroups.com.
> Visit this group at https://groups.google.com/group/influxdb.
> To view this discussion on the web visit https://groups.google.com/d/
> msgid/influxdb/e58f7c72-f7c2-4a01-a760-520ec13f9e98%40googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



-- 
Sean Beckett
Director of Support and Professional Services
InfluxDB

-- 
Remember to include the version number!
--- 
You received this message because you are subscribed to the Google Groups 
"InfluxData" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to influxdb+unsubscr...@googlegroups.com.
To post to this group, send email to influxdb@googlegroups.com.
Visit this group at https://groups.google.com/group/influxdb.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/influxdb/CALGqCvN1RGnbUaggk8AafV7GBmveXcXH4ZYZ6Gy4-baffk3PRw%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: [influxdb] Not able to view new measurement after creating continuous query

2016-10-12 Thread tracyann . monteiro
On Wednesday, October 12, 2016 at 3:41:20 PM UTC-4, Sean Beckett wrote:
> To clarify, SELECT * FROM data.copy expands out to "fetch all fields from the 
> retention policy named 'data', and the measurement named 'copy'." InfluxDB 
> supports [<database>.[<retention_policy>].]<measurement> in the FROM clause, 
> so if there's a period, it is separating the DB or RP from the measurement.
> 
> 
> Using double-quotes forces the query interpreter to parse that as a single 
> identifier, and thus just the measurement name.
> 
> 
> On Wed, Oct 12, 2016 at 1:29 PM, Sean Beckett  wrote:
> 
> > After creating this, how do I see the results saved in new measurement 
> >data.copy? 
> 
> 
> SELECT * FROM "data.copy"
> 
> 
> 
> 
> On Wed, Oct 12, 2016 at 8:20 AM,   wrote:
> I have a measurement that has the following fields in InfluxDB. The 
> measurement name is data.
> 
> 
> 
> Measurement name: data
> 
> Time                    Device          Interface    Metric    Value
> 
> 2016-10-11T19:00:00Z    device1_name    int_name    In_bits    10
> 
> 2016-10-11T19:00:00Z    device2_name    int_name    In_bits     5
> 
> .
> 
> .
> 
> .
> 
> 
> 
> I have created a continuous query as follows:
> 
> 
> 
> CREATE CONTINUOUS QUERY "test_query" ON "db_name" BEGIN SELECT sum("value") 
> as Sumin INTO "data.copy" FROM "data" where metric = 'In_bits' GROUP BY 
> time(15m), device END
> 
> 
> 
> After creating this, how do I see the results saved in new measurement 
> data.copy? After creating this query, I am not able to find the new 
> measurement that has been created. If I'm doing something wrong, I would like 
> to get more input on this matter. Thanks!
> 
> 
> 
> --
> 
> Remember to include the version number!
> 
> ---
> 
> You received this message because you are subscribed to the Google Groups 
> "InfluxData" group.
> 
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to influxdb+u...@googlegroups.com.
> 
> To post to this group, send email to infl...@googlegroups.com.
> 
> Visit this group at https://groups.google.com/group/influxdb.
> 
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/influxdb/8dd70a88-f5ae-43ed-8f21-083b1aedff7d%40googlegroups.com.
> 
> For more options, visit https://groups.google.com/d/optout.
> 
> 
> 
> 
> 
> -- 
> 
> 
> Sean Beckett
> Director of Support and Professional Services
> InfluxDB
> 
> 
> 
> 
> 
> -- 
> 
> 
> Sean Beckett
> Director of Support and Professional Services
> InfluxDB

Hi,
Thank you for that input. The version number is 1.0.0 
And I have modified the continuous query to put the results into a new 
measurement - "data_copy"
If I do a "select * from data_copy" it shows - "success! (no results to 
display)"
If I do ""show measurements" I am unable to see "data_copy"
If I just run the select statement, it does return results. However, for the 
continuous query, it doesn't put it into a new measurement which is what I want 
it to do. 

-- 
Remember to include the version number!
--- 
You received this message because you are subscribed to the Google Groups 
"InfluxData" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to influxdb+unsubscr...@googlegroups.com.
To post to this group, send email to influxdb@googlegroups.com.
Visit this group at https://groups.google.com/group/influxdb.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/influxdb/e58f7c72-f7c2-4a01-a760-520ec13f9e98%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [influxdb] Not able to view new measurement after creating continuous query

2016-10-12 Thread Sean Beckett
To clarify, SELECT * FROM data.copy expands out to "fetch all fields from
the retention policy named 'data', and the measurement named 'copy'."
InfluxDB supports [<database>.[<retention_policy>].]<measurement> in the
FROM clause, so if there's a period, it is separating the DB or RP from the
measurement.

Using double-quotes forces the query interpreter to parse that as a single
identifier, and thus just the measurement name.
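
To make the parsing difference concrete, a small sketch of the three forms
(db_name and the default autogen retention policy are assumed names here):

> SELECT * FROM "data.copy"
> SELECT * FROM data.copy
> SELECT * FROM "db_name"."autogen"."data.copy"

The first reads a single measurement literally named data.copy, the second is
parsed as retention policy data and measurement copy, and the third spells out
the database, retention policy, and measurement explicitly.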

On Wed, Oct 12, 2016 at 1:29 PM, Sean Beckett  wrote:

> > After creating this, how do I see the results saved in new measurement
> data.copy?
>
> SELECT * FROM "data.copy"
>
> On Wed, Oct 12, 2016 at 8:20 AM,  wrote:
>
>> I have a measurement that has the following fields in InfluxDB. The
>> measurement name is data.
>>
>> Measurement name: data
>> Time                    Device          Interface    Metric     Value
>> 2016-10-11T19:00:00Z    device1_name    int_name     In_bits    10
>> 2016-10-11T19:00:00Z    device2_name    int_name     In_bits    5
>> .
>> .
>> .
>>
>> I have created a continuous query as follows:
>>
>> CREATE CONTINUOUS QUERY "test_query" ON "db_name" BEGIN SELECT
>> sum("value") as Sumin INTO "data.copy" FROM "data" where metric = 'In_bits'
>> GROUP BY time(15m), device END
>>
>> After creating this, how do I see the results saved in new measurement
>> data.copy? After creating this query, I am not able to find the new
>> measurement that has been created. If I'm doing something wrong, I would
>> like to get more input on this matter. Thanks!
>>
>> --
>> Remember to include the version number!
>> ---
>> You received this message because you are subscribed to the Google Groups
>> "InfluxData" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to influxdb+unsubscr...@googlegroups.com.
>> To post to this group, send email to influxdb@googlegroups.com.
>> Visit this group at https://groups.google.com/group/influxdb.
>> To view this discussion on the web visit https://groups.google.com/d/ms
>> gid/influxdb/8dd70a88-f5ae-43ed-8f21-083b1aedff7d%40googlegroups.com.
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>
>
> --
> Sean Beckett
> Director of Support and Professional Services
> InfluxDB
>



-- 
Sean Beckett
Director of Support and Professional Services
InfluxDB

-- 
Remember to include the version number!
--- 
You received this message because you are subscribed to the Google Groups 
"InfluxData" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to influxdb+unsubscr...@googlegroups.com.
To post to this group, send email to influxdb@googlegroups.com.
Visit this group at https://groups.google.com/group/influxdb.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/influxdb/CALGqCvOQN-ym%3D0YUfnE2fw9B-JQNTLR2ZZa2z0Gr0eyEH3%3DBQw%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: [influxdb] Not able to view new measurement after creating continuous query

2016-10-12 Thread Sean Beckett
> After creating this, how do I see the results saved in new measurement
data.copy?

SELECT * FROM "data.copy"

On Wed, Oct 12, 2016 at 8:20 AM,  wrote:

> I have a measurement that has the following fields in InfluxDB. The
> measurement name is data.
>
> Measurement name: data
> Time                    Device          Interface    Metric     Value
> 2016-10-11T19:00:00Z    device1_name    int_name     In_bits    10
> 2016-10-11T19:00:00Z    device2_name    int_name     In_bits    5
> .
> .
> .
>
> I have created a continuous query as follows:
>
> CREATE CONTINUOUS QUERY "test_query" ON "db_name" BEGIN SELECT
> sum("value") as Sumin INTO "data.copy" FROM "data" where metric = 'In_bits'
> GROUP BY time(15m), device END
>
> After creating this, how do I see the results saved in new measurement
> data.copy? After creating this query, I am not able to find the new
> measurement that has been created. If I'm doing something wrong, I would
> like to get more input on this matter. Thanks!
>
> --
> Remember to include the version number!
> ---
> You received this message because you are subscribed to the Google Groups
> "InfluxData" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to influxdb+unsubscr...@googlegroups.com.
> To post to this group, send email to influxdb@googlegroups.com.
> Visit this group at https://groups.google.com/group/influxdb.
> To view this discussion on the web visit https://groups.google.com/d/
> msgid/influxdb/8dd70a88-f5ae-43ed-8f21-083b1aedff7d%40googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



-- 
Sean Beckett
Director of Support and Professional Services
InfluxDB

-- 
Remember to include the version number!
--- 
You received this message because you are subscribed to the Google Groups 
"InfluxData" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to influxdb+unsubscr...@googlegroups.com.
To post to this group, send email to influxdb@googlegroups.com.
Visit this group at https://groups.google.com/group/influxdb.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/influxdb/CALGqCvNNUQ0HFepqkV1fU53m3JSSRgk5YeC5mrHuhkR2Qbitmg%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: [influxdb] Internal server error, timeout and unusable server after large imports

2016-10-12 Thread Sean Beckett
Tanya, what range of time does your data cover? What are the retention
policies on the database?

On Tue, Oct 11, 2016 at 11:14 PM, Tanya Unterberger <
tanya.unterber...@gmail.com> wrote:

> Hi Sean,
>
> 1. Initially I killed the process
> 2. At some point I restarted influxdb service
> 3. Error logs show no errors
> 4. I rebuilt the server, installed the latest rpm. Reimported the data via
> scripts. Data goes in, but the server is unusable. Looks like indexing
> might be stuffed. The size of the data in that database is 38M. Total size
> of /var/lib/influxdb/data/ 273M
> 5. CPU went berserk and doesn't come down
> 6. A query like select count(blah) to the measurement that was batch
> inserted (10k records at a time) is unusable and times out
> 7. I need to import around 15 million records. How should I throttle that?
>
> At the moment I am pulling my hair out (not a pretty sight)
>
> Thanks a lot!
> Tanya
>
> On 12 October 2016 at 06:11, Sean Beckett  wrote:
>
>>
>>
>> On Tue, Oct 11, 2016 at 12:11 AM,  wrote:
>>
>>> Hi,
>>>
>>> It seems that the old issue might have surfaced again (#3349) in v1.0.
>>>
>>> I tried to insert a large number of records (3913595) via a script,
>>> inserting 10,000 rows at a time.
>>>
>>> After a while I received
>>>
>>> HTTP/1.1 500 Internal Server Error
>>> Content-Type: application/json
>>> Request-Id: ac8ebbbe-8f70-11e6-8ce7-
>>> X-Influxdb-Version: 1.0.0
>>> Date: Tue, 11 Oct 2016 05:12:02 GMT
>>> Content-Length: 20
>>>
>>> {"error":"timeout"}
>>> HTTP/1.1 100 Continue
>>>
>>> I killed the process, after which the whole box became pretty much
>>> unresponsive.
>>>
>>
>> Killed the InfluxDB process, or the batch writing script process?
>>
>>
>>>
>>> There is nothing in the logs (i.e. sudo ls /var/log/influxdb/ gives me
>>> nothing) although the setting for http logging is true:
>>>
>>
>> systemd OSes put the logs in a new place (yay!?). See
>> http://docs.influxdata.com/influxdb/v1.0/administration/logs/#systemd
>> for how to read the logs.
>>
>>
>>>
>>> [http]
>>>   enabled = true
>>>   bind-address = ":8086"
>>>   auth-enabled = true
>>>   log-enabled = true
>>>
>>> I tried to restart influx, but got the following error:
>>>
>>> Failed to connect to http://localhost:8086
>>> Please check your connection settings and ensure 'influxd' is running.
>>>
>>
>> The `influx` console is just a fancy wrapper on the API. That error
>> doesn't mean much except that the HTTP listener in InfluxDB is not yet up
>> and running.
>>
>>
>>>
>>> Although I can see that influxd is up and running:
>>>
>>> > systemctl | grep influx
>>> influxdb.service
>>>   loaded active running   InfluxDB is an open-source,
>>> distributed, time series database
>>>
>>> What do I do now?
>>>
>>
>> Check the logs as referenced above.
>>
>> The non-responsiveness on startup isn't surprising. It sounds like the
>> system was overwhelmed with writes, which means that the WAL would have
>> many points cached, waiting to be flushed to disk. On restart, InfluxDB
>> won't accept new writes or queries until the cached ones in the WAL have
>> persisted. For this reason, the HTTP listener is off until the WAL is
>> flushed.
>>
>>
>>>
>>> I tried the same import over the weekend, then the script timeout
>>> happened eventually but the result was the same unresponsive, unusable
>>> server. We rebuilt the box and started again.
>>>
>>
>> It sounds like the box is just overwhelmed. Did you get backoff messages
>> from the writes before the crash? What are the machine specs?
>>
>>
>>
>>>
>>> Perhaps it is worthwhile mentioning that the same measurement already
>>> contained about 9 million records. Some of these records had the same
>>> timestamp as the ones I tried to import, i.e. they should have been merged.
>>>
>>
>> Overwriting points is much much more expensive than posting new points.
>> Each overwritten point triggers a tombstone record which must later be
>> processed. This can trigger frequent compactions of the TSM files. With a
>> high write load and frequent compactions, the system would encounter
>> significant CPU pressure.
>>
>>
>>>
>>> Interestingly enough the same amount of data was fine when I forgot to
>>> add precision in ms, i.e. all records were imported as nanoseconds, but in
>>> fact they "lacked" 6 zeroes.
>>>
>>
>> That would mean all points are going to the same shard. It is more
>> resource intensive to load points across a wide range of time, since more
>> shard files are involved. InfluxDB does best with sequential
>> chronologically ordered unique points from the very recent past. The more
>> the write operation differs from that, the lower the throughput.
>>
>>
>>>
>>> Please advise what kind of action I can take.
>>>
>>
>> Look in the logs for errors. Throttle the writes. Don't overwrite more
>> points than you have to.
>>
>>
>>>
>>> Thanks a lot!
>>> Tanya
>>>
>>> --
>>> Remember to include the InfluxDB version 

Re: [influxdb] There was an error writing history file: open : The system cannot find the file specified.

2016-10-12 Thread Sean Beckett
Please open an issue on the Docs repo for the installation update, thanks!

On Wed, Oct 12, 2016 at 10:17 AM,  wrote:

> Got it to work.  The command line didn't work for me.  I tried on 2
> different PCs.  However, it works via the GUI: go to System Properties ->
> Environment Variables -> add a User Variable for my profile, name =
> HOME, value = E:\influxdb-1.0.1_windows_amd64\influxdb-1.0.1-1 (or
> whatever your path to the unzipped folder is).  A restart is required; it
> works after that.
>
> Sean, it would be great if you could update the website so others won't run
> into the same thing.  Thanks all.
>
> --
> Remember to include the version number!
> ---
> You received this message because you are subscribed to the Google Groups
> "InfluxData" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to influxdb+unsubscr...@googlegroups.com.
> To post to this group, send email to influxdb@googlegroups.com.
> Visit this group at https://groups.google.com/group/influxdb.
> To view this discussion on the web visit https://groups.google.com/d/
> msgid/influxdb/98db3e94-ca79-44b6-9da1-8617611ffa81%40googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



-- 
Sean Beckett
Director of Support and Professional Services
InfluxDB

-- 
Remember to include the version number!
--- 
You received this message because you are subscribed to the Google Groups 
"InfluxData" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to influxdb+unsubscr...@googlegroups.com.
To post to this group, send email to influxdb@googlegroups.com.
Visit this group at https://groups.google.com/group/influxdb.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/influxdb/CALGqCvOudXGAgPAhQ5D7UiT-iU-YVd3bZ8Yw_Rk7hnAdEhcCsw%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: [influxdb] Continuous queries with latency?

2016-10-12 Thread Sean Beckett
You want the FOR clause, which causes the CQ to recalculate prior
intervals and pick up any newly written points in those intervals.
https://docs.influxdata.com/influxdb/v1.0/query_language/continuous_queries/#advanced-syntax.
E.g.

CREATE CONTINUOUS QUERY foo ON db
RESAMPLE EVERY 20s FOR 2m
BEGIN
  SELECT median(aa) AS aa INTO foomeas FROM barmeas GROUP BY time(20s),*
END

With FOR 2m, each execution recomputes the 20s buckets from the last two
minutes, so points that arrive up to a minute late are still aggregated.
Note that the GROUP BY time offset just moves the beginnings of the
buckets; delaying the query execution time is only a side effect. I've
removed it from the query.

On Wed, Oct 12, 2016 at 10:43 AM, Michael C  wrote:

> I was wondering if someone out there could offer some advice on setting up
> a CQ for my system.  It would appear CQs might not be powerful enough to
> accomplish what I'm after, but I'm still new-ish to Influx, so maybe I'm wrong.
>
>
> I have a system where multiple nodes out in the world transmit packets of
> data once per minute, each containing 60 samples, one for each second.  The
> data is received and stored in InfluxDB.  Note that this packet of 60
> samples from a given node could be received at any time over the minute.
> Thus, from Influx's perspective there could be up to 1 minute of latency
> before receiving the next 60 seconds of data with respect to the start of
> each new minute.
>
> With this setup, I would like to have a CQ perform a 20-second median on
> the second-by-second data.
>
> However, it would seem that the latency of my system poses a problem for CQ
> usage. CQs need to execute within the vicinity of now() and, for my case,
> would run on a 20-second clock boundary.  Given my setup, there would be
> many cases where the data simply hasn't arrived yet when the query runs,
> due to my 1 minute of latency.  I tried adding an offset in the GROUP BY
> time(), but this doesn't seem to work (as expected after reading the docs,
> https://docs.influxdata.com/influxdb/v1.0/query_language/continuous_queries/)
>
>
> Below is an example of one of the many queries I have tried:
>
> CREATE CONTINUOUS QUERY foo ON db RESAMPLE EVERY 20s BEGIN SELECT
> median(aa) AS aa INTO foomeas FROM barmeas GROUP BY time(20s, -60s),*  END
>
>
> Is there any way to make CQs work with my system given the latency?
>
> I suspect it would work if I were computing 5-minute medians, as I could
> then offset by -1m, but that is not what I'm after.  Or maybe I'm confused.
>
> Any advice would be much appreciated!
>
> Mike
>



-- 
Sean Beckett
Director of Support and Professional Services
InfluxDB



[influxdb] Continuous queries with latency?

2016-10-12 Thread Michael C
I was wondering if someone out there could offer some advice on setting up
a CQ for my system.  It would appear CQs might not be powerful enough to
accomplish what I'm after, but I'm still new-ish to Influx, so maybe I'm wrong.


I have a system where multiple nodes out in the world transmit packets of
data once per minute, each containing 60 samples, one for each second.  The
data is received and stored in InfluxDB.  Note that this packet of 60
samples from a given node could be received at any time over the minute.
Thus, from Influx's perspective there could be up to 1 minute of latency
before receiving the next 60 seconds of data with respect to the start of
each new minute.

With this setup, I would like to have a CQ perform a 20-second median on
the second-by-second data.

However, it would seem that the latency of my system poses a problem for CQ
usage. CQs need to execute within the vicinity of now() and, for my case,
would run on a 20-second clock boundary.  Given my setup, there would be
many cases where the data simply hasn't arrived yet when the query runs,
due to my 1 minute of latency.  I tried adding an offset in the GROUP BY
time(), but this doesn't seem to work (as expected after reading the docs,
https://docs.influxdata.com/influxdb/v1.0/query_language/continuous_queries/)


Below is an example of one of the many queries I have tried:

CREATE CONTINUOUS QUERY foo ON db RESAMPLE EVERY 20s BEGIN SELECT
median(aa) AS aa INTO foomeas FROM barmeas GROUP BY time(20s, -60s),*  END


Is there any way to make CQs work with my system given the latency?

I suspect it would work if I were computing 5-minute medians, as I could
then offset by -1m, but that is not what I'm after.  Or maybe I'm confused.

Any advice would be much appreciated!

Mike



Re: [influxdb] Cannot find measurement after using INTO clause

2016-10-12 Thread Sean Beckett
This sounds like a syntax issue. Please give the actual queries you are
running as well as any output.

On Tue, Oct 11, 2016 at 3:08 PM,  wrote:

> I want to record my select statement by saving it into another
> measurement. My query is as listed below:
>
> select sum(value) as Sum into new_measurement from old_measurement where
> tagkey1 = 'tagValue1' group by tagKey2
>
> After I run this query, it says that it has written the records, but when I
> query the DB for that measurement, it doesn't show up. Am I doing something
> wrong? If so, how can I save the results of my SELECT statement?
>



-- 
Sean Beckett
Director of Support and Professional Services
InfluxDB



[influxdb] Re: Kapacitor and external InfluxDB.

2016-10-12 Thread nathaniel
What is the output of `kapacitor show task_name` for your task?

On Wednesday, October 12, 2016 at 9:49:44 AM UTC-6, codep...@gmail.com 
wrote:
>
> And kapacitor list recordings produces:
> ID                                    Type    Status    Size  Date
> f6dc1d97-6bba-4beb-9ba5-29e52c10cd4b  stream  finished  23 B  12 Oct 16 18:48 MSK
> c442a78a-9267-448e-a21a-5073895b2f42  stream  finished  23 B  12 Oct 16 18:37 MSK
> 336fef32-e386-4b33-8b6f-338e47482176  stream  finished  23 B  12 Oct 16 18:33 MSK
> b18d463a-7aff-491f-8fee-308cc5ea8b11  stream  finished  23 B  12 Oct 16 18:32 MSK
> b7e933e9-41dd-41ac-bfb4-082bb3fa4f29  stream  finished  23 B  12 Oct 16 15:37 MSK



Re: [influxdb] There was an error writing history file: open : The system cannot find the file specified.

2016-10-12 Thread everyonelovesdance
I did it on 2 different computers.  Go to a DOS prompt, run that command, start
it up again, same error; restart the computer, same error.  Since I ran into
this, can Sean from InfluxDB repro it?

> show databases
name: databases
---
name
_internal

There was an error writing history file: open : The system cannot find the file
specified.
>



[influxdb] Re: Kapacitor and external InfluxDB.

2016-10-12 Thread codeprojtwo
Hmm - now it no longer freezes, but doesn't record anything either. 

Current script is:


stream
// Select just the cpu measurement from our example database.
|from()
.database('cp')
.measurement('user_cp')
|alert()
// Whenever we get an alert write it to a file.
.info(lambda: "value" > 500.0)
.log('/tmp/alerts.log')



:~$ kapacitor stats ingress
Database    Retention Policy  Measurement      Points Received
cp          default           fm_cp            2426
_internal   monitor           tsm1_cache       35092
_kapacitor  autogen           kapacitor        1132
_internal   monitor           udp              1132
cp          default           user_cp          1710771
_kapacitor  autogen           ingress          21504
_kapacitor  autogen           edges            2266
_internal   monitor           subscriber       6792
_internal   monitor           database         5660
_internal   monitor           httpd            1132
_internal   monitor           shard            35092
_internal   monitor           tsm1_filestore   35092
_internal   monitor           queryExecutor    1132
_kapacitor  autogen           runtime          1132
_internal   monitor           cq               1132
_internal   monitor           write            1132
_internal   monitor           tsm1_wal         35092
_internal   monitor           tsm1_engine      35092
_internal   monitor           runtime          1132

and SHOW SUBSCRIPTIONS produces:

{
"results": [{
"series": [{
"name": "cp",
"columns": ["retention_policy", "name", "mode", "destinations"],
"values": [
["default", "kapacitor-6ad61546-9c23-4936-83b8-cce438e2de91", 
"ANY", ["http://vvm-dev35.internal:9092;]]
]
}, {
"name": "_internal",
"columns": ["retention_policy", "name", "mode", "destinations"],
"values": [
["monitor", "kapacitor-6ad61546-9c23-4936-83b8-cce438e2de91", 
"ANY", ["http://vm-dev35.internal:9092;]]
]
}, {
"name": "cp-test",
"columns": ["retention_policy", "name", "mode", "destinations"],
"values": [
["default", "kapacitor-6ad61546-9c23-4936-83b8-cce438e2de91", 
"ANY", ["http://vm-dev35.internal:9092;]]
]
}, {
"name": "cp-tst",
"columns": ["retention_policy", "name", "mode", "destinations"],
"values": [
["default", "kapacitor-6ad61546-9c23-4936-83b8-cce438e2de91", 
"ANY", ["http://vm-dev35.internal:9092;]]
]
}, {
"name": "everest4",
"columns": ["retention_policy", "name", "mode", "destinations"],
"values": [
["autogen", "kapacitor-6ad61546-9c23-4936-83b8-cce438e2de91", 
"ANY", ["http://vm-dev35.internal:9092;]]
]
}]
}]
}


> Kapacitor isn't receiving any data from InfluxDB. What is the output of
> 
> 
> show subscriptions
> 
> 
> and 
> 
> 
> kapacitor stats ingress
> 
> 
> 
> 
> On Tuesday, October 11, 2016 at 11:29:26 AM UTC-6, codep...@gmail.com 
> wrote: I'm having a hard time setting up a record. Currently kapacitor.conf 
> contains:
> [[influxdb]]
>   enabled = true
>   name = "cp"
>   default = true
>   urls = ["http://influx-cp.internal:8086"]
>   username = "root"
>   password = "root"
>   ssl-ca = ""
>   ssl-cert = ""
>   ssl-key = ""
>   insecure-skip-verify = false
>   timeout = "0"
>   disable-subscriptions = false
>   subscription-protocol = "http"
>   kapacitor-hostname = ""
>   http-port = 0
>   udp-bind = ""
>   udp-buffer = 1000
>   udp-read-buffer = 0
>   startup-timeout = "5m0s"
>   subscriptions-sync-interval = "1m0s"
>   [influxdb.subscriptions]
>   

[influxdb] Reg usage of telegraf for Docker Stats within Container

2016-10-12 Thread pckeyan
Hi,

I am using Telegraf in my current production environment to read JVM stats
from Jolokia and save them in InfluxDB. In the same way, I would like to use
Telegraf to post Docker stats to InfluxDB. We use Docker to deploy our APIs,
and Telegraf is installed within the Docker container. I would prefer to
collect the metrics with the Telegraf instance inside the container rather
than installing one on the host. Please advise.

Telegraf version is 0.13.

Regards
Karthik
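
A rough sketch of what that can look like, assuming a Telegraf version that
ships the docker input plugin and that the host's Docker socket is mounted into
the API container (image and path names below are only illustrative):

  # telegraf.conf inside the container
  [[inputs.docker]]
    # talk to the host's Docker daemon through the mounted socket
    endpoint = "unix:///var/run/docker.sock"
    # an empty list gathers stats for every container on the host
    container_names = []

  # run the API container with the socket mounted read-only
  docker run -d -v /var/run/docker.sock:/var/run/docker.sock:ro my-api-image

Note that Telegraf then reports stats for all containers on that host, not just
its own, since it is reading from the host daemon.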



[influxdb] Not able to view new measurement after creating continuous query

2016-10-12 Thread tracyann . monteiro
I have a measurement that has the following fields in InfluxDB. The measurement 
name is data.

Measurement name: data
Time                  Device        Interface  Metric   Value
2016-10-11T19:00:00Z  device1_name  int_name   In_bits  10
2016-10-11T19:00:00Z  device2_name  int_name   In_bits  5
.
.
.

I have created a continuous query as follows:

CREATE CONTINUOUS QUERY "test_query" ON "db_name" BEGIN SELECT sum("value") as 
Sumin INTO "data.copy" FROM "data" where metric = 'In_bits' GROUP BY time(15m), 
device END

After creating this, how do I see the results saved in the new measurement
data.copy? I am not able to find the new measurement that has been created. If
I'm doing something wrong, I would like to get more input on this. Thanks!
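
If the CQ has fired at least once and the measurement still seems to be
missing, two details are worth checking, assuming stock InfluxDB 1.0 behaviour:
a CQ only computes the most recent interval(s) as of now(), so rows that were
already in the database are not backfilled and need a one-off SELECT ... INTO;
and because the target name contains a dot, it must be double-quoted when
queried, otherwise InfluxQL reads data.copy as retention policy "data" and
measurement "copy". For example (the time bound below is illustrative):

SHOW MEASUREMENTS
SELECT * FROM "data.copy" LIMIT 5
SELECT sum("value") AS Sumin INTO "data.copy" FROM "data" WHERE metric = 'In_bits' AND time >= '2016-10-11T00:00:00Z' GROUP BY time(15m), device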



[influxdb] Re: Kapacitor and external InfluxDB.

2016-10-12 Thread codeprojtwo
On Tuesday, October 11, 2016 at 21:07:28 UTC+3, nath...@influxdb.com wrote:
> Kapacitor isn't receiving any data from InfluxDB. What is the output of
> 
> 
> show subscriptions
> 
> 
> and 
> 
> 
> kapacitor stats ingress
> 
> 
> 
> 
> On Tuesday, October 11, 2016 at 11:29:26 AM UTC-6, codep...@gmail.com 
> wrote: I'm having a hard time setting up a record. Currently kapacitor.conf 
> contains:
> [[influxdb]]
>   enabled = true
>   name = "cp"
>   default = true
>   urls = ["http://influx-cp.internal:8086"]
>   username = "root"
>   password = "root"
>   ssl-ca = ""
>   ssl-cert = ""
>   ssl-key = ""
>   insecure-skip-verify = false
>   timeout = "0"
>   disable-subscriptions = false
>   subscription-protocol = "http"
>   kapacitor-hostname = ""
>   http-port = 0
>   udp-bind = ""
>   udp-buffer = 1000
>   udp-read-buffer = 0
>   startup-timeout = "5m0s"
>   subscriptions-sync-interval = "1m0s"
>   [influxdb.subscriptions]
>   [influxdb.excluded-subscriptions]
>     _kapacitor = ["autogen"] 
> I'm launching it using this: 
> kapacitord -config kapacitor.conf
> And trying to record it using this:
> kapacitor record stream -task cpu_alert -duration 20s
> With basic script:
> stream
>     // Select just the cpu measurement from our example database.
>     |from()
>         .database('cp')
>     |alert()
>         // Whenever we get an alert write it to a file.
>         .log('/tmp/alerts.log')
> As expected - record hangs forever. What I'm trying to find out - is 
> kapacitor receiving any data at all? How to find out, and how to fix?
> The log during record is this:
> [run] 2016/10/11 20:28:08 I! Kapacitor starting, version 1.0.2, branch 
> master, commit 1011dba109bf3d83366c87873ec285c7f9140d34
> [run] 2016/10/11 20:28:08 I! Go version go1.6.3
> [srv] 2016/10/11 20:28:08 I! Kapacitor hostname: cp
> [srv] 2016/10/11 20:28:08 I! ClusterID: 8106cac2-bb07-4f1b-8b36-5f55fead461b 
> ServerID: 2eace6cb-8fb7-485c-9055-41bba6bc06cc
> [task_master:main] 2016/10/11 20:28:08 I! opened
> [task_store] 2016/10/11 20:28:08 W! could not open old boltd for task_store. 
> Not performing migration. Remove the `task_store.dir` configuration to 
> disable migration.
> [stats] 2016/10/11 20:28:08 I! opened service
> [httpd] 2016/10/11 20:28:08 I! Starting HTTP service
> [httpd] 2016/10/11 20:28:08 I! Authentication enabled: false
> [httpd] 2016/10/11 20:28:08 I! Listening on HTTP: [::]:9092
> [run] 2016/10/11 20:28:08 I! Listening for signals
> [httpd] 127.0.0.1 - - [11/Oct/2016:20:28:26 +0300] "POST 
> /kapacitor/v1/recordings/stream HTTP/1.1" 201 213 "-" "KapacitorClient" 
> 1e169f2d-8fd8-11e6-8001- 14495
> [httpd] 127.0.0.1 - - [11/Oct/2016:20:28:26 +0300] "GET 
> /kapacitor/v1/recordings/24fdaef0-71a2-418e-85dc-5ce7d37f4399 HTTP/1.1" 202 
> 213 "-" "KapacitorClient" 1e656681-8fd8-11e6-8002- 1308
> [httpd] 127.0.0.1 - - [11/Oct/2016:20:28:27 +0300] "GET 
> /kapacitor/v1/recordings/24fdaef0-71a2-418e-85dc-5ce7d37f4399 HTTP/1.1" 202 
> 213 "-" "KapacitorClient" 1eb21dfd-8fd8-11e6-8003- 1270
> [httpd] 127.0.0.1 - - [11/Oct/2016:20:28:27 +0300] "GET 
> /kapacitor/v1/recordings/24fdaef0-71a2-418e-85dc-5ce7d37f4399 HTTP/1.1" 202 
> 213 "-" "KapacitorClient" 1efecdcf-8fd8-11e6-8004- 4212
> [httpd] 127.0.0.1 - - [11/Oct/2016:20:28:28 +0300] "GET 
> /kapacitor/v1/recordings/24fdaef0-71a2-418e-85dc-5ce7d37f4399 HTTP/1.1" 202 
> 213 "-" "KapacitorClient" 1f4bfd80-8fd8-11e6-8005- 1458
> [httpd] 127.0.0.1 - - [11/Oct/2016:20:28:28 +0300] "GET 
> /kapacitor/v1/recordings/24fdaef0-71a2-418e-85dc-5ce7d37f4399 HTTP/1.1" 202 
> 213 "-" "KapacitorClient" 1f98b84d-8fd8-11e6-8006- 3302
> [httpd] 127.0.0.1 - - [11/Oct/2016:20:28:29 +0300] "GET 
> /kapacitor/v1/recordings/24fdaef0-71a2-418e-85dc-5ce7d37f4399 HTTP/1.1" 202 
> 213 "-" "KapacitorClient" 1fe5c68e-8fd8-11e6-8007- 992
> [httpd] 127.0.0.1 - - [11/Oct/2016:20:28:29 +0300] "GET 
> /kapacitor/v1/recordings/24fdaef0-71a2-418e-85dc-5ce7d37f4399 HTTP/1.1" 202 
> 213 "-" "KapacitorClient" 2032666d-8fd8-11e6-8008- 3905
> [httpd] 127.0.0.1 - - [11/Oct/2016:20:28:30 +0300] "GET 
> /kapacitor/v1/recordings/24fdaef0-71a2-418e-85dc-5ce7d37f4399 HTTP/1.1" 202 
> 213 "-" "KapacitorClient" 207f7cda-8fd8-11e6-8009- 1025
> [httpd] 127.0.0.1 - - [11/Oct/2016:20:28:30 +0300] "GET 
> /kapacitor/v1/recordings/24fdaef0-71a2-418e-85dc-5ce7d37f4399 HTTP/1.1" 202 
> 213 "-" "KapacitorClient" 20cc2039-8fd8-11e6-800a- 3076
> [httpd] 127.0.0.1 - - [11/Oct/2016:20:28:31 +0300] "GET 
> /kapacitor/v1/recordings/24fdaef0-71a2-418e-85dc-5ce7d37f4399 HTTP/1.1" 202 
> 213 "-" "KapacitorClient" 21192566-8fd8-11e6-800b- 971
> [httpd] 127.0.0.1 - - [11/Oct/2016:20:28:31 +0300] "GET 
> /kapacitor/v1/recordings/24fdaef0-71a2-418e-85dc-5ce7d37f4399 HTTP/1.1" 202 
> 213 "-" "KapacitorClient" 2165bc53-8fd8-11e6-800c- 592
> [httpd] 127.0.0.1 - - [11/Oct/2016:20:28:32 +0300] "GET 

Re: [influxdb] Time-weighted mean?

2016-10-12 Thread deandob
On Friday, September 30, 2016 at 1:14:21 PM UTC+10, Sean Beckett wrote:
> That's not easy with InfluxDB, but you could build a UDF in Kapacitor to 
> handle that.
> 
> 
> On Thu, Sep 29, 2016 at 8:50 PM,   wrote:
> Hi,
> 
> 
> 
> Is there anyway in InfluxDB to calculate a time-weighted mean of measurements 
> that come in at irregular intervals?
> 
> 
> 
> Thanks,
> 
> Kyle
> 
> 
> 
> 
> 
> 
> 
> 
> -- 
> 
> 
> Sean Beckett
> Director of Support and Professional Services
> InfluxDB

I posted a feature request on GitHub for a time-weighted average aggregation -
it is a common use case for IoT sensor scenarios where sensors send data on
change of state rather than at regular intervals. This feature is supported in
OpenTSDB (aggregate interpolation) and should be a feature of any time-series
database pitching for IoT scenarios.

Sean, can you let us know if this is a feature that has a good chance of coming
to InfluxDB in the not-too-distant future? I'm tossing up between OpenTSDB and
InfluxDB, and your solution is well suited to my use case except for the lack
of a time-weighted average. So if it isn't on the radar for InfluxDB, please
let us know. Also, how difficult would it be to code up from a fork for someone
with no Go experience?
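
To make the request concrete, a time-weighted mean weights each reading by how
long it was in effect rather than by how many samples arrived. With made-up
readings of value=10 at t=0s and value=20 at t=50s over a 60-second window:

  time-weighted mean = (10 * 50s + 20 * 10s) / 60s = 700 / 60 ≈ 11.7
  plain mean         = (10 + 20) / 2 = 15

A change-of-state sensor that sits at 10 for most of the window should average
close to 10, which is what the weighted form captures.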



Re: [influxdb] There was an error writing history file: open : The system cannot find the file specified.

2016-10-12 Thread deandob
On Wednesday, October 12, 2016 at 7:13:02 AM UTC+10, everyonel...@gmail.com 
wrote:
> What I did was download the latest zip and unzip it to a local drive.  I
> started it up by double-clicking influxd.exe, with no modifications to the
> influxdb.conf file.  You should be able to reproduce this.  Thanks.

Hi, I'm running on Windows 10 and had this error until I set the environment
variable HOME to the directory that contains the executables, not the full path
of the executable as you have posted.

So try again with 'set home=E:\influxdb-1.0.1_windows_amd64\influxdb-1.0.1-1' 
and it should work for you.
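
Note that set only applies to the current command prompt session; to make the
setting persist across new sessions (which is what the System Properties route
does), setx can be used instead, e.g.:

  setx HOME "E:\influxdb-1.0.1_windows_amd64\influxdb-1.0.1-1"

then open a new command prompt before starting the CLI.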

Hope this helps.



[influxdb] Telegraf Cloudwatch metrics

2016-10-12 Thread kmg
I tried to pull AWS metrics from CloudWatch using Telegraf and write the
output data into InfluxDB.

When I run the test with the sample config, it only shows EC2 stats, nothing
else.

For example, if I set the namespace to "AWS/EC2" and then run the command
below,

]# telegraf -config telegraf.conf -input-filter cloudwatch -test
>cloudwatch_aws_ec2,host=gugan.ipt.local,region=us-east-1,unit=percent 
cpu_utilization_average=0.08,cpu_utilization_maximum=0.16,cpu_utilization_minimum=0,cpu_utilization_sample_count=2,cpu_utilization_sum=0.16
 
14762515800
> 
cloudwatch_aws_ec2,host=gugan.ipt.local,image_id=ami-c481fad3,region=us-east-1,unit=bytes
 
disk_write_bytes_average=0,disk_write_bytes_maximum=0,disk_write_bytes_minimum=0,disk_write_bytes_sample_count=2,disk_write_bytes_sum=0
 
14762515800


If I change the namespace to EBS, it shows no output:
[root@gugan ~]# telegraf -config telegraf.conf -input-filter cloudwatch -test
* Plugin: cloudwatch, Collection 1
* Internal: 1m0s
[root@gugan ~]

Do I need to change anything in AWS or in the Telegraf configuration?
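
In case it helps: the cloudwatch input is configured per namespace, and EBS
basic monitoring only publishes datapoints every 5 minutes, so a period or
interval shorter than that can legitimately come back empty. A sketch along the
lines of the plugin's sample config (values below are illustrative):

  [[inputs.cloudwatch]]
    region = "us-east-1"
    namespace = "AWS/EBS"
    # EBS basic monitoring reports at 5-minute granularity
    period = "5m"
    delay = "5m"
    interval = "5m"

The credentials used also typically need cloudwatch:ListMetrics and
cloudwatch:GetMetricStatistics permissions for the extra namespace to return
anything.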
