Re: [influxdb] Internal server error, timeout and unusable server after large imports

2016-10-13 Thread Tanya Unterberger
Thanks, Sean.

It is good to know what the limitations are. And good that I made a mistake
at the start and we kind of have a workaround...

On 13 October 2016 at 16:24, Sean Beckett  wrote:

> Tonya, when you write the data in ms but don't specify the precision, the
> database interprets those millisecond timestamps as nanoseconds, and all
> the data is written to a single shard covering Jan 1, 1970.
>
>
> > insert msns value=42 1476336190000
> > select * from msns
> name: msns
> --
> time value
> 1476336190000 42
>
> > precision rfc3339
> > select * from msns
> name: msns
> --
> time value
> 1970-01-01T00:24:36.33619Z 42
>
> That's why everything is fast, because all the data is in one shard.
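To see why a millisecond timestamp written without precision collapses into the 1970 shard, here is a quick sketch in Python mimicking the two interpretations (the sample timestamp is illustrative, mid-October 2016 in milliseconds):

```python
from datetime import datetime, timezone

# A millisecond timestamp from mid-October 2016 (illustrative).
ts = 1476336190000

# Interpreted as nanoseconds since the epoch: lands minutes after Jan 1, 1970,
# so every such point falls into the single shard covering that week.
as_ns = datetime.fromtimestamp(ts / 1_000_000_000, tz=timezone.utc)
print(as_ns.isoformat())  # lands in 1970

# Interpreted as milliseconds (what was intended): October 2016.
as_ms = datetime.fromtimestamp(ts / 1_000, tz=timezone.utc)
print(as_ms.isoformat())  # lands in 2016
```

The factor of a million between the two units is what moves the data 46 years.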
>
> On Wed, Oct 12, 2016 at 9:50 PM, Tanya Unterberger <
> tanya.unterber...@gmail.com> wrote:
>
>> Hi Sean,
>>
>> I can reproduce all the CPU issues, slowness, etc. if I try to import the
>> data that I have in milliseconds, specifying precision as milliseconds.
>>
>> If I insert the same data without specifying any precision and query
>> without specifying any precision, the database is lightning fast. The same
>> data.
>>
>> The reason I was adding precision=ms is that I thought it was the right
>> thing to do. The manual advises that Influx stores the data in nanoseconds
>> but recommends inserting at the lowest precision you can. So at some stage I
>> even used hours, inserting the data with precision=h. When Influx tried to
>> convert that data to nanoseconds, index it, etc., it had a hissy fit.
>>
>> Is it a bug, or should the manual state that if you query the data at the
>> same precision as you insert it, you can go with the lowest precision and
>> not specify the precision when inserting?
>>
>> Thanks,
>> Tanya
>>
>> On 13 October 2016 at 10:26, Tanya Unterberger <
>> tanya.unterber...@gmail.com> wrote:
>>
>>> Hi Sean,
>>>
>>> The data is from 1838 to 2016, daily (sparse at times). We need to
>>> retain it, therefore the default policy.
>>>
>>> Thanks,
>>> Tanya
>>>
>>> On 13 October 2016 at 06:26, Sean Beckett  wrote:
>>>
 Tanya, what range of time does your data cover? What are the retention
 policies on the database?

 On Tue, Oct 11, 2016 at 11:14 PM, Tanya Unterberger <
 tanya.unterber...@gmail.com> wrote:

> Hi Sean,
>
> 1. Initially I killed the process
> 2. At some point I restarted influxdb service
> 3. Error logs show no errors
> 4. I rebuilt the server, installed the latest rpm. Reimported the data
> via scripts. Data goes in, but the server is unusable. Looks like indexing
> might be stuffed. The size of the data in that database is 38M. Total size
> of /var/lib/influxdb/data/ 273M
> 5. CPU went berserk and doesn't come down
> 6. A query like select count(blah) to the measurement that was batch
> inserted (10k records at a time) is unusable and times out
> 7. I need to import around 15 million records. How should I throttle
> that?
>
> At the moment I am pulling my hair out (not a pretty sight)
>
> Thanks a lot!
> Tanya
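On question 7 above (importing ~15 million records): a common approach, not specific advice from this thread, is to send the line protocol to the 1.x /write endpoint in batches of 5,000-10,000 points with a short pause between batches. A sketch in Python; the endpoint, database name, measurement, batch size, and sleep interval are all assumptions to adapt:

```python
import time
from itertools import islice

def batches(lines, size=5000):
    """Yield successive lists of at most `size` items from any iterable."""
    it = iter(lines)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

if __name__ == "__main__":
    import requests  # third-party; pip install requests

    # Hypothetical: 15M points already rendered as line protocol strings.
    points = (f"blah value={i} {1476336190000 + i}" for i in range(15_000_000))

    for batch in batches(points, size=5000):
        resp = requests.post(
            "http://localhost:8086/write",
            params={"db": "mydb", "precision": "ms"},  # match your data's precision
            data="\n".join(batch).encode(),
        )
        resp.raise_for_status()  # InfluxDB returns 204 on a successful write
        time.sleep(0.1)  # brief pause so the server can keep up with compactions
```

Keeping batches in the 5k-10k range and pacing them gives the TSM engine time to compact between writes instead of falling behind.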
>
> On 12 October 2016 at 06:11, Sean Beckett  wrote:
>
>>
>>
>> On Tue, Oct 11, 2016 at 12:11 AM, 
>> wrote:
>>
>>> Hi,
>>>
>>> It seems that the old issue might have surfaced again (#3349) in
>>> v1.0.
>>>
>>> I tried to insert a large number of records (3913595) via a script,
>>> inserting 10,000 rows at a time.
>>>
>>> After a while I received
>>>
>>> HTTP/1.1 500 Internal Server Error
>>> Content-Type: application/json
>>> Request-Id: ac8ebbbe-8f70-11e6-8ce7-
>>> X-Influxdb-Version: 1.0.0
>>> Date: Tue, 11 Oct 2016 05:12:02 GMT
>>> Content-Length: 20
>>>
>>> {"error":"timeout"}
>>> HTTP/1.1 100 Continue
>>>
>>> I killed the process, after which the whole box became pretty much
>>> unresponsive.
>>>
>>
>> Killed the InfluxDB process, or the batch writing script process?
>>
>>
>>>
>>> There is nothing in the logs (i.e. sudo ls /var/log/influxdb/ gives
>>> me nothing) although the setting for http logging is true:
>>>
>>
>> systemd OSes put the logs in a new place (yay!?). See
>> http://docs.influxdata.com/influxdb/v1.0/administration/logs/#systemd
>> for how to read the logs.
>>
>>
>>>
>>> [http]
>>>   enabled = true
>>>   bind-address = ":8086"
>>>   auth-enabled = true
>>>   log-enabled = true
>>>
>>> I tried to restart influx, but got the following error:
>>>
>>> Failed to connect to http://localhost:8086
>>> Please check your connection settings and ensure 'influxd' is
>>> running.
>>>
>>
>> The `influx` console is just a fancy wrapper on the API. That error
>> doesn't mean much except that the HTTP listener in 

[influxdb] Re: InfluxDB Raspberry Pi installation?

2016-10-13 Thread Frank Inselbuch
After you untar it, you will install it with dpkg.
It will probably get started automatically at that point.
But if you want to start it/restart/stop just

sudo service influxdb start (or restart or stop)

by default I believe the server will be available for HTTP requests on 
port 8086

i would encourage you to also install something like grafana to pull data 
from influxdb, but you can roll your own with http requests

to work with the database interactively, use the CLI (command line 
interface)

just type 

$ influx

here's a few commands to get you started


Influx> create database test1
Influx> use test1
Influx> insert cars,vin=3948579834 year=2015,color="Green",mileage=15
Influx> select * from cars

Good luck.

On Thursday, October 13, 2016 at 7:39:27 AM UTC-5, EBRAddict wrote:
>
> Hi,
>
> I'd like to try InfluxDB on a Raspberry Pi 3 for a mobile sensor project. 
> Currently it's logging ~50 data points every 200ms to a USB flash drive 
> text file but I want to ramp that up to 200 every 10ms, or however fast I 
> can push data from the microcontrollers to the Pi.
>
> I downloaded and uncompressed the ARM binaries using the instructions on 
> the InfluxDB download page:
>
> wget 
> https://dl.influxdata.com/influxdb/releases/influxdb-1.0.2_linux_armhf.tar.gz
> tar xvfz influxdb-1.0.2_linux_armhf.tar.gz
>
>
> What are the next steps? I'm not a Linux guy but I can follow directions 
> if someone could point me to them. I'd like to configure the service(s) to 
> run at startup automatically and be accessible for querying by any logged 
> in user (it's a closed system).
>
> Thanks.
>

-- 
Remember to include the version number!
--- 
You received this message because you are subscribed to the Google Groups 
"InfluxData" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to influxdb+unsubscr...@googlegroups.com.
To post to this group, send email to influxdb@googlegroups.com.
Visit this group at https://groups.google.com/group/influxdb.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/influxdb/6b289b57-fbc7-41e7-b33a-e4ac69d29b2f%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [influxdb] Re: question about last performance

2016-10-13 Thread Sean Beckett
See this comment on a LIMIT issue for context on why the LAST() point in a
series can be tough to define. With no tags included, the system has to look
for the LAST() point in every possible series for that measurement. The time
range restricts the search space, making that faster.

On Tue, Oct 11, 2016 at 8:14 PM, yueyue papa  wrote:

> If using
>
> SELECT last(glasses), machine, status, chambers, content, type, chamberID
> FROM eventlog WHERE time > now() -7d ,
>
> It will get the result quickly.  (7s.)
>
> It seems that last() iterates over the whole database and then gets the
> result.
>
> Why can't the last() function find the final record without iterating over
> all the records before returning?
>
> Lee
>
>
> On Wed, Oct 12, 2016 at 9:56 AM, yueyue papa  wrote:
>
>> Hi
>>
>> My test influxdb (v1.0.0) has 710745 records in the measurement. I am trying
>> to get the last record, but retrieving it is extremely slow.
>>
>> Is there any suggestion for how to quickly get the last record?
>>
>> Running status:
>>
>> select count(glasses) from "eventlog"  (takes about 12s )
>>
>> ###
>> qid   query   database  duration
>> 31   "SELECT last(glasses), machine, status, chambers, content, type,
>> chamberID FROM eventlog"   "icmcc"   "6m6s"
>> 43   "SHOW QUERIES"   "icmcc"   "1655266u"
>> ###
>>
>> This is my simple select, it already used 6 Mins, and has not finished.
>>
>> My test environment:
>> CPU: I7 3770
>> RAM:  16G.
>>
>> ubuntu server 16
>>
>>
>> Lee
>>
>



-- 
Sean Beckett
Director of Support and Professional Services
InfluxDB



Re: [influxdb] InfluxDB restarts every 24 hours and some data is missing

2016-10-13 Thread Sean Beckett
> Now, we have two issues, one is that the server restarts every 24h due to OOM,
look at this:

Does the RAM use spike every 24 hours, or does it slowly grow?

One of your tags is a MAC address. That has very high cardinality. How many
series are there in your system?
http://docs.influxdata.com/influxdb/v1.0/troubleshooting/frequently-asked-questions/#why-does-series-cardinality-matter
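As a rough rule, the series count for a measurement is bounded by the product of the distinct values of each tag key, so a per-device MAC-address tag multiplies everything else. A back-of-the-envelope sketch (the tag counts are assumed for illustration):

```python
from math import prod

# Assumed tag cardinalities for illustration: ~40k devices, each contributing
# a unique `mac` tag value, plus a low-cardinality `status` tag.
tag_cardinalities = {"mac": 40_000, "status": 2}

# Worst-case series count for one measurement: the product of the
# distinct values of each tag key.
series = prod(tag_cardinalities.values())
print(series)  # 80000 potential series for a single measurement
```

Every additional tag key multiplies this number again, and each series carries its own in-memory index entry, which is why high-cardinality tags drive RAM use up.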

You also have 71 fields per point. Are you running any CQs to downsample
them?

What are the retention policy settings? (SHOW RETENTION POLICIES ON
)

> The other issue is that some data are missing. For example we see the
whole measurement is missing, while I am pretty sure that it's written
because at the same time when we write to influx we write to a file and we
don't see any errors from influx. We write 1000 measurements in one batch.

Do the InfluxDB logs show successful writes? Is the client receiving a 204
response to the writes?

What does "see the whole measurement is missing" mean? Can you show actual
CLI queries? This could be syntax issues.


On Thu, Oct 13, 2016 at 8:49 AM, Pavel  wrote:

> Hi guys,
>
> I have really strange problem with our Influxdb server. First of all we
> are running the latest version 1.0.2. We use InfluxDB to store some
> performance statistics from around 40k devices. We do this regularly every
> 5 minutes. The server in question has 40GB RAM and 16 CPU's. We keep data
> for 7 days and after that we use CQ to downsample and store it for 3 months.
> Example of one measurement looks like this:
>
> cm,mac=a2faa63e1c00,status=1 host_if="C5/1/4/UB",sw_rev="
> EPC3008",model="EPC3008-v302r125531-101220c",fl_ins=5,
> fl_miss=256,fl_padj=15,fl_crc=0,fl_flap=37,fl_hit=19482,fl_ltime="Sep 22
> 12:44:08",status_us2="sta",cw_good_us2=157500,cw_uncorr_us2=
> 507,cw_corr_us2=19064,tx_pwr_us2=45.00,snr_us2=31.41,rx_
> pwr_us2=29.00,status_us3="sta",cw_good_us3=237573,cw_uncorr_
> us3=2,cw_corr_us3=6909,tx_pwr_us3=45.00,snr_us3=34.18,rx_
> pwr_us3=29.00,cm_ip="172.16.11.15",mtc_mode=1,wideband_
> capable=1,prim_ds="Mo5/1/1:9",init_reason="POWER_ON",tto="6h11m",
> docsIfSigQUncorrectables.49=31,docsIfSigQSignalNoise.48=
> 390,docsIfSigQCorrecteds.53=16281,docsIfSigQCorrecteds.50=16003,
> docsIfSigQUncorrectables.51=144,docsIfSigQUncorrectables.48=179,
> docsIfSigQUncorrectables.3=18,docsIfSigQSignalNoise.3=389,
> docsIfSigQSignalNoise.54=398,docsIfDownChannelPower.50=0,
> docsIfDownChannelPower.49=-6,docsIfSigQUnerroreds.51=765089373,
> docsIfSigQCorrecteds.3=17400,docsIfSigQCorrecteds.49=16433,
> docsIfSigQSignalNoise.52=399,docsIfDownChannelPower.48=-18,
> ifHCOutOctets.1=368789007,docsIfDownChannelPower.54=-9,
> docsIfSigQCorrecteds.54=16376,ifHCInOctets.1=48467216,
> docsIfSigQUnerroreds.52=765083145,docsIfSigQCorrecteds.48=17615,
> docsIfSigQSignalNoise.51=394,docsIfSigQUnerroreds.50=765097168,
> docsIfSigQCorrecteds.51=16009,docsIfSigQSignalNoise.53=399,
> docsIfDownChannelPower.53=-2,docsIfSigQUnerroreds.53=765074315,
> docsIfSigQSignalNoise.50=393,docsIfSigQUnerroreds.48=765110628,
> docsIfSigQUncorrectables.53=13,docsIfSigQUnerroreds.3=765195092,
> docsIfSigQUncorrectables.54=74,docsIfDownChannelPower.51=
> 0,docsIfSigQUnerroreds.54=765068049,docsIfDownChannelPower.3=-14,
> docsIfSigQSignalNoise.49=394,docsIfDownChannelPower.52=1,
> docsIfSigQUnerroreds.49=765105625,docsIfSigQCorrecteds.52=15876,
> docsIfSigQUncorrectables.50=38,docsIfSigQUncorrectables.52=16 1474563439
>
> Now, we have two issues, one is that the server restarts every 24h due to OOM,
> look at this:
>
> Sep 29 20:34:21 node1 kernel: influxd invoked oom-killer:
> gfp_mask=0x280da, order=0, oom_score_adj=0
> Sep 30 20:04:15 node1 kernel: influxd invoked oom-killer:
> gfp_mask=0x280da, order=0, oom_score_adj=0
> Oct  3 20:04:32 node1 kernel: influxd invoked oom-killer:
> gfp_mask=0x280da, order=0, oom_score_adj=0
> Oct  4 20:04:35 node1 kernel: influxd invoked oom-killer:
> gfp_mask=0x200da, order=0, oom_score_adj=0
> Oct  5 20:04:45 node1 kernel: influxd invoked oom-killer:
> gfp_mask=0x280da, order=0, oom_score_adj=0
> Oct  6 20:04:46 node1 kernel: influxd invoked oom-killer:
> gfp_mask=0x280da, order=0, oom_score_adj=0
>
> and so on. The other issue is that some data are missing. For example we
> see the whole measurement is missing, while I am pretty sure that it's
> written because at the same time when we write to influx we write to a file
> and we don't see any errors from influx. We write 1000 measurements in one
> batch.
>
> I would really appreciate some help in resolving this issue since
> everything else works perfectly.
>
> Thank you.
>

[influxdb] /Var directory gets full after a week of posting data to InfluxDB

2016-10-13 Thread tracyann . monteiro
I have InfluxDB 1.0.0 running on one of my VMs. I have created a DB with a 
retention policy of one hour and a Python script that posts data to InfluxDB 
every 5 minutes. After running InfluxDB for a week, the /var directory gets 
full. Is there any way of avoiding this?



[influxdb] Re: Kapacitor and external InfluxDB.

2016-10-13 Thread codeprojtwo
On Tuesday, October 11, 2016 at 20:29:26 UTC+3, codep...@gmail.com wrote:
> I'm having a hard time setting up a record. Currently kapacitor.conf contains:
> 
> [[influxdb]]
>   enabled = true
>   name = "cp"
>   default = true
>   urls = ["http://influx-cp.internal:8086"]
>   username = "root"
>   password = "root"
>   ssl-ca = ""
>   ssl-cert = ""
>   ssl-key = ""
>   insecure-skip-verify = false
>   timeout = "0"
>   disable-subscriptions = false
>   subscription-protocol = "http"
>   kapacitor-hostname = ""
>   http-port = 0
>   udp-bind = ""
>   udp-buffer = 1000
>   udp-read-buffer = 0
>   startup-timeout = "5m0s"
>   subscriptions-sync-interval = "1m0s"
>   [influxdb.subscriptions]
>   [influxdb.excluded-subscriptions]
> _kapacitor = ["autogen"] 
> 
> I'm launching it using this: 
> 
> kapacitord -config kapacitor.conf
> 
> And trying to record it using this:
> 
> kapacitor record stream -task cpu_alert -duration 20s
> 
> With basic script:
> 
> stream
> // Select just the cpu measurement from our example database.
> |from()
> .database('cp')
> |alert()
> // Whenever we get an alert write it to a file.
> .log('/tmp/alerts.log')
> 
> As expected, the record hangs forever. What I'm trying to find out is 
> whether Kapacitor is receiving any data at all. How can I tell, and how do I 
> fix it?
> 
> The log during record is this:
> [run] 2016/10/11 20:28:08 I! Kapacitor starting, version 1.0.2, branch 
> master, commit 1011dba109bf3d83366c87873ec285c7f9140d34
> [run] 2016/10/11 20:28:08 I! Go version go1.6.3
> [srv] 2016/10/11 20:28:08 I! Kapacitor hostname: cp
> [srv] 2016/10/11 20:28:08 I! ClusterID: 8106cac2-bb07-4f1b-8b36-5f55fead461b 
> ServerID: 2eace6cb-8fb7-485c-9055-41bba6bc06cc
> [task_master:main] 2016/10/11 20:28:08 I! opened
> [task_store] 2016/10/11 20:28:08 W! could not open old boltd for task_store. 
> Not performing migration. Remove the `task_store.dir` configuration to 
> disable migration.
> [stats] 2016/10/11 20:28:08 I! opened service
> [httpd] 2016/10/11 20:28:08 I! Starting HTTP service
> [httpd] 2016/10/11 20:28:08 I! Authentication enabled: false
> [httpd] 2016/10/11 20:28:08 I! Listening on HTTP: [::]:9092
> [run] 2016/10/11 20:28:08 I! Listening for signals
> [httpd] 127.0.0.1 - - [11/Oct/2016:20:28:26 +0300] "POST 
> /kapacitor/v1/recordings/stream HTTP/1.1" 201 213 "-" "KapacitorClient" 
> 1e169f2d-8fd8-11e6-8001- 14495
> [httpd] 127.0.0.1 - - [11/Oct/2016:20:28:26 +0300] "GET 
> /kapacitor/v1/recordings/24fdaef0-71a2-418e-85dc-5ce7d37f4399 HTTP/1.1" 202 
> 213 "-" "KapacitorClient" 1e656681-8fd8-11e6-8002- 1308
> [httpd] 127.0.0.1 - - [11/Oct/2016:20:28:27 +0300] "GET 
> /kapacitor/v1/recordings/24fdaef0-71a2-418e-85dc-5ce7d37f4399 HTTP/1.1" 202 
> 213 "-" "KapacitorClient" 1eb21dfd-8fd8-11e6-8003- 1270
> [httpd] 127.0.0.1 - - [11/Oct/2016:20:28:27 +0300] "GET 
> /kapacitor/v1/recordings/24fdaef0-71a2-418e-85dc-5ce7d37f4399 HTTP/1.1" 202 
> 213 "-" "KapacitorClient" 1efecdcf-8fd8-11e6-8004- 4212
> [httpd] 127.0.0.1 - - [11/Oct/2016:20:28:28 +0300] "GET 
> /kapacitor/v1/recordings/24fdaef0-71a2-418e-85dc-5ce7d37f4399 HTTP/1.1" 202 
> 213 "-" "KapacitorClient" 1f4bfd80-8fd8-11e6-8005- 1458
> [httpd] 127.0.0.1 - - [11/Oct/2016:20:28:28 +0300] "GET 
> /kapacitor/v1/recordings/24fdaef0-71a2-418e-85dc-5ce7d37f4399 HTTP/1.1" 202 
> 213 "-" "KapacitorClient" 1f98b84d-8fd8-11e6-8006- 3302
> [httpd] 127.0.0.1 - - [11/Oct/2016:20:28:29 +0300] "GET 
> /kapacitor/v1/recordings/24fdaef0-71a2-418e-85dc-5ce7d37f4399 HTTP/1.1" 202 
> 213 "-" "KapacitorClient" 1fe5c68e-8fd8-11e6-8007- 992
> [httpd] 127.0.0.1 - - [11/Oct/2016:20:28:29 +0300] "GET 
> /kapacitor/v1/recordings/24fdaef0-71a2-418e-85dc-5ce7d37f4399 HTTP/1.1" 202 
> 213 "-" "KapacitorClient" 2032666d-8fd8-11e6-8008- 3905
> [httpd] 127.0.0.1 - - [11/Oct/2016:20:28:30 +0300] "GET 
> /kapacitor/v1/recordings/24fdaef0-71a2-418e-85dc-5ce7d37f4399 HTTP/1.1" 202 
> 213 "-" "KapacitorClient" 207f7cda-8fd8-11e6-8009- 1025
> [httpd] 127.0.0.1 - - [11/Oct/2016:20:28:30 +0300] "GET 
> /kapacitor/v1/recordings/24fdaef0-71a2-418e-85dc-5ce7d37f4399 HTTP/1.1" 202 
> 213 "-" "KapacitorClient" 20cc2039-8fd8-11e6-800a- 3076
> [httpd] 127.0.0.1 - - [11/Oct/2016:20:28:31 +0300] "GET 
> /kapacitor/v1/recordings/24fdaef0-71a2-418e-85dc-5ce7d37f4399 HTTP/1.1" 202 
> 213 "-" "KapacitorClient" 21192566-8fd8-11e6-800b- 971
> [httpd] 127.0.0.1 - - [11/Oct/2016:20:28:31 +0300] "GET 
> /kapacitor/v1/recordings/24fdaef0-71a2-418e-85dc-5ce7d37f4399 HTTP/1.1" 202 
> 213 "-" "KapacitorClient" 2165bc53-8fd8-11e6-800c- 592
> [httpd] 127.0.0.1 - - [11/Oct/2016:20:28:32 +0300] "GET 
> /kapacitor/v1/recordings/24fdaef0-71a2-418e-85dc-5ce7d37f4399 HTTP/1.1" 202 
> 213 "-" "KapacitorClient" 21b24533-8fd8-11e6-800d- 4535
> [httpd] 127.0.0.1 - - [11/Oct/2016:20:28:32 +0300] "GET 
> 

[influxdb] InfluxDB Raspberry Pi installation?

2016-10-13 Thread EBRAddict
Hi,

I'd like to try InfluxDB on a Raspberry Pi 3 for a mobile sensor project. 
Currently it's logging ~50 data points every 200ms to a USB flash drive 
text file but I want to ramp that up to 200 every 10ms, or however fast I 
can push data from the microcontrollers to the Pi.

I downloaded and uncompressed the ARM binaries using the instructions on 
the InfluxDB download page:

wget 
https://dl.influxdata.com/influxdb/releases/influxdb-1.0.2_linux_armhf.tar.gz
tar xvfz influxdb-1.0.2_linux_armhf.tar.gz


What are the next steps? I'm not a Linux guy but I can follow directions if 
someone could point me to them. I'd like to configure the service(s) to run 
at startup automatically and be accessible for querying by any logged in 
user (it's a closed system).

Thanks.



[influxdb] Re: Reg usage of telegraf for Docker Stats within Container

2016-10-13 Thread kostas
On Wednesday, October 12, 2016 at 6:33:18 PM UTC+3, pck...@gmail.com wrote:
> Hi,
> 
> I am using Telegraf in my current production environment to read Jolokia 
> for JVM stats and save them in InfluxDB. In the same way I would like to use 
> the same telegraf plugin to post the Docker Stats to InfluxDb. We are using 
> docker to deploy our APIs. Telegraf is installed within the Docker Container. 
> I prefer to use the same telegraf plugin within the docker to collect metrics 
> rather than installing one on the host. Please advise.
> 
> Telegraf version is 0.13.
> 
> Regards
> Karthik

Hi,

We strongly suggest using the latest Telegraf version, 1.0.1.

I am not sure I understand the question correctly. When installed inside a 
container, Telegraf will normally collect data for the container itself.

It is possible to collect host metrics from inside the container, though, by 
mounting the host's /etc, /sys, and /proc as volumes. See the following example:

docker run -d --hostname=name -e "HOST_PROC=/rootfs/proc" -e 
"HOST_SYS=/rootfs/sys" -e "HOST_ETC=/rootfs/etc" -v 
/var/run/docker.sock:/var/run/docker.sock:ro -v /sys:/rootfs/sys:ro -v 
/proc:/rootfs/proc:ro -v /etc:/rootfs/etc:ro telegraf

