Since you have no templates enabled, it does seem that the measurement
names would be as you describe. I'm not very familiar with the StatsD input
plugin for Telegraf.
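
As a quick sanity check, a regex search over measurement names (assuming your
InfluxDB version supports the WITH MEASUREMENT clause) would catch the series
no matter which prefix or separator the plugin actually used, for example:

```
SHOW MEASUREMENTS WITH MEASUREMENT =~ /swift/
```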

Can you disable the other plugins in Telegraf for a while to verify that
the writes are reaching InfluxDB? The log entries show Telegraf is writing
to InfluxDB, but we don't know that those writes include the StatsD
payloads.

Do the Telegraf logs show any errors?

On Fri, Aug 19, 2016 at 1:19 AM, Damião Rodrigues <[email protected]>
wrote:

> Hi Sean,
>
> Wow, that sounds like a good guess!
>
> Unfortunately, when I list the tables in InfluxDB (via 'show
> measurements'), I can't even find an entry for the StatsD metrics. I'll
> explain: I assume that for a StatsD metric like the one below, I would find
> a 'measurement' (i.e. table) named
> 'statsd_swift-object-replicator_partition_update_timing', correct?
>
> ```
> swift-object-replicator.partition.update.timing:6.03103637695|ms
> ```
>
> Unfortunately, I don't find any measurements with a 'statsd_*' prefix. I
> can only find measurements for metrics I already know about and that are
> being correctly exported by Telegraf.
>
> Best,
> Damião
>
> On Thu, Aug 18, 2016 at 10:41 PM, Sean Beckett <[email protected]> wrote:
>
>> It seems that StatsD is sending timestamps in milliseconds, but the
>> Telegraf writes are at nanosecond precision ("precision=ns" in the query
>> string).
>>
>> I suspect your data is in InfluxDB; it's just tightly clustered shortly
>> after midnight on Jan 1, 1970 (UTC), which is what a millisecond timestamp
>> looks like when it is interpreted as nanoseconds since the epoch. For
>> example, a millisecond timestamp of about 1471338071000 (16 Aug 2016),
>> read as nanoseconds, lands only about 1,471 seconds (roughly 25 minutes)
>> into 1970.
>>
>> Can you query close to epoch 0 and see if you have results?
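>>
>> For instance, something like this (just a sketch; widen the time window or
>> name a specific measurement as needed) would surface any points parked
>> near the epoch:
>>
>> ```
>> SELECT * FROM /.*/ WHERE time < '1970-01-02T00:00:00Z' LIMIT 20
>> ```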
>>
>> On Tue, Aug 16, 2016 at 3:45 AM, Damião Rodrigues <[email protected]>
>> wrote:
>>
>>> Hi Sean,
>>>
>>> Thanks for the answer. I'll answer your queries by parts.
>>>
>>> 1) telegraf can write to influxdb:
>>>
>>>    - Metrics other than the StatsD metrics are being written (e.g. I'm
>>>    using the Docker and exec input plugins)
>>>    - I get the following InfluxDB log entries at regular intervals (the
>>>    IP address of the Telegraf server is indeed 10.42.75.237):
>>>
>>> ```
>>>
>>> [httpd] 2016/08/16 09:01:11 10.42.75.237 - influxdb [16/Aug/2016:09:01:11 +0000] POST /write?consistency=&db=metrics&precision=ns&rp=default HTTP/1.1 204 0 - InfluxDBClient fa9967c1-638f-11e6-98c8-000000000000 100.970492ms
>>> [httpd] 2016/08/16 09:01:16 10.42.75.237 - influxdb [16/Aug/2016:09:01:16 +0000] POST /write?consistency=&db=metrics&precision=ns&rp=default HTTP/1.1 204 0 - InfluxDBClient fd928252-638f-11e6-98ce-000000000000 104.447255ms
>>> [httpd] 2016/08/16 09:01:17 10.42.75.237 - influxdb [16/Aug/2016:09:01:17 +0000] POST /write?consistency=&db=metrics&precision=ns&rp=default HTTP/1.1 204 0 - InfluxDBClient fdcff4c3-638f-11e6-98cf-000000000000 60.178059ms
>>>
>>> ```
>>>
>>> 2) Output configuration
>>>
>>> ```
>>>
>>> [[outputs.influxdb]]
>>>   urls = ["http://influxdb:8086"]
>>>   database = "metrics" # required
>>>   retention_policy = "default"
>>>   username = "<my-user>"
>>>   password = "<my-pass>"
>>>
>>> ```
>>>
>>> 3) Running SHOW RETENTION POLICIES ON 'metrics'
>>>
>>> ```
>>>
>>> > SHOW RETENTION POLICIES ON metrics
>>> name    duration        shardGroupDuration      replicaN        default
>>> default 0               168h0m0s                1               true
>>>
>>> ```
>>>
>>> Best,
>>>
>>> Damiao
>>>
>>> On Thu, Aug 11, 2016 at 10:21 PM, Sean Beckett <[email protected]>
>>> wrote:
>>>
>>>> What do the InfluxDB logs show? Are there writes coming from the
>>>> Telegraf server? The Telegraf logs indicate it is writing to InfluxDB.
>>>>
>>>> Can you include the output configuration
>>>> <https://docs.influxdata.com/telegraf/v0.13/administration/configuration/#output-configuration>?
>>>> Also the results of running "SHOW RETENTION POLICIES ON <db>" against
>>>> InfluxDB, where <db> is replaced by the destination database.
>>>>
>>>> On Mon, Aug 8, 2016 at 7:27 AM, <[email protected]> wrote:
>>>>
>>>>> Hi all,
>>>>>
>>>>> I followed this tutorial
>>>>> (https://influxdata.com/blog/getting-started-with-sending-statsd-metrics-to-telegraf-influxdb/)
>>>>> to get a statsd > telegraf > influxdb setup working. However, I've
>>>>> noticed that while the statsd metrics reach Telegraf, they are not
>>>>> relayed to InfluxDB.
>>>>>
>>>>> Do you have an idea of what might be the problem? Find more
>>>>> information below.
>>>>>
>>>>> 1) telegraf version 0.13.1 / influxdb 0.13.0
>>>>>
>>>>> 2) telegraf seems to get some statsd metrics, even though the
>>>>> processing time seems suspiciously short:
>>>>>
>>>>> ```
>>>>> [docker] gathered metrics, (5s interval) in 2.270456355s
>>>>> 2016/08/08 13:16:18 Output [influxdb] buffer fullness: 900 / 20000 metrics. Total gathered metrics: 261096. Total dropped metrics: 0.
>>>>> 2016/08/08 13:16:18 Output [influxdb] wrote batch of 900 metrics in 104.723614ms
>>>>> 2016/08/08 13:16:20 Input [statsd] gathered metrics, (5s interval) in 91.964µs
>>>>> 2016/08/08 13:16:20 Input [memcached] gathered metrics, (5s interval) in 3.099963ms
>>>>> ```
>>>>>
>>>>> 3) here's the relevant configuration for telegraf.conf:
>>>>>
>>>>> ```
>>>>>
>>>>> [[inputs.statsd]]
>>>>>   ## Address and port to host UDP listener on
>>>>>   service_address = ":8125"
>>>>>   ## Delete gauges every interval (default=false)
>>>>>   delete_gauges = false
>>>>>   ## Delete counters every interval (default=false)
>>>>>   delete_counters = false
>>>>>   ## Delete sets every interval (default=false)
>>>>>   delete_sets = false
>>>>>   ## Delete timings & histograms every interval (default=true)
>>>>>   delete_timings = true
>>>>>   ## Percentiles to calculate for timing & histogram stats
>>>>>   percentiles = [90]
>>>>>
>>>>>   ## separator to use between elements of a statsd metric
>>>>>   metric_separator = "_"
>>>>>
>>>>>   ## Parses tags in the datadog statsd format
>>>>>   ## http://docs.datadoghq.com/guides/dogstatsd/
>>>>>   parse_data_dog_tags = false
>>>>>
>>>>>   ## Statsd data translation templates, more info can be read here:
>>>>>   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md#graphite
>>>>>   # templates = [
>>>>>   #     "cpu.* measurement*"
>>>>>   # ]
>>>>>   ## Number of UDP messages allowed to queue up, once filled,
>>>>>   ## the statsd server will start dropping packets
>>>>>   allowed_pending_messages = 10000
>>>>>
>>>>>   ## Number of timing/histogram values to track per-measurement in the
>>>>>   ## calculation of percentiles. Raising this limit increases the accuracy
>>>>>   ## of percentiles but also increases the memory usage and cpu time.
>>>>>   percentile_limit = 1000
>>>>>
>>>>> ```
>>>>>
>>>>> 4) here's an example of the statsd metrics being received at
>>>>> telegraf's host (obtained via `tcpdump`):
>>>>>
>>>>> ```
>>>>> 13:20:49.431549 IP (tos 0x0, ttl 64, id 2976, offset 0, flags [DF], proto UDP (17), length 92)
>>>>>     big43.local.60178 > macmini6.local.8125: [udp sum ok] UDP, length 64
>>>>>         0x0000:  4500 005c 0ba0 4000 4011 9d51 c0a8 082b  E..\..@.@..Q...+
>>>>>         0x0010:  c0a8 0824 eb12 1fbd 0048 d989 7377 6966  ...$.....H..swif
>>>>>         0x0020:  742d 6f62 6a65 6374 2d72 6570 6c69 6361  t-object-replica
>>>>>         0x0030:  746f 722e 7061 7274 6974 696f 6e2e 7570  tor.partition.up
>>>>>         0x0040:  6461 7465 2e74 696d 696e 673a 362e 3033  date.timing:6.03
>>>>>         0x0050:  3130 3336 3337 3639 357c 6d73            103637695|ms
>>>>> ```
>>>>>
>>>>> Note the
>>>>> `swift-object-replicator.partition.update.timing:6.03103637695|ms`.
>>>>> This is therefore of the statsd 'timing' type, which is supposedly
>>>>> supported by Telegraf.
>>>>>
>>>>> 5) I've noticed that the measurements don't show up in InfluxDB. In
>>>>> fact, I don't even spot any outgoing messages to InfluxDB exporting the
>>>>> statsd metrics!
>>>>>
>>>>> Thanks!
>>>>>
>>>>> Damião
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Sean Beckett
>>>> Director of Support and Professional Services
>>>> InfluxDB
>>>>
>>>>
>>>
>>>
>>
>>
>> --
>> Sean Beckett
>> Director of Support and Professional Services
>> InfluxDB
>>
>
>


-- 
Sean Beckett
Director of Support and Professional Services
InfluxDB
