While it is not impossible, we do not recommend writing data back into the
same measurement from which it is queried, unless there are explicit
controls to prevent recursion. For example, you can add a tag to the
processed data and explicitly exclude that tag from the query that feeds
Kapacitor.
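One way to implement that tag guard is a minimal sketch like the following; the measurement, database, and tag names are illustrative, not from the original thread:

```tickscript
// Hypothetical sketch: read only points that lack the guard tag, then
// tag the output so it is excluded from the next query.
stream
    |from()
        .measurement('cpu')
        .where(lambda: "processed" != 'true')
    // ... transformation nodes ...
    |influxDBOut()
        .database('mydb')
        .measurement('cpu')
        .tag('processed', 'true')
```

Because every point written by `influxDBOut()` carries `processed=true`, the `where()` filter never re-ingests it, which breaks the feedback loop.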
Mathias, I agree that irregular timestamps do lead to more space taken on
disk. However, that number still falls under 3 bytes per numeric value,
even with irregular nanosecond timestamps. What Jason and I are saying are
both true; they are not mutually exclusive.
> The 2 or 3 bytes footprint is
Thank you Sean,
I will try to optimize things in our setup and will monitor
https://github.com/influxdata/influxdb/issues/7142 for any updates.
Thank you.
--
Remember to include the version number!
---
You received this message because you are subscribed to the Google Groups
"InfluxData"
Efficiency of compression depends on several factors, including the
interval between timestamps, the resolution of those timestamps, and the
volatility and type of the values.
The 2 or 3 bytes footprint is usually not achievable if you store your
timestamps with milli-, micro-, or nanosecond precision, as the delta
Thank you for your answer.
So let's say that I have 2 measurements collected every 15 seconds for 24h:
- cpu with 20 values
- mem with 10 values
Will the total size of stored data be 3 bytes * 8 points per minute *
1440 minutes?
On Tuesday, October 18, 2016 at 6:57:53 PM UTC+2, Sean Beckett wrote:
TICKscript:
var running = batch
|query('''select count(value) from
"collectd_db"."default".marathon_tasks_value where host =~
/daldevmesoszk01.dev-1/ and instance =~ /amber/''')
.period(20s)
.every(10s)
.groupBy('host')
|httpOut('running')
var expected = batch
A simple `|sum('emitted')` should work, but it's hard to tell without the
context of your TICKscript. Can you share the TICKscript that is producing
that data?
On Tuesday, October 18, 2016 at 6:07:12 PM UTC-6, Vinit wrote:
>
> I have data like this, I want to sum of the column "emitted" across
Hello everyone,
I have a question concerning specifying the db name in the Kapacitor exec node.
I have a tick script which calls a python script using the .exec node.
The tick script looks like this:
stream
    |from()
        .measurement('a')
    |where(lambda: "val" == '1')
    |alert()
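If the goal is to hand the database name to the external script, one option is the alert node's exec handler, which accepts extra arguments after the command. A sketch with hypothetical paths and names, since the original script is not shown in full:

```tickscript
stream
    |from()
        .measurement('a')
    |alert()
        .crit(lambda: "val" == '1')
        // interpreter, script path, and db name here are all illustrative;
        // the db name is simply passed as an argument the script can read
        .exec('/usr/bin/python', '/opt/scripts/handler.py', 'mydb')
```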
However, they are converted to nanoseconds when being written to the
InfluxDB service. This means you're always paying for nanosecond precision
even when you don't need it.
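As a rough illustration (this is not the actual TSM timestamp encoder), converting millisecond timestamps to nanoseconds multiplies every stored delta by one million, even though the extra precision carries no information:

```python
# Regular 15 s interval, recorded with millisecond precision.
ts_ms = [1476742380000 + i * 15_000 for i in range(4)]
# InfluxDB stores timestamps in nanoseconds, so each value is padded by 10^6.
ts_ns = [t * 1_000_000 for t in ts_ms]

deltas_ns = [b - a for a, b in zip(ts_ns, ts_ns[1:])]
# The deltas are still constant, so they compress extremely well, but each
# raw nanosecond delta needs 34 bits where the millisecond delta needed 14.
print(set(deltas_ns))              # {15000000000}
print(deltas_ns[0].bit_length())   # 34
print((15_000).bit_length())       # 14
```

Regularly spaced timestamps keep the deltas constant regardless of precision, which is why irregular spacing hurts compression far more than the unit does.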
private StringBuilder formatedTime() {
    final StringBuilder sb = new StringBuilder();
    if (null == this.time) {
Awesome. Thanks!
On Monday, October 17, 2016 at 4:32:59 PM UTC-5, camero...@gmail.com wrote:
>
> > Q1: Are these just gathered if they're present and otherwise ignored?
>
> yes
>
> > Q2: What if they're invalid or null? For example if you're forced to
> monitor a MySQL install that has the
When you create your Point to be written, you can specify the precision.
Builder dataEvBldr = Point.measurement(evtGroupMeasurement.getMeasurementName())
    .time(time, TimeUnit.MILLISECONDS)
    .tag("groupname", group)
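The server-side conversion mentioned earlier in the thread (millisecond input stored as nanoseconds) is the same arithmetic `java.util.concurrent.TimeUnit` performs, which you can check in isolation:

```java
import java.util.concurrent.TimeUnit;

public class PrecisionDemo {
    public static void main(String[] args) {
        long ms = 1476742380000L;                    // a millisecond timestamp
        long ns = TimeUnit.MILLISECONDS.toNanos(ms); // what ends up on disk
        System.out.println(ns);                      // 1476742380000000000
    }
}
```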
Is there a way to get results back with a time offset?
I understand that timezones are not really supported yet... so I'm
wondering if there is a workaround.
Perhaps something like:
SELECT field, time-7h as pdt FROM measurement LIMIT 10
Any suggestions?
Thanks
ryan
A time series database is not a relational database, and comes with
intentional limitations to increase throughput. I don't know if raw write
performance is the best way to differentiate the tools. InfluxDB can handle
writes of ~500k values per second on a reasonable server, and
InfluxEnterprise
What do the following return?
SHOW RETENTION POLICIES ON db0
SHOW RETENTION POLICIES ON db1
On Wed, Oct 19, 2016 at 2:49 AM, manish jain wrote:
> Hello Sean,
> This is the import file i am using -
> ---Cut--
>
Only floats, using coarse precision, 30 values recorded every 15 seconds:
30 values per write * 86,400 seconds per day * 1/15 writes per second * 3
bytes per value = ~518,400 bytes, or about half a megabyte per day, once
fully compacted.
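Spelling out that arithmetic (the 3 bytes/value figure is the fully-compacted rule of thumb used in this thread):

```python
values_per_write = 30            # 20 cpu fields + 10 mem fields
writes_per_day = 86_400 // 15    # one write every 15 seconds
bytes_per_value = 3              # rule-of-thumb, fully compacted

total_bytes = values_per_write * writes_per_day * bytes_per_value
print(total_bytes)               # 518400 -> about half a megabyte per day
```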
On Wed, Oct 19, 2016 at 5:50 AM, Guillaume Berthomé <
There is no way to get InfluxDB to return a timestamp in anything but UTC.
You can use the GROUP BY time(interval,offset) syntax to move the UTC
boundaries to match other timezones.
https://docs.influxdata.com/influxdb/v1.0/query_language/data_exploration/#configured-group-by-time-boundaries
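For example, shifting daily boundaries back 7 hours to approximate PDT looks like this (measurement and field names are illustrative):

```sql
-- Daily mean, with the UTC day boundary moved back 7 hours.
SELECT mean("field") FROM "measurement"
WHERE time > now() - 3d
GROUP BY time(1d, -7h)
```

Note the timestamps in the results are still printed in UTC; only the bucket boundaries move.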
It seems Jason just posted the same findings as me; maybe check with him
directly. I stand by my point: if your timestamps are not regularly
spaced, compression efficiency will decrease.
On Wednesday, October 19, 2016 at 8:33:56 PM UTC+2, Sean Beckett wrote:
>
> Mathias, I question