I would not pay too much attention to the CERN document: it does not cover
a use case with much data (600M datapoints), and its findings on OpenTSDB
suggest that their test setup was probably not very good.
On Tuesday, January 10, 2017 at 1:19:23 AM UTC+1, bade...@gmail.com wrote:
This is probably unnecessary when using InfluxDB, as the storage engine
(TSM) will not use many bits to represent identical values.
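For what it's worth, TSM's float encoding is Gorilla-style XOR compression, so a run of identical values XORs to zero and costs roughly a bit per repeat. A minimal Python sketch of the idea (not TSM's actual code):

```python
import struct

def xor_deltas(values):
    """XOR each float64 with its predecessor, as Gorilla-style
    compressors (including TSM's float encoder) do. A zero XOR
    means the value repeats and can be stored in ~1 bit."""
    bits = [struct.unpack('>Q', struct.pack('>d', v))[0] for v in values]
    return [b ^ prev for prev, b in zip(bits, bits[1:])]

# A run of identical readings XORs to all zeros:
print(xor_deltas([21.5, 21.5, 21.5, 21.5]))  # [0, 0, 0]
```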
On Saturday, November 26, 2016 at 3:04:34 PM UTC+1, Simon Christmann wrote:
>
> I'm usually a MySQL guy, so excuse and please correct me if I'm using the
>
ss compact. Over
> time the steady state of the system will approach ~2.5 bytes per numeric
> field.
>
> On Wed, Oct 19, 2016 at 5:45 AM, Mathias Herberts <mathias@gmail.com
> > wrote:
>
Efficiency of compression depends on different factors, including interval
between timestamps, resolution of said timestamps, volatility and type of
values.
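To illustrate the resolution point with a rough sketch: plain first-order deltas only, whereas real encoders such as TSM's use delta-of-delta and run-length tricks, and in practice jitter makes high-resolution deltas even less regular than shown here.

```python
def delta_bits(timestamps):
    """Bits needed to encode each first-order delta between
    consecutive timestamps -- a crude proxy for compressed size."""
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return [max(1, d.bit_length()) for d in deltas]

# A 10-second interval, stored at second vs nanosecond precision:
secs = [0, 10, 20, 30]
nanos = [t * 1_000_000_000 for t in secs]
print(delta_bits(secs))   # [4, 4, 4]
print(delta_bits(nanos))  # [34, 34, 34] -- ~30 extra bits per delta
```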
The 2 or 3 byte footprint is usually not achievable if you store your
timestamps with milli-, micro-, or nanosecond precision, as the deltas
between timestamps then require more bits to encode.
Your CQ completed in 3m27s; does it manipulate a very large amount of data?
On Monday, October 17, 2016 at 10:43:22 PM UTC+2, pavel...@gmail.com wrote:
>
> Heh, believe it or not, once again I got an OOM error! And it becomes
> really 'funny' that it happens at the same time. Look at this:
>
>
If I understand your query correctly you are fetching a single datapoint per
series (time >= '2016-09-06 21:05:40' AND time <= '2016-10-06 21:05:40'),
so your MEAN computation either operates on a single series, which kinda
defeats the point, or on multiple series with identical value for tag
m to drop data
> based on write time. Presumably you could give each user their own
> database, or databases, and when the time limit has expired, simply DROP
> DATABASE.
>
> On Tue, Sep 6, 2016 at 3:29 PM, Mathias Herberts <mathias@gmail.com
> > wrote:
>
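The per-user-database suggestion quoted above could be sketched like this (the names and the 90-day limit are made up for illustration; the actual drop would be a DROP DATABASE statement issued for each expired name):

```python
from datetime import datetime, timedelta

def databases_to_drop(created_at, now, max_age):
    """Given per-user database creation times, return which databases
    have exceeded the allowed age and should be dropped wholesale."""
    return sorted(name for name, ts in created_at.items()
                  if now - ts > max_age)

created = {
    "user_alice": datetime(2016, 1, 1),  # hypothetical names
    "user_bob":   datetime(2016, 9, 1),
}
expired = databases_to_drop(created, datetime(2016, 9, 6), timedelta(days=90))
print(expired)  # ['user_alice']
```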
I was wondering if RPs only take into consideration the timestamps of the
datapoints, or if they can be configured to consider the time at which the
data was inserted into the db.
My question is about understanding how RPs should be enforced for users who
want to insert large historical datasets.
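Per the reply above, RPs are enforced against the point's own timestamp, not the write time (enforcement actually drops whole shard groups, so a per-point check is a simplification). A hedged sketch of that behaviour:

```python
from datetime import datetime, timedelta

def retained(point_ts, now, rp_duration):
    """A point survives the retention policy only if its own
    timestamp is within the retention window -- its write time
    is irrelevant."""
    return point_ts > now - rp_duration

now = datetime(2016, 9, 6)
rp = timedelta(days=30)
# A historical point written today but timestamped 6 months ago
# is already past the retention window:
print(retained(datetime(2016, 3, 1), now, rp))   # False
print(retained(datetime(2016, 8, 20), now, rp))  # True
```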