On Tue, Aug 02 2016, gordon chung wrote:
> so from very rough testing, we can choose to lower it to 3600 points, which
> offers better split opportunities with negligible improvement/degradation, or
> even more to 900 points with potentially small write degradation (massive
> batching).
3600 points
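For a sense of what the candidate sizes mean mechanically, here is a minimal sketch of splitting a series into fixed-size chunks. `split_points` is a hypothetical helper for illustration only, not Gnocchi's actual storage code:

```python
# Hypothetical illustration of fixed-size chunking; not Gnocchi code.
def split_points(points, chunk_size=3600):
    """Split a sequence of datapoints into chunks of at most chunk_size."""
    return [points[i:i + chunk_size]
            for i in range(0, len(points), chunk_size)]

series = list(range(14400))          # stand-in for 14400 datapoints
chunks = split_points(series, 3600)  # the proposed smaller split
print(len(chunks), len(chunks[0]))   # 4 3600
```

With the current 14400-point maximum the same series would be a single object; at 900 points it would become 16 objects.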
On 29/07/16 03:29 PM, gordon chung wrote:
i'm using Ceph. but i should mention i also only have 1 thread enabled
because python+threading is... yeah.
i'll give it a try again with threads enabled.
I tried this again with 16 threads. as expected, python (2.7.x) threads do jack
all.
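For what it's worth, this matches CPython's GIL behaviour: CPU-bound work never runs threads in parallel, so a process pool is the usual workaround. A rough sketch, where the zlib compression is a made-up stand-in for per-chunk aggregation work (not Gnocchi's actual code path):

```python
import zlib
from concurrent.futures import ProcessPoolExecutor

def compress_chunk(chunk):
    # Stand-in for CPU-bound per-chunk work (serializing, compressing
    # aggregates); threads would serialize on the GIL doing this.
    return zlib.compress(chunk)

if __name__ == "__main__":
    chunks = [bytes(4096)] * 8
    # Processes sidestep the GIL, so the chunks compress in parallel.
    with ProcessPoolExecutor(max_workers=4) as pool:
        compressed = list(pool.map(compress_chunk, chunks))
    print(len(compressed))  # 8
```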
i also tr
From: gordon chung [mailto:g...@live.ca]
Sent: Thursday, July 28, 2016 3:05 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [gnocchi] typical length of timeseries data
hi folks,
this is probably something to discuss on ops list as well eventually but
what do you think about shrinking the max size of timeseries chunks from
14400 to something smaller?
On Fri, Jul 29 2016, gordon chung wrote:
> so at first glance, it doesn't really seem to affect performance much
> whether it's one 'larger' file or many smaller files.
I guess it's because your storage system latency (file?) does not make a
difference. I imagine that over Swift or Ceph, it might
On 29/07/2016 5:00 AM, Julien Danjou wrote:
> Best way is probably to do some bench… but I think it really depends on
> the use cases here. The interest of having many small splits is that you
> can parallelize the read.
>
> Considering the compression ratio we have, I think we should split in
>
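The parallel-read upside mentioned above can be sketched as follows. `fetch_split` is a hypothetical stand-in for a Swift/Ceph object GET; since object-store reads are I/O-bound, they do overlap fine in threads even under the GIL:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_split(key):
    # Hypothetical stand-in for an object-store GET of one split.
    return b"payload-" + key.encode()

keys = ["metric/mean/%d" % i for i in range(8)]  # 8 small splits
# Many small splits let each fetch proceed concurrently.
with ThreadPoolExecutor(max_workers=8) as pool:
    payloads = list(pool.map(fetch_split, keys))
print(len(payloads))  # 8
```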
On Thu, Jul 28 2016, gordon chung wrote:
> this is probably something to discuss on ops list as well eventually but
> what do you think about shrinking the max size of timeseries chunks from
> 14400 to something smaller? i'm curious to understand what the length of
> the typical timeseries is.
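To put the numbers in perspective, here is the wall-clock coverage of each candidate chunk size, assuming a 60-second granularity (the granularity here is my assumption, not something fixed by the discussion):

```python
# Wall-clock coverage per chunk at an assumed 60-second granularity.
granularity = 60  # seconds between points (assumption)
for points in (14400, 3600, 900):
    hours = points * granularity / 3600.0
    print("%5d points -> %6.1f hours" % (points, hours))
# 14400 points ->  240.0 hours (10 days)
#  3600 points ->   60.0 hours (2.5 days)
#   900 points ->   15.0 hours
```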
hi folks,
this is probably something to discuss on ops list as well eventually but
what do you think about shrinking the max size of timeseries chunks from
14400 to something smaller? i'm curious to understand what the length of
the typical timeseries is. my main reason for bringing this up is