Thanks, Sean.
It is good to know what the limitations are. And good that I made a mistake
at the start and we kind of have a workaround...
On 13 October 2016 at 16:24, Sean Beckett wrote:
Tanya, when you write the data in ms but don't specify the precision, the
database interprets those millisecond timestamps as nanoseconds, and all
the data is written to a single shard covering Jan 1, 1970.
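To make the misinterpretation concrete, here is a quick sanity check in Python (the specific millisecond value is illustrative, not from the thread):

```python
from datetime import datetime, timezone

ms = 1476336190000  # a millisecond timestamp from October 2016 (illustrative value)

# Interpreted correctly, as milliseconds since the epoch:
as_ms = datetime.fromtimestamp(ms / 1_000, tz=timezone.utc)

# Misread as nanoseconds, the same number is only ~1476 seconds after the epoch:
as_ns = datetime.fromtimestamp(ms / 1_000_000_000, tz=timezone.utc)

print(as_ms.date())  # 2016-10-13
print(as_ns.date())  # 1970-01-01
```

Every millisecond timestamp collapses into the first hour of 1970 when read as nanoseconds, which is why all the points land in one shard.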
> insert msns value=42 147633619
> select * from msns
name: msns
--
time
That's the entire source of the issue. The system is creating 1 week shards
from 1838 to now. That's a bit over 9000 shard groups, each of which only
has a few hundred points. The shard files are incredibly sparse, and the
overhead for each one is fixed.
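A back-of-the-envelope check of that shard-group count (my arithmetic, not from the thread):

```python
# Daily data from 1838 to 2016 under 1-week shard groups:
years = 2016 - 1838          # 178 years of history
weeks = years * 365.25 / 7   # one shard group per week
print(round(weeks))          # ~9288 shard groups
```

That matches Sean's "a bit over 9000" figure, with only a few hundred daily points spread across each group.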
Use shard durations of 10 years or more.
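For example, in InfluxQL (a sketch: the database name "mydb" and the default "autogen" retention policy are assumptions; 520w is roughly ten years):

```sql
ALTER RETENTION POLICY "autogen" ON "mydb" DURATION INF SHARD DURATION 520w
```

With ~10-year shard groups, the 1838-2016 range needs fewer than 20 shards instead of ~9000.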
Hi Sean,
I can reproduce all the CPU issues, slowness, etc. if I try to import the
data that I have in milliseconds, specifying precision as milliseconds.
If I insert the same data without specifying any precision and query
without specifying any precision, the database is lightning fast. The
Hi Sean,
The data is from 1838 to 2016, daily (sparse at times). We need to retain
it, therefore the default policy.
Thanks,
Tanya
On 13 October 2016 at 06:26, Sean Beckett wrote:
Tanya, what range of time does your data cover? What are the retention
policies on the database?
On Tue, Oct 11, 2016 at 11:14 PM, Tanya Unterberger <
tanya.unterber...@gmail.com> wrote:
Hi Sean,
1. Initially I killed the process
2. At some point I restarted influxdb service
3. Error logs show no errors
4. I rebuilt the server, installed the latest rpm. Reimported the data via
scripts. Data goes in, but the server is unusable. Looks like indexing
might be stuffed. The size of the
On Tue, Oct 11, 2016 at 12:11 AM, wrote:
> Hi,
>
> It seems that the old issue might have surfaced again (#3349) in v1.0.
>
> I tried to insert a large number of records (3913595) via a script,
> inserting 1 row at a time.
>
> After a while I received
>
>