Re: Handling Duplicate Timestamps

2024-05-19 Thread Jialin Qiao
Hi Trevor, Yes, IoTDB cannot handle this scenario currently because our primary key is Path + Timestamp. This year we will focus on the table model; a lot of work to do :-) Jialin Qiao

Re: Handling Duplicate Timestamps

2024-05-19 Thread Trevor Hart
Hi Jialin Yes, the values would be different. As an example, these are from a web server log. The device is openzweb01, an IIS web server that may handle multiple requests at the same time. The rows are unique in their own right, but the timestamp is the same in the logging.

Re: Handling Duplicate Timestamps

2024-05-17 Thread Jialin Qiao
Hi Trevor, Would different rows with the same timestamp carry the same value, or different values?

1. Same:
Time, Value
1, 1
1, 1
1, 1

2. Different:
Time, Value
1, 1
1, 2
1, 1

Jialin Qiao

Re: Handling Duplicate Timestamps

2024-05-13 Thread Trevor Hart
Thank you! I will implement a workaround for now. I would appreciate some consideration for this option in the future. Thanks Trevor Hart Ope Limited w: http://www.ope.nz/ m: +64212728039

Re: Handling Duplicate Timestamps

2024-05-13 Thread Xiangdong Huang
> 1. Checking before insert if the timestamp already exists and remedying on the client before resend
> 2. Moving to nanosecond precision and introducing some insignificant time value to keep timestamp values unique.

Yes, these may be the best solutions for a specific application. Analysis for IoTDB: -
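Option 2 above (moving to nanosecond precision and adding an insignificant offset) can be done entirely on the client before insertion. The following is a minimal sketch, not IoTDB API code; the function name and the 1 ns-per-duplicate increment are assumptions for illustration:

```python
def disambiguate_ns(timestamps_ms):
    """Convert millisecond timestamps to nanoseconds and add a
    per-duplicate offset so every resulting timestamp is unique.

    The original millisecond value stays recoverable by integer
    division, as long as fewer than 1,000,000 rows share one
    millisecond."""
    seen = {}  # millisecond timestamp -> occurrences so far
    result = []
    for ts in timestamps_ms:
        offset = seen.get(ts, 0)
        seen[ts] = offset + 1
        result.append(ts * 1_000_000 + offset)
    return result

# Three log rows share timestamp 1000 ms; after conversion they differ by 1 ns.
print(disambiguate_ns([1000, 1000, 1000, 1001]))
# [1000000000, 1000000001, 1000000002, 1001000000]
```

This keeps insertion order within a millisecond and never collides, at the cost of the server needing to run with nanosecond timestamp precision.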

Re: Handling Duplicate Timestamps

2024-05-13 Thread Trevor Hart
Hello Yuan Correct, the first timestamp and values should be retained. I realise this does not align with the current design. I was just asking whether there is an existing option to block duplicates. In a normal RDBMS, if you try to insert a duplicate, the insert will
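The RDBMS-style behaviour described above (fail the insert instead of silently overwriting) can be approximated on the client side while no server option exists. This is a hypothetical sketch, not part of any IoTDB client: the class and method names are invented, and it only knows about rows written through this one process:

```python
class DuplicateTimestampError(Exception):
    """Raised instead of silently overwriting an existing row."""

class DedupGuard:
    """Client-side guard: remembers every (device, timestamp) pair
    written so far and rejects a second write for the same pair."""

    def __init__(self):
        self._seen = set()

    def check(self, device, timestamp):
        key = (device, timestamp)
        if key in self._seen:
            raise DuplicateTimestampError(
                f"duplicate timestamp {timestamp} for device {device}")
        self._seen.add(key)

guard = DedupGuard()
guard.check("root.sg1.openzweb01", 1715580000000)      # first write: accepted
try:
    guard.check("root.sg1.openzweb01", 1715580000000)  # same pair: rejected
except DuplicateTimestampError as e:
    print("rejected:", e)
```

Calling `check` before each insert gives the "alert rather than overwrite" semantics; it does not protect against duplicates already stored on the server or written by other clients.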

Re: Handling Duplicate Timestamps

2024-05-13 Thread Yuan Tian
Hi Trevor, By "rejects duplicates", you mean you want to keep the first duplicate timestamp and its corresponding values? (Because the following duplicated ones would be rejected.) Best regards, Yuan Tian

Re: Handling Duplicate Timestamps

2024-05-13 Thread Trevor Hart
Correct. I’m not disputing that. What I’m asking is that it would be good to have a configuration that either allows overwrites or rejects duplicates. My scenario is request log data from a server (the device). As it may be processing multiple requests at once

Re: Handling Duplicate Timestamps

2024-05-09 Thread Jialin Qiao
Hi, In IoT or IIoT scenarios, we assume each data point represents a metric at a timestamp. In which case would you need to store duplicated values? Take this as an example:

Time, root.sg1.car1.speed
1, 1
1, 2

Could a car have two different speeds at time 1? Jialin Qiao

Re: Handling Duplicate Timestamps

2024-05-09 Thread Yuan Tian
Hi Trevor, Currently we overwrite the value at a duplicate timestamp with the newer one; there is nothing we can do about it now. Best regards, Yuan Tian

Handling Duplicate Timestamps

2024-05-08 Thread Trevor Hart
Hello I’m aware that when inserting a duplicate timestamp the values will be overwritten. This will obviously result in data loss. Is there a config/setting to reject or throw an error on duplicate inserts? Although highly unlikely, I would prefer to be alerted to the situation rather

Handling Duplicate Timestamps

2024-05-02 Thread Trevor Hart
Hello I’m aware that when inserting a duplicate timestamp the values will be overwritten. This can obviously result in data loss. Is there a config/setting to reject or throw an error on duplicate inserts? Although highly unlikely, I would prefer to be alerted to the situation rather than