> so that one can find more than one row for the same values of (Id,Date)?
Yes and that messes up your idea. Solution is to tidy that table up first
before running any select SQL.
> Average, last one, largest one?
For now I have taken the largest one as in R Smith's SQL. Probably it is
better to
> I had a look at this and tried it without using the extra
auto-increment field, using the table ROWID
> instead to check for consecutiveness. It would work great and a lot
faster indeed if it wasn't for
> the multiple values on the same date. Problem with that is that if
you group by date it
> select distinct T1.Id                  -- only one per Id
> from TABLE1 T1, TABLE1 T2, TABLE1 T3   -- very fast join on indexed rowid
> where T2.Seq = T1.Seq+1 and T3.Seq = T1.Seq+2  -- consecutive triples
> and T1.Id = T2.Id and T3.Id = T1.Id
Thanks, will give that a try as well, but I am happy with the solution from
R Smith.
Only one relatively simple SQL and quite fast. Took about 1.5 secs to
process
5 rows.
RBS
On Thu, Oct 20, 2016 at 4:55 PM, Jean-Luc Hainaut <
jean-luc.hain...@unamur.be> wrote:
What about this one?
create table TABLE1(Seq integer primary key autoincrement,
Id integer, Date date, Value integer);
insert into TABLE1(Id,Date,Value) values(2,'2004-06-23', 42), (...), ...;
select distinct T1.Id   -- only one per Id
from
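Jean-Luc's query is cut off by the archive. For anyone wanting to try the Seq-based approach, here is a runnable sketch with Python's sqlite3. The WHERE clause is reconstructed from the quoted fragment above, and the decreasing-value test plus the third ID-3 row are my own additions, since both are missing from the post:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
create table TABLE1(Seq integer primary key autoincrement,
                    Id integer, Date date, Value integer);
insert into TABLE1(Id, Date, Value) values
  (2,'2004-06-23',42), (2,'2006-12-28',39), (2,'2007-10-09',42),
  (2,'2007-10-24',43), (2,'2009-06-17',45), (2,'2015-09-09',36),
  (3,'2004-05-05',46), (3,'2005-05-24',43), (3,'2006-01-01',37);
""")

# Seq is gapless here, so Seq+1 / Seq+2 address physically consecutive rows;
# the Id equality keeps each triple inside one Id's series.
rows = con.execute("""
select distinct T1.Id                   -- only one per Id
from TABLE1 T1, TABLE1 T2, TABLE1 T3    -- fast join on the integer key
where T2.Seq = T1.Seq + 1 and T3.Seq = T1.Seq + 2  -- consecutive triples
  and T1.Id = T2.Id and T3.Id = T1.Id
  and T2.Value < T1.Value and T3.Value < T2.Value  -- strictly decreasing
""").fetchall()
print(rows)  # [(3,)] - only ID 3 has three decreasing readings in a row
```

Note this relies on rows being inserted in date order per Id; if they are not, Seq consecutiveness no longer means "next reading".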
> and do NOT reverse comparison operators in the final WHERE clause.
Yes, thanks, that was it.
RBS
On Thu, Oct 20, 2016 at 4:40 AM, Cezary H. Noweta
wrote:
Hello,
On 2016-10-20 01:58, Bart Smissaert wrote:
I worked it round to get 3 consecutive drops:
[...]
JOIN C1 AS B ON B.ID = A.ID AND B.DT = (SELECT MAX(X.DT) FROM C1 AS X
WHERE X.ID = A.ID AND X.DT > A.DT)
JOIN C1 AS C ON C.ID = A.ID AND C.DT = (SELECT MAX(X.DT) FROM
> It has three consecutive drops, no? E.g. 47 > 46 > 37
That is 3 values, but only 2 drops.
RBS
On Thu, Oct 20, 2016 at 1:07 AM, Petite Abeille
wrote:
> On Oct 20, 2016, at 1:58 AM, Bart Smissaert wrote:
>
> I worked it round to get 3 consecutive drops:
>
> but I must have done something wrong as it picked 62 up from this:
It has three consecutive drops, no? E.g. 47 > 46 > 37
Example:
with
DataSet
as
(
select
Thanks, will try that as well.
RBS
On Thu, Oct 20, 2016 at 12:16 AM, Petite Abeille
wrote:
I worked it round to get 3 consecutive drops:
WITH C1(ID, DT, V) AS (
SELECT ID, ISO8601_DATE, MIN(INT_VALUE) FROM TABLE1
GROUP BY ID, ISO8601_DATE
)
SELECT DISTINCT A.ID
FROM C1 AS A
JOIN C1 AS B ON B.ID = A.ID AND B.DT = (SELECT MAX(X.DT) FROM C1 AS X
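The CTE query above is truncated. A runnable reconstruction follows, with two caveats: the sample rows are made up (ID 3's third reading in particular), and I use MIN(X.DT) with X.DT > A.DT as the "next reading" lookup, which is an assumption on my part rather than the poster's exact text:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
create table TABLE1(ID integer, ISO8601_DATE text, INT_VALUE integer);
insert into TABLE1 values
 (2,'2004-06-23',42),(2,'2006-12-28',39),(2,'2007-10-09',42),
 (2,'2007-10-24',43),(2,'2009-06-17',45),(2,'2015-09-09',36),
 (3,'2004-05-05',46),(3,'2005-05-24',43),(3,'2006-01-01',37);
""")

# C1 collapses duplicate (ID, date) rows to one value (MIN, as in the quoted
# query); B and C then walk to the next two readings by date.
ids = [r[0] for r in con.execute("""
WITH C1(ID, DT, V) AS (
  SELECT ID, ISO8601_DATE, MIN(INT_VALUE) FROM TABLE1
  GROUP BY ID, ISO8601_DATE
)
SELECT DISTINCT A.ID
FROM C1 AS A
JOIN C1 AS B ON B.ID = A.ID AND B.DT =
  (SELECT MIN(X.DT) FROM C1 AS X WHERE X.ID = A.ID AND X.DT > A.DT)
JOIN C1 AS C ON C.ID = A.ID AND C.DT =
  (SELECT MIN(X.DT) FROM C1 AS X WHERE X.ID = A.ID AND X.DT > B.DT)
WHERE B.V < A.V AND C.V < B.V   -- three strictly decreasing readings
""")]
print(ids)  # [3]
```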
> On Oct 20, 2016, at 12:23 AM, Bart Smissaert wrote:
>
> Ah, yes, sorry, I needed actually drops and was looking at that.
Just for fun, here is one using recursion:
with
DataSet
as
(
select 1 as id, 2 as key, '2004-06-23' as date, 42 as value union all
select 2
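The recursive version is also truncated by the archive. One way a recursive CTE can attack this (my own sketch, not necessarily Petite Abeille's original, and ignoring the duplicate-date wrinkle discussed elsewhere in the thread): start a chain at every reading, extend it with each Id's next-dated reading only while the value keeps dropping, then keep Ids whose chain reaches three readings:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
create table TABLE1(ID integer, ISO8601_DATE text, INT_VALUE integer);
insert into TABLE1 values
 (2,'2004-06-23',42),(2,'2006-12-28',39),(2,'2007-10-09',42),
 (2,'2007-10-24',43),(2,'2009-06-17',45),(2,'2015-09-09',36),
 (3,'2004-05-05',46),(3,'2005-05-24',43),(3,'2006-01-01',37);
""")

ids = [r[0] for r in con.execute("""
WITH RECURSIVE runs(id, dt, v, len) AS (
  -- every reading starts a chain of length 1
  SELECT ID, ISO8601_DATE, INT_VALUE, 1 FROM TABLE1
  UNION ALL
  -- extend a chain with the next-dated reading, but only if it drops
  SELECT t.ID, t.ISO8601_DATE, t.INT_VALUE, r.len + 1
  FROM runs r
  JOIN TABLE1 t ON t.ID = r.id
    AND t.ISO8601_DATE = (SELECT MIN(ISO8601_DATE) FROM TABLE1
                          WHERE ID = r.id AND ISO8601_DATE > r.dt)
    AND t.INT_VALUE < r.v
)
SELECT DISTINCT id FROM runs WHERE len >= 3
""")]
print(ids)  # [3]
```

Whether `len >= 3` (three decreasing readings, i.e. two drops) or `len >= 4` is wanted depends on how you count "three consecutive drops", which is exactly the point debated above.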
Ah, yes, sorry, I needed actually drops and was looking at that.
Will work it round.
RBS
On Wed, Oct 19, 2016 at 11:06 PM, Petite Abeille
wrote:
> On Oct 19, 2016, at 11:40 PM, Bart Smissaert wrote:
>
> ID 2 for example shouldn't be selected.
Hmmm? ID 2 has two 'raises' as per your definition: 39 < 42 < 43 and 42 < 43 < 45
2 2004-06-23 42
2 2006-12-28 39
2 2007-10-09 42
2 2007-10-24 43
2 2009-06-17 45
2
Looks very nice and a lot faster, but I get more ID's than I should.
With this data for example:
ID ISO8601_DATE INT_VALUE
2 2004-06-23 42
2 2006-12-28 39
2 2007-10-09 42
2 2007-10-24 43
2 2009-06-17 45
2 2015-09-09 36
3 2004-05-05 46
3 2005-05-24 43
3
On 2016/10/19 4:53 PM, Bart Smissaert wrote:
Have a table like this:
create table TABLE1([ID] INTEGER, [ISO8601_DATE] TEXT, [INT_VALUE] INTEGER)
with data like this:
ID  ISO8601_DATE  INT_VALUE
1 2016-01-01 10
1
Thanks for that, trying this out now. Must have done something wrong as for
now I get too many ID's. My actual fields are somewhat different. It is
indeed slow, something like 150 secs to process a table of 10 rows.
This is with select distinct ID etc.
To do this in code will be indeed a lot
Write a procedural program... ;)
In SQL it requires a triple self join similar to:
Select t1.id from table1 t1, table1 t2, table1 t3
 where t2.id = t1.id and t1.value < t2.value
   and t2.date = (select min(date) from table1
                  where id = t1.id and date > t1.date)
   and t3.id = t2.id and t2.value < t3.value
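Since the message says "similar to", here is that triple self-join made runnable, with the obvious typos fixed (t1.d to t1.id, t.id to t1.id) and a symmetric date test added for t3, which is my completion rather than the original; the sample rows are from earlier in the thread, plus one invented ID-3 reading:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
create table table1(id integer, date text, value integer);
insert into table1 values
 (2,'2004-06-23',42),(2,'2006-12-28',39),(2,'2007-10-09',42),
 (2,'2007-10-24',43),(2,'2009-06-17',45),(2,'2015-09-09',36),
 (3,'2004-05-05',46),(3,'2005-05-24',43),(3,'2006-01-01',37);
""")

# Three rising readings in a row: t2 is the reading right after t1,
# t3 the one right after t2, and the value climbs at each step.
ids = [r[0] for r in con.execute("""
select distinct t1.id
from table1 t1, table1 t2, table1 t3
where t2.id = t1.id and t1.value < t2.value
  and t2.date = (select min(date) from table1
                 where id = t1.id and date > t1.date)
  and t3.id = t2.id and t2.value < t3.value
  and t3.date = (select min(date) from table1
                 where id = t2.id and date > t2.date)
""")]
print(ids)  # [2] - e.g. 39 < 42 < 43 for ID 2
```

Flipping both `<` to `>` turns this into the "three consecutive drops" variant.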