On 6/28/05, John A Meinel <[EMAIL PROTECTED]> wrote:
> Actually, if you already have a lower bound, then you can change it to:
> SELECT t1.id+1 as id_new FROM id_test t1
> WHERE t1.id > id_min
> AND NOT EXISTS
> (SELECT t2.id FROM id_test t2 WHERE t2.id = t1.id+1)
> ORDER BY t1.id LIMIT 1;
> This would actually really help performance if you have a large table
> and then empty entries start late.
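To see the quoted NOT EXISTS query in action, here is a minimal, self-contained sketch using Python's sqlite3 module (the original targets PostgreSQL; the sample data, the gaps at 4 and 7, and the lower bound of 0 are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE id_test (id INTEGER PRIMARY KEY)")
# Populate ids 1..10 but leave 4 and 7 out to create gaps.
conn.executemany("INSERT INTO id_test (id) VALUES (?)",
                 [(i,) for i in range(1, 11) if i not in (4, 7)])

id_min = 0  # known lower bound on existing ids
row = conn.execute("""
    SELECT t1.id + 1 AS id_new FROM id_test t1
    WHERE t1.id > ?
      AND NOT EXISTS
          (SELECT t2.id FROM id_test t2 WHERE t2.id = t1.id + 1)
    ORDER BY t1.id LIMIT 1
""", (id_min,)).fetchone()
print(row[0])  # -> 4, the first free id above id_min
```

The ORDER BY is what guarantees you get the lowest gap rather than an arbitrary one.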
You can also boost performance by creating a functional index!
CREATE UNIQUE INDEX id_test_id1_index ON id_test ((id+1));
...and then joining the table to itself and filtering the result. PostgreSQL
(8.x) will do a Merge Full Join that uses both indexes:
SELECT t2.id+1 FROM id_test t1 FULL OUTER JOIN id_test t2 ON (t1.id =
t2.id+1) WHERE t1.id IS NULL LIMIT 1;
Limit  (cost=0.00..1.52 rows=1 width=4)
  ->  Merge Full Join  (cost=0.00..1523122.73 rows=999974 width=4)
        Merge Cond: ("outer".id = ("inner".id + 1))
        Filter: ("outer".id IS NULL)
        ->  Index Scan using id_test_pkey on id_test t1
              (cost=0.00..18455.71 rows=999974 width=4)
        ->  Index Scan using id_test_id1_index on id_test t2
              (cost=0.00..1482167.60 rows=999974 width=4)
...the only drawback is having to maintain two indexes instead of just one,
but for large tables I think it is well worth it.
For my test case (a 1-1000000 range with 26 ids missing), the times are:
NOT EXISTS          --  670 ms
NOT IN              -- 1800 ms
indexed FULL OUTER  --  267 ms
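The indexed FULL OUTER approach can be exercised the same way. Since the WHERE t1.id IS NULL filter keeps only the t2 rows with no match on the t1 side, an equivalent anti-join can be written as a LEFT JOIN; the sketch below uses that form so it also runs on SQLite versions without FULL OUTER JOIN support, and adds an ORDER BY (absent in the original query) to make the result deterministic. The sample data is made up; SQLite accepts the same expression index as PostgreSQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE id_test (id INTEGER PRIMARY KEY)")
# Populate ids 1..10 with 5 missing.
conn.executemany("INSERT INTO id_test (id) VALUES (?)",
                 [(i,) for i in range(1, 11) if i != 5])

# Same functional index as in the post (SQLite supports expression indexes).
conn.execute("CREATE UNIQUE INDEX id_test_id1_index ON id_test ((id + 1))")

# t2 LEFT JOIN t1 ... WHERE t1.id IS NULL keeps exactly the rows that the
# FULL OUTER JOIN version keeps with its t1.id IS NULL filter.
row = conn.execute("""
    SELECT t2.id + 1 FROM id_test t2
    LEFT JOIN id_test t1 ON t1.id = t2.id + 1
    WHERE t1.id IS NULL
    ORDER BY t2.id LIMIT 1
""").fetchone()
print(row[0])  # -> 5, the lowest gap
```

Whether the planner actually uses the (id + 1) index for the join is up to the database; in PostgreSQL you can confirm it with EXPLAIN as shown above.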
PS: Does it qualify for General Bits? ;-)))