Andrei:
Hi to all,
I have the following 2 examples. How does the OFFSET value, whether small (10) or
large (>50000), affect the performance of the query? I noticed that if I return
more data (columns), or if I add more joins, the query runs even slower as the
OFFSET grows. How can I improve the performance here?
Merlin:
OFFSET is not suitable for traversing large data sets; it's better not to use it
at all! The server still has to scan and discard every skipped row, so the cost
grows with the offset.
There are many ways to deal with this problem, the two most direct being the
view approach and the cursor approach.
cursor approach:
declare report_order cursor with hold for select * from report r, order o [...]
Remember to close the cursor when you're done. Fetch time is now proportional
to the number of rows fetched, and should be very fast. The major drawback of
this approach is that cursors in Postgres (currently) are always insensitive,
so changes made by other users after you declare the cursor are not visible
to you. If that is a problem, try the view approach.
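The cursor idea can be sketched in Python, using the sqlite3 module's DB-API
cursor as a stand-in for Postgres DECLARE/FETCH (the table name and data here
are made up for illustration). The point is that each batch costs only as much
as the rows it returns, with no rows skipped and re-scanned:

```python
# Sketch of cursor-style batched traversal. sqlite3 stands in for
# Postgres "declare ... cursor for ..." / "fetch 1000 from ...".
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table report (report_id integer primary key, payload text)")
conn.executemany("insert into report values (?, ?)",
                 [(i, f"row-{i}") for i in range(1, 10001)])

cur = conn.cursor()  # analogous to: declare report_order cursor for ...
cur.execute("select report_id, payload from report order by report_id")

total = 0
while True:
    batch = cur.fetchmany(1000)  # analogous to: fetch 1000 from report_order
    if not batch:
        break
    total += len(batch)

cur.close()  # remember to close the cursor when you're done
print(total)
```

In a real Postgres session you would issue DECLARE, FETCH, and CLOSE over one
connection inside a transaction (or use WITH HOLD, as above, to keep the cursor
open across commits).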
view approach:
create view report_order as select * from report r, order o [...]
and this:
prepare fetch_from_report_order(numeric, numeric, int4) as
select * from report_order where order_id >= $1 and
(order_id > $1 or report_id > $2)
order by order_id, report_id limit $3;
To fetch the next 1000 records from report_order:
execute fetch_from_report_order(o, f, 1000);
where o and f are the last key values you fetched (pass in zeroes to start it
off).
This is not quite as fast as the cursor approach (though it will be once we get
a proper row constructor, heh), but it is more flexible in that it is sensitive
to changes from other users. The view is a 'permanent' binding, whereas a
cursor is a binding scoped to a particular task.
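The keyset predicate from the prepared statement above can be sketched in
Python with sqlite3 (table and column names mirror the example; the data is
made up). Each page resumes from the last (order_id, report_id) pair rather
than skipping rows with OFFSET, so every row is visited exactly once, in order:

```python
# Sketch of keyset pagination using the same predicate as
# fetch_from_report_order: order_id >= $1 and (order_id > $1 or report_id > $2).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table report_order (order_id integer, report_id integer)")
conn.executemany("insert into report_order values (?, ?)",
                 [(o, r) for o in range(1, 6) for r in range(1, 6)])  # 25 rows

def fetch_page(last_o, last_r, n):
    # pass in zeroes to start it off, as in the original post
    return conn.execute(
        "select order_id, report_id from report_order "
        "where order_id >= ? and (order_id > ? or report_id > ?) "
        "order by order_id, report_id limit ?",
        (last_o, last_o, last_r, n)).fetchall()

rows, last = [], (0, 0)
while True:
    page = fetch_page(last[0], last[1], 7)  # 7-row pages, for illustration
    if not page:
        break
    rows.extend(page)
    last = page[-1]  # resume after the last key pair fetched

print(len(rows))
```

For this to stay fast on a large table, (order_id, report_id) should be covered
by an index so each page is an index range scan rather than a sort.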
Good luck!
Merlin