On Thu, 14 Feb 2008, Michael Lorenz wrote:
When offsetting up to about 90K records, the EXPLAIN ANALYZE is similar to the
following:
Limit (cost=15357.06..15387.77 rows=20 width=35) (actual time=19.235..19.276
rows=20 loops=1)
-> Index Scan using account_objectname on object o
Michael Lorenz [EMAIL PROTECTED] writes:
My query is as follows:
SELECT o.objectid, o.objectname, o.isactive, o.modificationtime
FROM object o
WHERE ( o.deleted = false OR o.deleted IS NULL )
AND o.accountid = 111
ORDER BY 2
LIMIT 20 OFFSET 1;
This is guaranteed to lose.
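The point above is that LIMIT/OFFSET must generate and discard every row before the offset, so each page costs O(offset), while a keyset ("seek") query can jump straight to the right index entry. A minimal runnable sketch of the two forms, using sqlite3 as a stand-in engine (the table and column names follow the query quoted above; the data and the `last_seen` value are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE object (
    objectid INTEGER PRIMARY KEY,
    objectname TEXT,
    isactive INTEGER,
    modificationtime TEXT,
    deleted INTEGER,
    accountid INTEGER)""")
conn.executemany(
    "INSERT INTO object VALUES (?, ?, 1, '2008-02-14', 0, 111)",
    [(i, "obj%06d" % i) for i in range(1000)])
conn.execute(
    "CREATE INDEX account_objectname ON object (accountid, objectname)")

# OFFSET form: the engine walks past the first 900 sorted rows and
# throws them away before returning 20, so cost grows with the offset.
offset_page = conn.execute("""
    SELECT objectname FROM object
    WHERE (deleted = 0 OR deleted IS NULL) AND accountid = 111
    ORDER BY objectname LIMIT 20 OFFSET 900""").fetchall()

# Keyset form: remember the last objectname of the previous page and
# seek directly to it in the index; cost is O(page size) at any depth.
last_seen = "obj000899"   # last key of the previous page (illustrative)
keyset_page = conn.execute("""
    SELECT objectname FROM object
    WHERE (deleted = 0 OR deleted IS NULL) AND accountid = 111
      AND objectname > ?
    ORDER BY objectname LIMIT 20""", (last_seen,)).fetchall()

assert offset_page == keyset_page
```

Both queries return the same page; only the amount of work done inside the engine differs.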
Subject: Re: [PERFORM] Query slows after offset of 100K
Date: Thu, 14 Feb 2008 14:08:15 -0500
From: [EMAIL PROTECTED]
Michael Lorenz writes:
My query is as follows:
SELECT o.objectid, o.objectname, o.isactive, o.modificationtime
FROM object o
WHERE ( o.deleted = false OR o.deleted IS NULL )
AND o.accountid = 111
ORDER BY 2
LIMIT 20 OFFSET 1;
number of records before allowing any paginated access? Or
is it just not practical, period?
Thanks,
Michael Lorenz
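On the question of whether deep paginated access is practical: the keyset ("$lastkey") approach raised later in the thread cannot by itself jump to an arbitrary page, but a commonly used workaround (an assumption here, not something proposed in the thread) is to precompute the boundary key of every page with one ordered scan, then seek to any page via its boundary. A sketch with sqlite3 as a stand-in and made-up names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE object (objectname TEXT, accountid INTEGER)")
conn.executemany("INSERT INTO object VALUES (?, 111)",
                 [("obj%06d" % i,) for i in range(1000)])

PAGE = 20
# One ordered scan collects the first key of every page; refresh this
# periodically -- it only needs to be roughly current for a paging UI.
names = [r[0] for r in conn.execute(
    "SELECT objectname FROM object WHERE accountid = 111 "
    "ORDER BY objectname")]
boundaries = names[::PAGE]          # boundaries[n] = first key of page n

def fetch_page(n):
    # Jump straight to page n with an index-friendly seek on its
    # boundary key instead of OFFSET n * PAGE.
    return [r[0] for r in conn.execute(
        "SELECT objectname FROM object WHERE accountid = 111 "
        "AND objectname >= ? ORDER BY objectname LIMIT ?",
        (boundaries[n], PAGE))]
```

The boundary list is tiny (one key per page) and tolerates staleness: a slightly out-of-date boundary just shifts page edges by a few rows.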
To: [EMAIL PROTECTED]
CC: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Query slows after offset of 100K
Date:
Michael Lorenz [EMAIL PROTECTED] writes:
Fair enough, and I did think of this as well. However, I didn't think this
was a viable option in my case, since we're currently allowing the user to
randomly access the pages (so $lastkey wouldn't really have any meaning).
The user can choose to