I am running PostgreSQL 7.4 on FreeBSD. The main table has 2 million
records (we would like to handle at least 10 million or more). It is mainly a
FIFO structure, with maybe 200,000 new records coming in each day that
displace the older records.
We have a GUI that lets users browse through the records page by page, at
about 25 records at a time. (Don't ask me why, but we have to have this
GUI.) This translates to something like:
select count(*) from table <-- to give feedback about the DB size
select * from table order by date limit 25 offset 0
The tables seem properly indexed, and VACUUM and ANALYZE are run regularly. Still, these very basic SQL statements take up to a minute to run.
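To double-check that the index is actually being used, I have been looking at the plan with EXPLAIN ANALYZE (assuming the index is on the date column), something like:
explain analyze
select * from table order by date limit 25 offset 0;
-- an index scan on the date index in the output would confirm the index is used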
I read some recent messages saying that select count(*) needs a full table
scan in PostgreSQL. That's disappointing, but I can accept an
approximation if there is some way to get one.
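For instance, would reading the planner's row estimate be close enough? Something like this (assuming the statistics are fresh from the last VACUUM/ANALYZE):
select reltuples from pg_class where relname = 'table';
-- reltuples is only an estimate, refreshed by vacuum/analyze,
-- but it avoids the full table scan that count(*) needs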
But how can I optimize select * from table order by date limit x offset y? A one-minute response
time is not acceptable.
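The only idea I have had so far is to remember the last date shown on the current page and page from there instead of using offset, something like this (assuming date is unique enough to order by; the date literal is just a placeholder):
select * from table
where date > '2005-10-26 00:00:00'  -- date of the last record on the previous page
order by date
limit 25;
Would that let the index do the work instead of scanning and discarding the first y rows?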
Any help would be appreciated.
Wy