I have a web page for my customers that shows them the count of records and some min/max date ranges in each table of a database, as this is how we bill them for service. They can log in and check the counts at any time. I'd like the counts to be as fresh as possible, so I'd prefer to keep this dynamic, but I will fall back on a periodic 'snapshot'/cron job if that is the only way to speed this up. I have thought about using the table statistics, but the estimation error is probably unacceptable for billing purposes.
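For reference, the table statistics I was considering are the planner's row estimates in pg_class, something like this ('mytable' is just a placeholder for one of our billed tables):

    SELECT reltuples::bigint AS estimated_rows
      FROM pg_class
     WHERE relname = 'mytable';
    -- reltuples is only updated by VACUUM/ANALYZE, and it's still an
    -- estimate, which is why I'm wary of it for billing.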

For some reason, the SQL Server box we migrated the app from can return count(*) in a split second on multi-million-row tables, even though it is a MUCH slower box hardware-wise, while the same count now takes many seconds on Postgres. I have read in the archives about the problems MVCC brings to the count(*) dilemma, forcing Pg to run a seq scan to get an exact count. Does SQL Server not use MVCC, or have they found another approach to arriving at this number? Adding up all the min/max and count queries across the other tables, the whole set takes about a minute to run. The tables will contain anywhere from 1 million to 40 million rows.
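If the cron-job snapshot turns out to be the only option, I'm picturing a small summary table that a scheduled job refreshes, roughly like this (table and column names are just placeholders):

    CREATE TABLE billing_counts (
        table_name  text PRIMARY KEY,
        row_count   bigint,
        snapshot_at timestamp
    );

    -- run from cron every N minutes, once per billed table:
    UPDATE billing_counts
       SET row_count   = (SELECT count(*) FROM mytable),
           snapshot_at = now()
     WHERE table_name = 'mytable';

The customer page would then read billing_counts instead of scanning the big tables, at the cost of the numbers being up to N minutes stale.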

Also, I am using "select ... group by ... order by ... limit 1" to get the min/max, since I have already been bitten by the issue of min()/max() being slower.
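Concretely, the pattern looks something like this (the column name is made up, but the shape is what I'm running):

    -- in place of SELECT max(created) FROM mytable:
    SELECT created
      FROM mytable
     ORDER BY created DESC
     LIMIT 1;

which, as I understand it, lets the planner use the index on the column instead of scanning the whole table, unlike min()/max() on our version.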


-Dan


