On 2011-05-11 01:54, Greg Stark wrote:
> To be fair, about 3/4 of them were actually complaining about the lack
> of some global materialized cache of the aggregate value. Covering
> index-only scans are only going to be a linear speedup; no matter how
> large the factor, it's not going to turn select count(*) into an O(1)
Actually, if we could get count(*) into the situation of a very
"thin row" today, so that the cost of visibility testing no longer
depended hugely on the width of the row, then we would be halfway
there in terms of performance optimization.

If rows typically were just tuple headers plus a bit more, then it
would be much harder to go down this road and claim good
benefits. But currently the system needs to drag in "almost"
one page per visibility test from disk on "random large tables".
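To put rough numbers on that, here is a back-of-the-envelope sketch. The row widths and row count are illustrative assumptions, not measurements; only the 8 kB page size comes from PostgreSQL's defaults:

```python
# Assumed numbers: PostgreSQL heap pages are 8 kB by default.
# Row widths (32 B "thin", 1 kB "wide") are illustrative assumptions.
PAGE_SIZE = 8192

def pages_touched(n_rows, row_width):
    """Upper bound on distinct pages a scan must read: with rows
    spread randomly, each visibility test can land on a new page,
    bounded by the number of pages the table occupies."""
    rows_per_page = max(1, PAGE_SIZE // row_width)
    return -(-n_rows // rows_per_page)  # ceiling division

n = 10_000_000
thin = pages_touched(n, 32)    # header-sized "thin" rows
wide = pages_touched(n, 1024)  # 1 kB "wide" rows
print(thin, wide)  # the wide table needs ~32x as many page reads
```

With random I/O the page count, not the row count, dominates the cost, which is why the same number of visibility tests is so much cheaper on thin rows.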

I tried to graph the difference between thin and wide rows here:
http://shrek.krogh.cc/~jesper/visibillitytesting.pdf

Getting visibility testing down to an O(n) pass over the visibility map
would, on large tables, shift the work from being disk-based (as now)
to being memory-based.
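For scale, a minimal sketch of why the visibility map can stay memory-resident, assuming one bit per 8 kB heap page (the sizes here are back-of-the-envelope, not measured):

```python
# Assumption: the visibility map tracks one bit per 8 kB heap page,
# so one byte of map covers 64 kB of heap.
PAGE_SIZE = 8192

def vm_size_bytes(table_bytes):
    """Visibility map size for a heap of the given size:
    one bit per heap page, rounded up to whole bytes."""
    heap_pages = table_bytes // PAGE_SIZE
    return -(-heap_pages // 8)  # ceiling: 8 page-bits per byte

one_tb = 1 << 40
print(vm_size_bytes(one_tb))  # a 1 TB heap needs only a ~16 MB map
```

So even a terabyte-scale table has a map small enough to sit in RAM, which is what turns the per-row disk hit into a memory lookup.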

Jesper (It's not a goal in itself, but it would most likely postpone
some people's need to buy a FusionIO card or similar.)
