Tom Lane wrote:
> Gene <[EMAIL PROTECTED]> writes:
>> I have a table that inserts lots of rows (million+ per day) int8 as primary
>> key, and I cluster by a timestamp which is approximately the timestamp of
>> the insert...
> ISTM you should hardly need to worry about clustering that --- the data
> will be in timestamp order pretty naturally.
In my case my biggest/slowest tables are clustered by zip code (which
also does a reasonable job of keeping counties, cities, etc. on the
same pages). Data comes in constantly (many records per minute as
we ramp up), fairly uniformly across the country, but most queries
are geographically bounded. The data is pretty much insert-only.
If I understand Heikki's patch, it would help for this use case.
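For reference, a minimal sketch of that kind of setup (table and column names are hypothetical, not from the thread):

```sql
-- Hypothetical schema: cluster the table on its zip-code index so that
-- geographically bounded queries touch fewer heap pages.
CREATE TABLE listings (
    id      bigserial   PRIMARY KEY,
    zip     char(5)     NOT NULL,
    created timestamptz NOT NULL DEFAULT now(),
    payload text
);
CREATE INDEX listings_zip_idx ON listings (zip);

-- One-time physical reorder.  New inserts do NOT maintain this order,
-- which is the gap Heikki's patch is aimed at.
CLUSTER listings USING listings_zip_idx;
```

Without the patch, keeping the order requires re-running CLUSTER periodically, which takes an exclusive lock on the table.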
> Your best bet might be to partition the table into two subtables, one
> with "stable" data and one with the fresh data, and transfer rows from
> one to the other once they get stable. Storage density in the "fresh"
> part would be poor, but it should be small enough you don't care.
Hmm... that should work well for me too. I'm not sure the use case
I mentioned above is still as compelling, since this approach would
give me much of the benefit, and I wouldn't need an excessive
fillfactor on the stable part of the table.
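A rough sketch of the two-table scheme Tom describes (all names, the view, and the one-day cutoff are assumptions for illustration; the thread gives no schema):

```sql
-- "Stable" half: densely packed, rarely updated.
CREATE TABLE events_stable (
    id      bigint      PRIMARY KEY,
    created timestamptz NOT NULL,
    payload text
) WITH (fillfactor = 100);

-- "Fresh" half: small, takes all new inserts; storage density
-- here matters little because the table stays small.
CREATE TABLE events_fresh (LIKE events_stable INCLUDING ALL);

-- Queries can see both halves through one view.
CREATE VIEW events AS
    SELECT * FROM events_stable
    UNION ALL
    SELECT * FROM events_fresh;

-- Periodically move rows old enough to be considered stable.
-- The writable CTE (PostgreSQL 9.1+) makes the move atomic.
WITH moved AS (
    DELETE FROM events_fresh
    WHERE created < now() - interval '1 day'
    RETURNING *
)
INSERT INTO events_stable SELECT * FROM moved;
```

The stable half can then be CLUSTERed once and stays in order, since rows only arrive there in batches via the transfer step.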