I am using PostgreSQL to log data in my application. Rows are added 
periodically, but there are no updates or deletes. Several applications 
log to different databases on the same server.

This causes terrible disk fragmentation, which in turn degrades performance 
when retrieving data from the databases. The table files accumulate more 
than 50,000 fragments over time (maximum table size is about 1 GB).

The problem seems to be that PostgreSQL grows the table files by only the 
room needed for the new data each time it is added. Because several 
applications are adding data to different databases concurrently, the 
extensions are never contiguous.

I think that preallocating lumps of a given, configurable size, say 4 MB, 
for the tables would remove this problem. The maximum number of fragments 
in a 1 GB file would then be 250, which is no problem. Is it possible to 
configure this in PostgreSQL? If not, how difficult would it be to 
implement in the database?
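To make the idea concrete, here is a minimal sketch in C of the allocation 
behaviour I have in mind. It is not PostgreSQL code; the chunk size, file 
name, and append_record helper are all just illustrative. The point is that 
the file is grown to the next 4 MB boundary with posix_fallocate() before 
data is appended, so the filesystem can hand out one large contiguous 
extent instead of a few blocks per insert.

#define _POSIX_C_SOURCE 200112L  /* for posix_fallocate() and pwrite() */

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define CHUNK ((off_t)(4 * 1024 * 1024))  /* proposed preallocation size */

static off_t allocated   = 0;  /* space reserved in the file so far */
static off_t logical_end = 0;  /* bytes of real data written so far */

static void append_record(int fd, const void *buf, size_t len)
{
    /* Grow the allocation in whole 4 MB chunks when the data outruns it. */
    if (logical_end + (off_t) len > allocated) {
        allocated = ((logical_end + (off_t) len + CHUNK - 1) / CHUNK) * CHUNK;
        int err = posix_fallocate(fd, 0, allocated);
        if (err != 0) {
            fprintf(stderr, "posix_fallocate: %s\n", strerror(err));
            exit(EXIT_FAILURE);
        }
    }
    /* Write at the logical end, inside the already-allocated region. */
    if (pwrite(fd, buf, len, logical_end) != (ssize_t) len) {
        perror("pwrite");
        exit(EXIT_FAILURE);
    }
    logical_end += (off_t) len;
}

int main(void)
{
    int fd = open("tablefile.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return EXIT_FAILURE;
    }

    char row[8192];                 /* one 8 kB "page" per insert */
    memset(row, 'x', sizeof row);
    for (int i = 0; i < 1000; i++)  /* simulate periodic logging */
        append_record(fd, row, sizeof row);

    close(fd);
    return EXIT_SUCCESS;
}

The database would of course still have to track the logical end of the 
data separately from the allocated file size, as the sketch does with 
logical_end.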

Thank you,
JG
