"Craig A. James" <[EMAIL PROTECTED]> writes:

> More specifically, this problem was solved on UNIX file systems way back in
> the 1970's and 1980's. No UNIX file system (including Linux) since then has
> had significant fragmentation problems, unless the file system gets close
> to 100% full. If you run below 90% full, fragmentation shouldn't ever be a
> significant performance problem.

Note that the main technique used to avoid fragmentation -- paradoxically --
is to break the file up into reasonably sized chunks. This gives the
filesystem the flexibility to place the chunks efficiently.

In the case of a performance-critical file like the WAL, which is always
read sequentially, it may be to our advantage to defeat this technique and
force the file to be allocated sequentially. I'm not sure whether any
filesystems provide an option to do so.
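
One candidate worth checking is posix_fallocate(): it asks the filesystem
to reserve a file's space up front, and extent-based filesystems typically
answer with one or a few contiguous extents -- though contiguity is a side
effect, not something the API guarantees. A minimal sketch (the file name
and the 16MB size are just placeholders mirroring a default WAL segment):

    #define _XOPEN_SOURCE 600   /* for posix_fallocate */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical name standing in for a WAL segment file;
         * 16MB matches the default WAL segment size. */
        const char *path = "000000010000000000000001";
        off_t len = 16 * 1024 * 1024;

        int fd = open(path, O_CREAT | O_WRONLY, 0600);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* Reserve the whole file in one call.  Extent-based
         * filesystems usually satisfy this with largely contiguous
         * extents, but the standard only promises that the space
         * is allocated, not that it is contiguous. */
        int err = posix_fallocate(fd, 0, len);
        if (err != 0) {
            /* posix_fallocate returns an error number directly
             * rather than setting errno. */
            fprintf(stderr, "posix_fallocate failed: %d\n", err);
            close(fd);
            return 1;
        }

        close(fd);
        return 0;
    }

On Linux, running filefrag on the resulting file would show how many
extents the filesystem actually handed out.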

-- 
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com

