[EMAIL PROTECTED] (James Thornton) writes:
> Back in 2001, there was a lengthy thread on the PG Hackers list about
> PG and journaling file systems
> (http://archives.postgresql.org/pgsql-hackers/2001-05/msg00017.php),
> but there was no decisive conclusion regarding what FS to use. At the
> time the fly in the XFS ointment was that deletes were slow, but this
> was improved with XFS 1.1.
> I think a journaling FS is needed for PG data since large DBs could
> take hours to recover on a non-journaling FS, but what about WAL files?

If the WAL files are on a small filesystem, it presumably won't take
hours for that filesystem to recover at fsck time.
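If you do want WAL on its own small filesystem, the usual trick is to
move the pg_xlog directory there and leave a symlink behind.  A minimal
sketch, with hypothetical paths, assuming the postmaster has been
stopped first:

```shell
# Hypothetical paths; stop the postmaster before moving anything.
DATADIR=/var/lib/pgsql/data
WALFS=/mnt/wal              # small, separately mounted filesystem

mv "$DATADIR/pg_xlog" "$WALFS/pg_xlog"
ln -s "$WALFS/pg_xlog" "$DATADIR/pg_xlog"
```

After a restart, WAL traffic lands on the small filesystem, so an fsck
there stays quick no matter how large the data area grows.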

The results have not been totally conclusive...

 - Several have found JFS to be a bit faster than anything else on
   Linux, but there have been some reports of data loss;

 - ext2 has the significant demerit that with big filesystems, fsck
   will "take forever" to run;

 - ext3 appears to be the slowest option out there, and there are some
   stories of filesystem corruption;

 - ReiserFS was designed to be really fast with tiny files, which is
   not the ideal "use case" for PostgreSQL; its designers are also
   definitely the most aggressive at pushing out "bleeding edge" code,
   which likely isn't ideal either;

 - XFS is neither fastest nor slowest, but there has been a lack of
   reports of "spontaneous data loss" under heavy load, which is a
   good thing.  It's not part of "official 2.4" kernels, requiring
   backports, but once 2.6 gets more widely deployed, this shouldn't
   be a demerit anymore...
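For what it's worth, if you do put the data area on XFS, an /etc/fstab
entry along these lines is a commonly suggested starting point (device
name and mount point hypothetical; noatime just avoids access-time
writes on every read):

```
/dev/sdb1  /var/lib/pgsql  xfs  noatime  0 0
```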

I think that provides a reasonable overview of what has been seen...
