Jan,

> If a filesystem contains only very few big files (and nothing else) and 
> these files do not grow or shrink during normal operation and are really 
> fully allocated in the block tables, then said filesystem's metadata does 
> not change and that means that the filesystem will never ever be corrupt 
> from the OS's point of view (except due to hardware failure). Plus, an 
> FSCK on a filesystem with very few huge files is fast, really *fast*. So 
> in the case of an OS crash, your system is up in no time again, no 
> matter how big your database is.

I'm not talking about problems with the host filesystem.  I'm talking about 
problems with the data file itself.   From my perspective, the length of time 
it takes to do an FSCK is inconsequential, because I do one maybe once every 
two years.  

It does you little good, though, to have the host OS reporting that the files 
are OK, when the database won't run.

>  From there the DB itself maintains its own metadata and has control 
> with its WAL and other mechanisms over what needs to be redone, undone 
> and turned around to get back into a consistent state.

Yes, but you've just added a significant amount of work to what the DB system 
needs to do in recovery.   PostgreSQL only has to check for, and recover from, 
issues with LSN headers and transactions.   Single-file DBs, like SQL Server, 
also have to check and audit their internal file partitioning.
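
For concreteness, here is a rough sketch (in Python, with purely illustrative 
names like Page and WalRecord rather than anything PostgreSQL or SQL Server 
actually uses) of the core WAL redo rule both kinds of systems rely on: a 
logged change is reapplied only if the page's LSN shows the page was last 
written before that record.  A single-file engine has to layer its own 
allocation-map and file-partitioning checks on top of this; a 
filesystem-backed engine leaves that part to the OS.

# Hypothetical structures -- not PostgreSQL's or SQL Server's real formats.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Page:
    page_id: int
    lsn: int                        # LSN of the last change already on this page
    data: bytes

@dataclass
class WalRecord:
    lsn: int
    page_id: int
    redo: Callable[[bytes], bytes]  # how to reapply the change to the page image

def recover(pages: dict[int, Page], wal: list[WalRecord]) -> None:
    """Replay the log in LSN order; skip changes already reflected on disk."""
    for rec in sorted(wal, key=lambda r: r.lsn):
        page = pages[rec.page_id]
        if rec.lsn > page.lsn:      # page predates this record: redo the change
            page.data = rec.redo(page.data)
            page.lsn = rec.lsn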

In my experience (a lot of MS SQL, more MS Access than I want to talk about, 
and a little Oracle), corruption failures on single-file databases are more 
frequent than on databases which depend on the host OS, and such failures are 
much more severe when they occur.

-- 
-Josh Berkus
 Aglio Database Solutions
 San Francisco

