On Friday 08 September 2006 07:08, Ronan KERYELL wrote:

> First I would say it is possible to mkfs the disk before each new usage to
> have clean data structures with less overhead (no fragmentation...).
Not really necessary; on any modern filesystem (and a few very old ones),
emptying the filesystem will clear any fragmentation that might have
appeared.

> Secondly you could choose a file system optimized for big files and
> write-ahead only. It's possible to change the parameters of the FS to push
> this behaviour even further (how many cylinders? block size? no logging on
> the data, no block reserve for fast allocation...).

Well, there's no such thing as write-ahead (the kernel will guess the data
you will write? :o), but as for big files, the best thing you can do at the
FS layer is to use a large block size and no data journaling. Setting
reserved blocks to zero is a good idea, as is using O_DIRECT (as discussed
elsewhere).

> Third, what about bad blocks on disk? How to skip them in a raw partition
> if you do not have state-of-the-art disks that do block remapping for you
> in your back-yard (such as SCSI)? Often FS do these tricks for you on
> IDE disks for example.

Irrelevant. All modern drives (IDE included) since the MFM era have done
automatic internal remapping of bad blocks.

> Well, IMHO, I would vote for a FS solution except if I have a real
> gain... :-)

As would I.

-- 
Forums for Amanda discussion: http://forums.zmanda.com/
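As a footnote, the tuning knobs mentioned in this thread (large block size,
no reserved blocks, metadata-only journaling, O_DIRECT writes) could look
something like the following sketch for ext3. The device name and mount
point are placeholders, and the exact options depend on your mkfs/tune2fs
versions; adapt before use.

```shell
# Hypothetical holding-disk partition; substitute your real device.
DEV=/dev/sdX1

# Large block size (4 KiB is the usual maximum on x86) and no reserved
# blocks for root (-m 0), since this partition only stores backup images.
mkfs.ext3 -b 4096 -m 0 "$DEV"

# The reserved-block percentage can also be cleared later on an existing
# filesystem:
tune2fs -m 0 "$DEV"

# Mount with metadata-only journaling (no data journaling) and without
# atime updates, both of which cut write overhead for large files:
mount -o data=writeback,noatime "$DEV" /amanda/holding

# O_DIRECT from the shell: GNU dd can bypass the page cache when writing
# a large image, via oflag=direct:
dd if=backup.img of=/amanda/holding/backup.img bs=1M oflag=direct
```

These commands need root and a scratch device, so treat them as a recipe
rather than something to paste in as-is.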
