|-----Original Message-----
|I understand you have to be careful with xfs. When there are 
|large number of files being used, xfs caches a lot of the 
|data. I read somewhere, that xfs under heavy load will lock up 
|sometimes and use all your memory up.
|
Maybe that was true years ago with immature releases before 2.4.19.
That is _NOT_ true now. XFS is probably the most stressed filesystem
on the planet; most Hollywood studio backbones and Linux workstations run XFS.
Its only drawback is deletion, which is slow by design: it has to
traverse the inodes to figure out which blocks to free, rather than
following a direct hashed pointer table.

If you turn off internal logging and set the buffers straight, it performs
very well.
(I've commented on this earlier)
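As a rough illustration of what I mean (device names, sizes and mount points
below are just placeholders, adjust for your own setup): move the journal off
the data disk with an external log device, and bump the in-memory log buffers
at mount time.

```shell
# Create the filesystem with its journal on a separate device
# instead of the internal (in-data-disk) log.
# /dev/sdb1 = data device, /dev/sdc1 = log device -- placeholders.
mkfs.xfs -l logdev=/dev/sdc1,size=64m /dev/sdb1

# Mount with the external log and larger/more in-memory log buffers,
# which cuts down on journal I/O under heavy metadata load.
mount -t xfs -o logdev=/dev/sdc1,logbufs=8,logbsize=256k /dev/sdb1 /data
```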

The extensive caching done by XFS makes it excellent running under VMware,
because it minimizes I/O ops by writing large chunks. So it is a goodie for
low-end systems as well.

I've used the xfs_repair, xfsdump, xfsrestore and xfs_db tools extensively.
I've had lots of bad blocks, zeroed inodes and other disk failures, and nearly
every time I've managed to restore most of the data on the disk. Once I had to
use an old Indy IRIX box to mount the disk (luckily it was SCSI), but that
was with very early 2.4.x versions.
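For anyone who hasn't used those tools, a typical recovery session looks
something like this (device and file paths are placeholders; the filesystem
must be unmounted before repair):

```shell
# Dry run first: report what is wrong without modifying anything
xfs_repair -n /dev/sdb1

# Actual repair pass on the unmounted filesystem
xfs_repair /dev/sdb1

# Level-0 (full) dump of the mounted filesystem to a file,
# then restore it into another directory
xfsdump -l 0 -f /backup/data.dump /data
xfsrestore -f /backup/data.dump /mnt/restore
```

xfs_db is the low-level debugger for poking at superblocks and inodes by hand;
I'd only reach for it after xfs_repair has given up.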

--
MortenB

