On 01/06/2010 21:54, Tim Nufire wrote:
> Steve,
>
> Thanks for the data point :-)
>
> I'm much more concerned with why my journal is failing with rc=-231 than I am 
> with why the full fsck takes a long time... I'm running on very inexpensive 
> hardware and don't get anywhere near the disk performance Steve and Sandon 
> get. I'm lucky to get 90 MB/sec sequential reads/writes. This is by design 
> since our application is very cost sensitive and higher performance is not 
> necessary. If you're curious about our hardware setup, check out the following 
> blog post:
>
>    

No problem. Yes, I was getting much more than 90MB/s here, more along the 
lines of 400MiB/s+. I never hit the rc=-231 error myself, so I can't comment on that.
>> I don't have it any more as I ran into a problem with JFS at 32TiB where it 
>> would not check/mount/handle 32TiB+ volumes so had to switch over to XFS.
>>      
>
> Did JFS completely fail at 32 TiB or were you seeing journal problems like 
> I'm seeing? Have you been happy with XFS?
>    

You could create the 32TiB array (mkfs.jfs would complete successfully), 
but you were not able to mount it. Likewise, if you had a smaller 
partition (<32TiB) and then attempted to grow it beyond 32TiB, the grow 
would fail. And if you tried to fsck the volume (whether freshly created 
or after the attempted grow), that would fail as well.
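
For concreteness, the failure modes above can be reproduced with something 
like the following. This is only a minimal sketch; /dev/md0 and /mnt/array 
are placeholder names, and the remount/resize step is the standard way a 
mounted JFS volume is grown:

   # creating a >32TiB JFS volume succeeds
   $ mkfs.jfs -q /dev/md0

   # mounting the freshly created >32TiB volume fails
   $ mount -t jfs /dev/md0 /mnt/array

   # growing a smaller (<32TiB) volume past 32TiB also fails;
   # after enlarging the underlying device, JFS is resized online
   # by remounting with the resize option
   $ mount -o remount,resize /mnt/array

   # a forced check of the volume fails as well
   $ jfs_fsck -f /dev/md0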

XFS is 'ok'; I only went with it because it was the only FS shown to be 
able to handle >32TiB file systems. I plan to really test it when I reach 
the next power of 2 (64TiB or 128TiB), since that seems to be where many 
of the breaking points are in the various FS code bases (ext3 at 8TiB, et 
al.). From testing, it does not seem to be as recoverable as JFS or ext3 
were after untoward events. I've lost file systems more often with it 
(power failures; an xfs_fsr problem causing kernel hangs that left the 
file system in a non-recoverable state; et al.). It also still requires 
much more memory than JFS does for file system checks. The rule of thumb 
of 1GiB per TiB plus 1GiB per 1,000,000 inodes is not too far off.
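
As a quick back-of-the-envelope illustration of that rule of thumb (the 
volume size and inode count below are hypothetical, purely to show the 
arithmetic):

   # estimated RAM for a full check of a hypothetical 32TiB XFS
   # volume holding 20 million inodes:
   #   1GiB per TiB        -> 32GiB
   #   1GiB per 1M inodes  -> 20GiB
   $ tib=32; inode_millions=20
   $ echo "$(( tib + inode_millions ))GiB of RAM, give or take"
   52GiB of RAM, give or take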

I'm honestly really looking forward to BTRFS, so hopefully XFS will be OK 
for the next 5 years or so (we expect to grow to around 256TiB by then), 
which would give BTRFS a minimum of about 2 years of seasoning. I don't 
really like file systems that young, but with large arrays we may not 
have a choice, given the data integrity issues.


On 01/07/2010 09:38, Johannes Hirte wrote:
> You need a lot of RAM to check and repair such big XFS volumes. 2TB was
> already too much to check on a 32bit machine.
>    

Everything here is 64-bit, and memory was not the issue. It seems the 
various tools were simply never tested with such large volumes.
