On 01/05/2010 09:43 AM, Tim Nufire wrote:
> Hello,
>
> I'm running a multi-petabyte data farm built on top of 18TB RAID6
> volumes formatted with JFS. As you would expect, individual servers
> periodically fail due to hardware or power problems and while we have
> never lost data due to one of these events, our JFS filesystems keep
> failing, resulting in 8+ hour fsck runs. In all cases I get error
> rc=-231 but can't find any references online that would explain the problem
> and/or provide a workaround. Is this a known issue? Is there anything
> I can do to fix it? I found one other message on this mailing list
> (Subject: Fsck on every unclean shutdown; From: Sandon Van Ness)
> reporting a similar problem but I haven't seen a response yet. Sorry
> if this is a duplicate.
>
> I'm running Debian 4.0 'EtchnHalf' with a backported 2.6.26 kernel
> using software RAID6 (md) with 15 1.5 TB drives in each array. I've
> upgraded to the latest version of the jfs utilities (v1.1.14) which I
> compiled from source.
>

Wow, someone running into basically the exact same issue as me, and
with the same volume size. The 8 hour fsck amazes me though; how many
inodes are you running on that thing? My fsck takes only ~10 minutes
with 6 million inodes (I think the inode count affects it the most). I
am also hoping for a fix for this as it can be quite annoying!
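In case it helps with comparing numbers: df -i on the mounted volume
shows the used inode count. A minimal check, with placeholder output
(mount point and figures here are made up, yours will differ):

    $ df -i /mnt/data
    Filesystem       Inodes   IUsed     IFree IUse% Mounted on
    /dev/md0      100000000 6000000  94000000    7% /mnt/data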

PS: multi-petabyte data farm? JFS? software RAID? Backblaze?
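For anyone else trying to reproduce this, the setup described above
boils down to roughly the following (device names, mount point and
options are illustrative guesses, not taken from Tim's mail):

    # create a 15-drive software RAID6 array (illustrative devices)
    mdadm --create /dev/md0 --level=6 --raid-devices=15 /dev/sd[b-p]
    # format with JFS and mount it
    mkfs.jfs -q /dev/md0
    mount -t jfs /dev/md0 /mnt/data
    # after an unclean shutdown, replay the journal and force a full
    # check; this is the pass that reportedly fails with rc=-231
    umount /mnt/data
    fsck.jfs -f /dev/md0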
