On 10/11/2011 12:40 PM, Justin Rickert wrote:
> I have been having problems with a JFS file system on a 12-disk RAID 5
> array composed of the following drives

...

> Is there a way to fix this file system? It is approximately 20 TB and
> contains important data. What should I do other than repeatedly bang my
> head against a brick wall? Is there a way of preventing this from
> happening again? Yes, I was using a UPS, but it currently only has 4
> hours of battery life. I don't have a backup solution, as I can't
> afford one right now, which is why I have been banging my head against
> a wall repeatedly. From what I have read online I am pretty screwed at
> this point, but I just wanted to know if there is any way to get at
> least some of the data back. Should I use hardware RAID instead, or a
> different type of software RAID? Any suggestions, including what type
> of guillotine to use, are appreciated.

I'm afraid it looks bad. You might be able to mount the volume read-only
and recover some of the data, but I'm not sure what you'll find:

    mount -o ro /dev/md0 /data
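
If that mount succeeds, I'd copy whatever is readable onto a separate
disk before trying anything else. A rough sketch, assuming the array is
/dev/md0 as above; the mount point and destination paths are just
placeholders for your setup:

    # Mount read-only so nothing else gets written to the damaged fs
    mkdir -p /mnt/rescue
    mount -o ro /dev/md0 /mnt/rescue

    # Pull off what's readable; rsync reports I/O errors on unreadable
    # files but keeps copying the rest
    rsync -a /mnt/rescue/ /path/to/another/disk/rescue/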

All those duplicate references and cross-linked blocks indicate that a
lot of inodes were likely overwritten by other data. I suspect the block
map got corrupted and the file system allocated previously-used blocks
for new data and metadata. That's not the kind of damage that can be
fixed.
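
If you want to gauge how widespread the damage is before deciding
anything, a report-only check shouldn't make it worse. fsck.jfs has a
-n flag that opens the file system read-only and reports problems
without changing anything (double-check your jfsutils man page; the
device name is assumed from your setup):

    # Unmount first, then run a report-only check
    umount /dev/md0
    fsck.jfs -n -v /dev/md0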

I'm really not a RAID expert, so any other suggestions from the list
would be appreciated.

Shaggy
