Hello,
I am not sure whether you received my email from last week with the
results of the different combinations you prescribed (it contained HTML code).
Anyway, I did a read-only (ro) mount to check the partition and was happy to
see a lot of files intact. A few seemed destroyed, but I am not sure. I tried
a
On Thu, Dec 06, 2007 at 07:39:28PM +0300, Michael Tokarev wrote:
> What to do is to give repairfs a try for each permutation,
> but again without letting it actually fix anything.
> Just run it in read-only mode and see which combination
> of drives gives fewer errors, or no fatal errors (there
>
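[A rough sketch of what that read-only comparison could look like, assuming
the filesystem really is XFS and one candidate drive order has already been
assembled as /dev/md0; the log file name is only an illustration:

  xfs_repair -n /dev/md0 > /tmp/order1.log 2>&1   # -n runs in no-modify mode, report only
  wc -l /tmp/order1.log                           # compare the amount of noise per ordering

Repeat for each permutation and keep the ordering that produces the fewest,
or no fatal, complaints.]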
Michael Tokarev wrote:
> It's sad that xfs refuses to mount when "structure needs
> cleaning" - the best way here is to actually mount it
> and see what it looks like, instead of trying repair
> tools. Is there some option to force-mount it anyway
> (in read-only mode, knowing it may oops the kernel, etc.)?
[Cc'd to xfs list as it contains something related]
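[One thing worth trying here, though it is not guaranteed to get past a
"Structure needs cleaning" error, since that usually means real corruption
was detected: XFS has a norecovery mount option that skips log replay on a
read-only mount, e.g.

  mkdir -p /mnt/probe
  mount -t xfs -o ro,norecovery /dev/md0 /mnt/probe   # inspect only, then umount

The mount point and device name are just placeholders for this thread's setup.]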
Dragos wrote:
> Thank you.
> I want to make sure I understand.
[Some background for the XFS list. The talk is about a broken Linux software
raid (the reason for the breakage isn't relevant anymore). The OP seems to
have lost the order of drives in his array
Thank you.
I want to make sure I understand.
1- Does it matter which permutation of drives I use for xfs_repair (as
long as it tells me "Structure needs cleaning")? When it comes to
Linux I consider myself at an intermediate level, but I am a beginner when
it comes to raid and filesystem i
Dragos wrote:
> Thank you for your very fast answers.
>
> First I tried 'fsck -n' on the existing array. The answer was that if I
> wanted to check an XFS partition I should use 'xfs_check'. That seems to
> say that my array was formatted with xfs, not reiserfs. Am I correct?
>
> Then I tried th
Thank you for your very fast answers.
First I tried 'fsck -n' on the existing array. The answer was that if I
wanted to check an XFS partition I should use 'xfs_check'. That seems to
say that my array was formatted with xfs, not reiserfs. Am I correct?
Then I tried the different permutations
I forgot one thing.
After re-creating the array, which is what deleted my data in the first
place, 'mount' was giving me this answer:
mount: Structure needs cleaning
Thank you for your time,
Dragos
Bryce wrote:
[]
> mdadm -C -l5 -n5 -c128 /dev/md0 /dev/sdf1 /dev/sde1 /dev/sdg1 /dev/sdc1
> /dev/sdd1
...
> If you don't have the configuration printout, then you're left with
> exhaustive brute-force searching of the combinations
You're missing a very important point -- the --assume-clean option.
F
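[For the archives, a sketch of how a single permutation can be re-created
without touching the data, using the geometry from Bryce's example above;
the device order shown is only one of the candidates, not a recommendation:

  mdadm --stop /dev/md0
  mdadm --create /dev/md0 --level=5 --raid-devices=5 --chunk=128 \
        --assume-clean /dev/sdf1 /dev/sde1 /dev/sdg1 /dev/sdc1 /dev/sdd1
  xfs_repair -n /dev/md0    # read-only check of this ordering

--assume-clean tells md the array is already in sync, so no resync runs and
parity is never rewritten while the orderings are being tested.]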
Dragos wrote:
Hello,
I had created a RAID 5 array on three 232 GB SATA drives. I had created one
partition (for /home) formatted with either xfs or reiserfs (I do not
recall).
Last week I reinstalled my box from scratch with Ubuntu 7.10, with
mdadm v. 2.6.2-1ubuntu2.
Then I made a rookie mistake: I
Neil Brown wrote:
> On Thursday November 29, [EMAIL PROTECTED] wrote:
>> 2. Do you know of any way to recover from this mistake? Or at least what
>> filesystem it was formatted with.
It may not have been lost - yet.
> If you created the same array with the same devices and layout etc,
> the data
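[If it helps: a sketch of how to record what the current superblocks say
before experimenting any further; the device names are the ones used
elsewhere in this thread:

  mdadm --detail /dev/md0      # chunk size, layout, and device order as assembled
  mdadm --examine /dev/sdc1    # per-member superblock; repeat for each member

Knowing those values makes it much easier to re-create the array with the
same devices and layout, as Neil suggests.]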
On Thursday November 29, [EMAIL PROTECTED] wrote:
> Hello,
> I had created a RAID 5 array on three 232 GB SATA drives. I had created one
> partition (for /home) formatted with either xfs or reiserfs (I do not
> recall).
> Last week I reinstalled my box from scratch with Ubuntu 7.10, with mdadm
> v. 2