Re: modifying degraded raid 1 then re-adding other members is bad

2006-08-09 Thread Jan Engelhardt
 Why are we updating it BACKWARD in the first place?

To avoid writing to spares when it isn't needed - some people want
their spare drives to go to sleep.

That sounds a little dangerous. What if it decrements below 0?


Jan Engelhardt
-- 
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: modifying degraded raid 1 then re-adding other members is bad

2006-08-09 Thread Helge Hafting

Michael Tokarev wrote:

Why are we updating it BACKWARD in the first place?
  

Don't know this one...

Also, why, when we add something to the array, is the event counter
checked -- shouldn't it resync regardless?

If you remove a drive and then add it back with
no changes in the meantime, then you don't want
a resync to happen.  Some people reboot their machine
every day (too much noise, heat, or electricity use at night);
a daily resync would be excessive.

And which drive would you consider
the master copy anyway, if the event counts match?
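
For illustration, this is the comparison md effectively makes when a
drive is re-added.  A minimal plain-shell sketch (the Events values are
hypothetical stand-ins for real `mdadm --examine` output):

```shell
# Hypothetical, abbreviated superblock fields for two raid1 members;
# in practice each value would come from `mdadm --examine /dev/hdXn`.
examine_a="Events : 42"
examine_b="Events : 42"

ev_a=${examine_a##* }   # keep only the number after the last space
ev_b=${examine_b##* }

if [ "$ev_a" = "$ev_b" ]; then
    echo "event counts match: re-add without a full resync"
else
    echo "event counts differ: full resync needed"
fi
# prints: event counts match: re-add without a full resync
```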

Helge Hafting



resize new raid problem

2006-08-09 Thread Serge Torop

Hello, all.

I need to install software RAID1 on a working Red Hat EL4 system.


I used rescue mode for creating RAID arrays.

What I did:

1. installed my own initrd for normal booting
2. changed the disk partition type to Linux raid autodetect
on /dev/hda and /dev/hdc

3. created the RAID1 using mkraid --really-force /dev/md2

I need to resize the new /dev/md2 after creating it; for this I run:

resize2fs /dev/md2 and get an error message:

resize2fs 1.39
/resize2fs: relocation error: - /resize2fs: undefined symbol: ext2fs_open2

Can I resolve this problem (a resize2fs bug?)?
(Maybe by using mdadm?)


PS: I found this bug in RHEL4; on RHEL3 and Red Hat 9 everything was OK.
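
For reference, the mdadm-era equivalent of the mkraid step would look
roughly like this.  It is a sketch only -- the partition numbers are
assumptions (the post names only /dev/hda and /dev/hdc), and it can only
be run on the actual machine:

```shell
# Sketch: create the mirror with mdadm instead of raidtools' mkraid.
# /dev/hda2 and /dev/hdc2 are hypothetical member partitions.
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/hda2 /dev/hdc2

# Then grow the filesystem to fill the array.  If the EL4 resize2fs
# binary is broken (the relocation error above suggests a library
# mismatch), a resize2fs from a matching e2fsprogs build may work.
resize2fs /dev/md2
```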
-- 

Serge P. Torop
St.Petersburg, Russia



Re: resize new raid problem

2006-08-09 Thread Luca Berra

On Wed, Aug 09, 2006 at 01:55:56PM +0400, Serge Torop wrote:

I need to install softw. RAID1 to working RedHat EL4.

I used rescue mode for creating RAID arrays.

...

resize2fs /dev/md2 and see a error messge:

resize2fs 1.39
/resize2fs: relocation error: - /resize2fs: undefined symbol: ext2fs_open2

Can I resolve this problem (resize2fs bug?)?
(may be using mdadm?)


Since you bought a commercial product from Red Hat, you might be better
off opening a support call with them --
if the resize2fs binary you are using comes from EL4, that is.

L.

--
Luca Berra -- [EMAIL PROTECTED]
   Communication Media  Services S.r.l.
/\
\ / ASCII RIBBON CAMPAIGN
 X   AGAINST HTML MAIL
/ \


Re: Resize on dirty array?

2006-08-09 Thread James Peverill


I'll try the force assemble but it sounds like I'm screwed.  It sounds 
like what happened was that two of my drives developed bad sectors in 
different places that weren't found until I accessed certain areas (in 
the case of the first failure) and did the drive rebuild (for the second 
failure).  In the future, is there a way to help prevent this?  Given 
that the bad sectors were likely on different parts of their respective 
drives, I should still have a complete copy of all the data, right?  Is 
it possible to recover from a partial two-disk failure using all the disks?
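
The forced assembly Neil suggests would look something like the
following sketch.  The device names are hypothetical placeholders for
the real member partitions, and it can only run on the affected machine:

```shell
# Sketch: force md to assemble the array from the surviving members,
# ignoring the recorded failure state, so data can be copied off.
mdadm --assemble --force --run /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1

# Mount read-only to avoid further writes while backing up.
mount -o ro /dev/md0 /mnt
```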


It looks like I might as well cut my losses and buy new disks (I suspect 
the last two drives are near death, given what's happened to their 
brethren).  If I go SATA, am I better off getting two dual-port cards or 
one four-port card?


Thanks again.

James


Neil Brown wrote:

On Tuesday August 8, [EMAIL PROTECTED] wrote:
  
The resize went fine, but after re-adding the drive back into the array 
I got another fail event (on another drive) about 23% through the 
rebuild :(


Did I have to remove the bad drive before re-adding it with mdadm?  I 
think my array might be toast...





You wouldn't be able to re-add the drive without removing it first.
But why did you re-add the failed drive?  Why not add the new one? Or
maybe you did...

2 drives failed - yes - that sounds a bit like toast.
You can possibly do a --force assemble without the new drive and try
to backup the data somewhere - if you have somewhere large enough.

NeilBrown


  

Any tips on where I should go now?

Thanks for the help.

James





Re: Resize on dirty array?

2006-08-09 Thread Martin Schröder

2006/8/9, James Peverill [EMAIL PROTECTED]:

failure).  In the future, is there a way to help prevent this?


RAID is no excuse for backups.

smartd may warn you in advance.

Best
  Martin

PS: http://en.wikipedia.org/wiki/Top-posting#Top-posting


Re: Resize on dirty array?

2006-08-09 Thread Mark Hahn

failure).  In the future, is there a way to help prevent this?


sure; periodic scans (perhaps with smartctl) of your disks would help
prevent it.  I suspect that throttling the rebuild rate is also often a
good idea if there's any question about disk reliability.
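
both knobs exist as standard interfaces; a sketch (the limit values are
illustrative, and the commands need root on the affected machine):

```shell
# Throttle md rebuild bandwidth (KB/s per device) while disks are suspect.
echo 1000 > /proc/sys/dev/raid/speed_limit_min
echo 5000 > /proc/sys/dev/raid/speed_limit_max

# Schedule periodic surface scans, e.g. from cron, so bad sectors are
# found before a rebuild trips over them:
smartctl -t long /dev/hda
```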


RAID is no excuse for backups.


I wish people would quit saying this: not only is it not helpful,
but it's also wrong.  a traditional backup is nothing more than a 
strangely async raid1, with the same space inefficiency.  tape is 
not the answer, and is increasingly not.  the idea of a periodic snapshot
to media which is located apart and not under the same load as the 
primary copy is a good one, but not cheap or easy.  backups are also
often file-based, which is handy but orthogonal to being raid
(or incremental, for that matter).  and backups don't mean you can 
avoid the cold calculation of how much reliability you want to buy.

_that_ is how you should choose your storage architecture...

regards, mark hahn.


Re: Resize on dirty array?

2006-08-09 Thread Henrik Holst
James Peverill wrote:

 I'll try the force assemble but it sounds like I'm screwed.  It 
 sounds like what happened was that two of my drives developed bad 
 sectors in different places that weren't found until I accessed 
 certain areas (in the case of the first failure) and did the drive 
 rebuild (for the second failure).

The file /sys/block/mdX/md/sync_action can be used to issue a recheck of
the data. Read Documentation/md.txt in the kernel source for details about
the exact procedure. My advice (if you still want to continue using
software raid) is to run such a check before any add/grow or other
action in the future. Also, if the raid has been unused for a long while,
it might be a good idea to recheck the data.
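
Concretely, on a machine with an array at /dev/md0 (a placeholder name),
the check would be triggered like this, run as root:

```shell
# Ask md to read and compare all copies without rewriting anything:
echo check > /sys/block/md0/md/sync_action

# Watch progress, then see how many inconsistent sectors were found:
cat /proc/mdstat
cat /sys/block/md0/md/mismatch_cnt
```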

[snip]

I feel your pain. Massive data loss is the worst. I have had my share of
crashes. Once due to bad disk and no redundancy, the other time due to
good old stupidity.

Henrik Holst


Re: modifying degraded raid 1 then re-adding other members is bad

2006-08-09 Thread Neil Brown
On Wednesday August 9, [EMAIL PROTECTED] wrote:
  Why are we updating it BACKWARD in the first place?
 
 To avoid writing to spares when it isn't needed - some people want
 their spare drives to go to sleep.
 
 That sounds a little dangerous. What if it decrements below 0?

It cannot.
md decrements the event count only on a dirty-to-clean transition, and
only if it had previously incremented the count on a clean-to-dirty
transition.  So it can never go below what it was when the array was
assembled.
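
the invariant is easy to model; a toy shell-arithmetic sketch of the
rule above (plain shell, not the actual md code):

```shell
# Toy model: the count is only ever decremented to undo an increment
# made since assembly, so it can never fall below the assembly value.
events=10            # event count when the array was assembled
floor=$events        # md never drops below this value

events=$((events + 1))   # clean -> dirty: increment
events=$((events - 1))   # dirty -> clean: undo that increment

[ "$events" -ge "$floor" ] && echo "never below assembly value ($events)"
# prints: never below assembly value (10)
```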

NeilBrown


How does linux soft-raid resolve the inconsistency after a system crash?

2006-08-09 Thread liyiming

