Jeff Garzik wrote:
Richard Scobie wrote:
Jeff, on a slightly related note, is the driver status for the NVIDIA,
as reflected on your site, correct for the new nForce 590/570 AM2
chipset?
Unfortunately I rarely have an idea about how marketing names correlate
to chipsets.
Do you have a PC
hello,
after reboot, md only binds to one mirror (/dev/hdb1).
raid1: raid set md0 active with 1 out of 2 mirrors
After adding /dev/hda1 manually 'mdadm --add /dev/md0 /dev/hda1', the
raid seems to work well:
isp:/var/log# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hda1[0] hdb1[
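For anyone hitting the same degraded-after-reboot symptom, a minimal
recovery sketch from userspace (assuming the same device names as
above; the mdadm.conf path varies by distribution):

  # Compare superblock event counts to see why hda1 was left out
  mdadm --examine /dev/hda1 | grep -i events
  mdadm --examine /dev/hdb1 | grep -i events

  # Re-add the missing half and watch it resync
  mdadm /dev/md0 --add /dev/hda1
  cat /proc/mdstat

  # Record the array so assembly is deterministic across reboots
  # (/etc/mdadm/mdadm.conf on Debian, /etc/mdadm.conf elsewhere)
  mdadm --detail --scan >> /etc/mdadm/mdadm.conf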
Mickael Marchand wrote:
So basically I don't really know what to do with my sdf3 at the moment,
and I'm afraid to reboot again :o)
Maybe a --re-add /dev/sdf3 would work here? But will it survive a
reboot?
At this point, for whatever reason, your kernel doesn't see /dev/sdf3 as
part of the array.
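Before re-adding, it is worth checking whether the member's superblock
still matches the array; a small sketch (same device names as in the
report):

  # See what the kernel thinks versus what the superblock says
  mdadm --detail /dev/md2
  mdadm --examine /dev/sdf3 | egrep -i 'events|state'

  # --re-add is cheap if the superblock still matches;
  # otherwise fall back to a plain --add (full resync)
  mdadm /dev/md2 --re-add /dev/sdf3 || mdadm /dev/md2 --add /dev/sdf3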
On Thu, 2006-08-24 at 07:31 -0700, Marc Perkel wrote:
> So, the bottom-line answer to my question is that unless you are
> running raid 5 and you have a high-powered raid card with cache and
> battery backup, there is no significant speed increase from using
> hardware raid. For raid 0 ther
Adam Kropelin wrote:
On Thu, Aug 24, 2006 at 02:20:50PM +0100, Alan Cox wrote:
Generally speaking the channels on onboard ATA are independent with any
vaguely modern card.
Ahh, I did not know that. Does this apply to master/slave connections on
the same PATA cable as well?
No, it doesn't.
On Thu, 24 Aug 2006, Adam Kropelin wrote:
> > Generally speaking the channels on onboard ATA are independent with any
> > vaguely modern card.
>
> Ahh, I did not know that. Does this apply to master/slave connections on
> the same PATA cable as well? I know zero about PATA, but I assumed from
> th
On Thu, 2006-08-24 at 09:07 -0400, Adam Kropelin wrote:
> Jeff Garzik <[EMAIL PROTECTED]> wrote:
> with sw RAID of course if the builder is careful to use multiple PCI
> cards, etc. Sw RAID over your motherboard's onboard controllers leaves
> you vulnerable.
Generally speaking the channels
Jeff Garzik <[EMAIL PROTECTED]> wrote:
> But anyway, to help answer the question of hardware vs. software RAID, I
> wrote up a page:
>
> http://linux.yyz.us/why-software-raid.html
>
> Generally, you want software RAID unless your PCI bus (or more rarely,
> your CPU) is getting saturated.
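A quick way to test the saturation claim on a given box, sketched with
standard tools (device names and mount point are examples):

  # Read two member disks alone, then together; if the combined
  # rate is well below the sum, the shared bus is the bottleneck
  hdparm -t /dev/sda
  hdparm -t /dev/sdb
  hdparm -t /dev/sda & hdparm -t /dev/sdb & wait

  # Sample CPU usage while software RAID absorbs a large write
  vmstat 1 5 &
  dd if=/dev/zero of=/mnt/raid/testfile bs=1M count=2048
  wait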
Hi,
I am having a little fun with a raid6 array these days.
kernel : 2.6.17.10-if.1 #1 SMP Wed Aug 23 11:25:03 CEST 2006 i686
GNU/Linux
Debian sarge and backported mdadm 2.4.1
Here is the initial shape of the array:
/dev/md2 /dev/sda2,/dev/sdb2(F),/dev/sdc3,/dev/sdd3,/dev/sdf3
so sdf3 is a new
2006/8/23, Richard Scobie <[EMAIL PROTECTED]>:
Has anyone had any experience or comment regarding linux RAID over ieee1394?
I've been successfully running a 4x250Gb Raid5 over ieee1394 with XFS on top.
The 4 drives are sharing the same ieee1394 bus, so the bandwidth is
awful, because they have
Richard Scobie wrote:
Jeff Garzik wrote:
Mark Perkel wrote:
Running Linux on an AMD AM2 nVidia chipset that supports Raid 0
striping on the motherboard. Just wondering if hardware raid (SATA2) is
going to be faster than software raid and why?
Jeff, on a slightly related note, is the driv
A recent patch broke the ability to do a
user-requested check of a raid1.
This patch fixes the breakage and also moves a comment that
was dislocated by the same patch.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./drivers/md/raid1.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
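For reference, the user-requested check that the patch restores is
driven through sysfs; a minimal sketch of the interface (array name is
an example):

  # Kick off a read-only consistency check of the mirror
  echo check > /sys/block/md0/md/sync_action

  # Progress appears in /proc/mdstat; inconsistencies found
  # by the check are counted in mismatch_cnt
  cat /proc/mdstat
  cat /sys/block/md0/md/mismatch_cnt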
From: Paul Clements <[EMAIL PROTECTED]>
This patch (tested against 2.6.18-rc1-mm1) adds a new sysfs interface
that allows the bitmap of an array to be dirtied. The interface is
write-only, and is used as follows:
echo "1000" > /sys/block/md2/md/bitmap
(dirty the bit for chunk 1000 [offset 0]
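As a usage sketch of the proposed interface (chunk numbers are
examples; the path follows the patch description, so treat it as
illustrative):

  # Dirty a couple of chunks so they get resynced on the
  # next recovery (the file is write-only per the patch)
  echo 1000 > /sys/block/md2/md/bitmap
  echo 1001 > /sys/block/md2/md/bitmap

  # The bitmap line for the array in /proc/mdstat reflects
  # the number of dirty chunks
  grep -A 2 '^md2' /proc/mdstat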
From: Coywolf Qi Hunt <[EMAIL PROTECTED]>
Signed-off-by: Coywolf Qi Hunt <[EMAIL PROTECTED]>
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./drivers/md/raid5.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff .prev/drivers/md/raid5.c ./drivers/md/raid5.c
We need to be careful when referencing mirrors[i].rdev,
as it can disappear under us at various times.
So:
fix a couple of problem places,
comment a couple of non-problem places, and
move an 'atomic_add' which dereferences rdev down a little
way to somewhere where it is sure not to be NULL.
Following are 4 patches against 2.6.18-rc4-mm2
The first 2 are bug fixes which should go in 2.6.18, and apply
equally well to that tree as to -mm.
The latter two should stay in -mm until after 2.6.18.
The second patch is maybe bigger than it absolutely needs to be as a bugfix.
If you like, I can
If we
- shut down a clean array,
- restart with one (or more) drive(s) missing
- make some changes
- pause, so that the array gets marked 'clean',
the event count on the superblock of included drives
will be the same as that of the removed drives.
So adding the removed drive back in will
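The collision is easy to see from userspace; a sketch with hypothetical
device names:

  # Both members report the same event count even though one
  # of them missed the writes, so md cannot tell them apart
  mdadm --examine /dev/sda1 | grep -i events
  mdadm --examine /dev/sdb1 | grep -i events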
From: NeilBrown <[EMAIL PROTECTED]>
Instead of magic numbers (0,1,2,3) in sb_dirty, we now have
well-defined bit flags:
MD_CHANGE_DEVS
Some device state has changed requiring superblock update
on all devices.
MD_CHANGE_CLEAN
The array has transitioned from 'clean' to 'dirty' or back,
requiring
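The clean/dirty state that MD_CHANGE_CLEAN tracks is observable from
userspace; a small sketch (array name is an example):

  # 'State' flips between clean and dirty/active as the
  # array quiesces or takes writes
  watch -n 1 'mdadm --detail /dev/md0 | grep -i state'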
Hi Andrew,
There is a bug in 2.6.18-rc which needs to be fixed before -final,
but a patch in -mm which is not scheduled for 2.6.18 touches the
same code, so that patch has to change.
So:
wind back to
md-replace-magic-numbers-in-sb_dirty-with-well-defined-bit-flags.patch
dis