The number of devices in an md array can either be given explicitly
(raid_disks > 0) or must be read from the devices themselves (raid_disks == 0).
However for an array without persistent metadata (or with externally
managed metadata) this is the wrong thing to do. So we add a test in
do_md_run to give an error if raid_disks is zero for non-persistent
arrays.
This requires that md
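For illustration, this is the user-visible side of that rule: with --build
(which assembles an array without persistent superblocks) mdadm cannot read
the member count from the devices, so it must be given explicitly. Device
names below are placeholders.
# Build an array with no persistent metadata; --raid-devices is required
mdadm --build /dev/md0 --level=raid0 --raid-devices=2 /dev/loop0 /dev/loop1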
Daniel L. Miller said: (by the date of Thu, 25 Oct 2007 16:32:31 -0700)
> Thanks for the test responses - I have re-subscribed...if I see this
> myself...I'm back!
I know that gmail doesn't let you see your own posts on mailing
lists, only posts from other people. Maybe yo
Success.
On Thu, 25 Oct 2007, Daniel L. Miller wrote:
Sorry for consuming bandwidth - but all of a sudden I'm not seeing messages.
Is this going through?
--
Daniel
Success 2.
On Thu, 25 Oct 2007, Daniel L. Miller wrote:
Thanks for the test responses - I have re-subscribed...if I see this
myself...I'm back!
--
Daniel
* So resync activity will cause curr_events to be smaller than
* when there was no such activity.
* non-sync IO will cause disk_stat to increase without
* increasing sync_io so curr_events will (eventually)
* be larger than it was before. Once it becomes
* substantially larger, the test below will cause
* the array to appear non-idle, and resync will slow
* down.
On May 10 2007 16:22, NeilBrown wrote:
>
>diff .prev/drivers/md/md.c ./drivers/md/md.c
>--- .prev/drivers/md/md.c 2007-05-10 15:51:54.000000000 +1000
>+++ ./drivers/md/md.c 2007-05-10 16:05:10.000000000 +1000
>@@ -5095,7 +5095,7 @@ static int is_mddev_idle(mddev_t *mddev)
>*
On Thu, 10 May 2007 16:22:31 +1000 NeilBrown <[EMAIL PROTECTED]> wrote:
> The test currently looks for any (non-fuzz) difference, either
> positive or negative. This clearly is not needed. Any non-sync
> activity will cause the total sectors to grow faster than the sync_io
> c
During a 'resync' or similar activity, md checks if the devices in the
array are otherwise active and winds back resync activity when they
are. This test is done in is_mddev_idle, and it is somewhat fragile -
it sometimes thinks there is non-sync io when there isn't.
The test com
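The throttling itself is easy to watch from userspace. A rough illustration,
assuming a scratch array /dev/md0 with member disk /dev/sdb (both placeholders):
# start a resync-style pass
echo check > /sys/block/md0/md/sync_action
# watch the speed reported in /proc/mdstat
watch -n1 cat /proc/mdstat
# in another shell, generate non-sync I/O on a member disk; the
# resync speed should fall back toward the configured floor
dd if=/dev/sdb of=/dev/null bs=1M count=2048
# floor and ceiling for the throttling are tunable:
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max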
We need to check for internal-consistency of superblock in
load_super. validate_super is for inter-device consistency.
With the test in the wrong place, a badly created array will confuse md
rather than produce sensible errors.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat
Hi!
I am running tests on our new test device. The device has 2x2 core Xeon,
Intel 5000 chipset, two 3ware SATA RAID cards on PCIe, and 15 SATA2 disks,
running Debian etch. More info at the bottom.
The first phase of the test is probing various raid levels. So I
configured the cards to 15 JBOD
I'm trying to test the status of a raid device using mdadm:
# mdadm --misc --detail --test /dev/md0
However this does not appear to work as documented. As I read the man
page, the return code is supposed to reflect the status of the raid
device:
"
MISC MODE
...
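Assuming the exit codes the man page documents (0 = functioning normally,
1 = at least one failed device, 2 = multiple failures making the array
unusable, 4 = error getting information), a monitoring check could look
like this sketch:
mdadm --misc --detail --test /dev/md0 > /dev/null
case $? in
  0) echo "md0: OK" ;;
  1) echo "md0: degraded - at least one failed device" ;;
  2) echo "md0: dead - multiple failed devices" ;;
  *) echo "md0: error querying device" ;;
esac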
On Monday August 28, [EMAIL PROTECTED] wrote:
> Neil Brown wrote:
> > On Saturday August 26, [EMAIL PROTECTED] wrote:
> >> All,
> >>
> >> [...]
> >>
> >> * Problem 1: Since moving from 2.4 -> 2.6 kernel, a reboot kicks one
> >> device out of the array (c.f. post by Andreas Pelzner on 24th Aug 2006
James Brown wrote:
[...]
There is no mdadm/mdadm.conf! What should I do about this?
Having just read the post from Andreas Pelzner, perhaps I should create
a new initrd:
> Andreas Pelzner wrote:
you told me the right way. I had to add the lines "raid1" and "md_mod" to
/etc/mkinitrd/modules.
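For reference, a hedged sketch of that fix on a 2.6-era Debian system: the
modules are listed for inclusion and the initrd is rebuilt so they are
available at boot.
echo md_mod >> /etc/mkinitrd/modules
echo raid1  >> /etc/mkinitrd/modules
mkinitrd -o /boot/initrd.img-$(uname -r) $(uname -r)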
Neil Brown wrote:
On Saturday August 26, [EMAIL PROTECTED] wrote:
All,
[...]
* Problem 1: Since moving from 2.4 -> 2.6 kernel, a reboot kicks one
device out of the array (c.f. post by Andreas Pelzner on 24th Aug 2006).
* Problem 2: When booting my system, unless both disks plugged in, I get
All,
I'm fairly new to Linux/Debian and have been trying to configure mdadm
for RAID1 with 2x120Gb IDE disks. Unfortunately, I have two problems
with the configuration and would really appreciate some advice.
* Problem 1: Since moving from 2.4 -> 2.6 kernel, a reboot kicks one
device out of
Christian Pernegger wrote:
The fact that the disk had changed minor numbers after it was plugged
back in bugs me a bit. (was sdc before, sde after). Additionally udev
removed the sdc device file, so I had to manually recreate it to be
able to remove the 'faulty' disk from its md array.
That's b
I finally got around to testing 2.6.17.4 with libata-tj-stable-20060710.
Hardware: ICH7R in ahci mode + WD5000YS's.
EH: much, much better. Before the patch it seemed like errors were
only printed to dmesg but never handed up to any layer above. Now md
actually fails the disk when I pull the (pow
We should be able to write 'repair' to /sys/block/mdX/md/sync_action,
however due to an inverted test, that always gives EINVAL.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./drivers/md/md.c |    2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff .
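With the fix applied, the following should start a repair pass instead of
returning EINVAL (md0 is a placeholder):
echo repair > /sys/block/md0/md/sync_action
grep -A1 md0 /proc/mdstat              # progress appears here
cat /sys/block/md0/md/mismatch_cnt     # mismatches found by the pass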
If perl offends you, sorry - I'm quicker at it than C by a long shot,
and I don't really care about speed here, just speed of development.
Here's the shell script I'm using as a test harness. It creates a
loopback raid5 system, fills it up with random data, and then takes the
md5s
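A rough sketch of such a harness, with sizes and paths as placeholders
(scratch machine only):
# create three loopback devices and a raid5 over them
for i in 0 1 2; do
  dd if=/dev/zero of=/tmp/d$i.img bs=1M count=128
  losetup /dev/loop$i /tmp/d$i.img
done
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
  /dev/loop0 /dev/loop1 /dev/loop2
# fill it with random data and record checksums
mkfs -t ext3 /dev/md0
mkdir -p /mnt/md-test && mount /dev/md0 /mnt/md-test
dd if=/dev/urandom of=/mnt/md-test/data bs=1M count=64
md5sum /mnt/md-test/data > /tmp/before.md5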
sorry for the spam, not sure if I'm subscribed or not
*ducks..*
# Revision 1.2 1999/02/10 10:58:04 rich
# Use cp instead of tar to copy.
#
# Revision 1.1 1999/02/09 15:13:38 rich
# Added first version of stress test program.
#
# Stress-test a file system by doing multiple
# parallel disk operations. This does everything
# in MOUNTPOINT/stress.
nconcurrent=4
conte
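A minimal sketch of the same idea - the worker body below is an assumption,
not the original script:
MOUNTPOINT=/mnt
nconcurrent=4
for i in $(seq 1 $nconcurrent); do
  (
    d=$MOUNTPOINT/stress/worker$i
    while :; do                 # loop forever; interrupt to stop
      mkdir -p $d
      cp -r /etc $d/copy 2>/dev/null
      rm -rf $d
    done
  ) &
done
wait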
hi ya
best way to test raid5 is to write large ( 1Gb-2Gb ) data files to it...
and then compare the files
-- oooppss... just re-read david's post - skip the part about
powering down the disks..etc...
then pull one of the disks offline
and see if it still compares...
ins
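In script form, continuing the loopback setup sketched earlier (names are
placeholders, and /tmp/before.md5 holds the checksums recorded there), that
test might look like:
mdadm --manage /dev/md0 --fail /dev/loop2 --remove /dev/loop2
md5sum -c /tmp/before.md5    # data should still verify in degraded mode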
I've recently setup a new RAID-5 configuration and wanted to test it
thoroughly before I commit data to it. I'm not so worried about drive
failures so I don't want to power down drives while the system is running,
but I do want to test the drives out by reading/writing/verifying
Scott Sherman
Systems Administrator
design net
Tel: +44(0)870 240 0088
Fax: +44(0)870 240 0099
Email: [EMAIL PROTECTED]
activity test
test