On Tuesday March 1, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
Could you please confirm if there is a problem with
2.6.11-rc4-bk4-bk10
as reported, and whether it seems to be the same problem.
Ok.. are we all ready? I had applied your development patches to all my
vanilla
Hi all
I have a RAID 5 array consisting of 8 300GB Maxtor SATA drives
(6B300S0), hooked up to an Asus A8N-SLI Deluxe motherboard with 4 NForce4
SATA ports and 4 SiI 3114 ports.
see [3] for a description of what I did and more details.
each single disk in the array gives a read performance
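For reference, raw single-disk read throughput can be measured straight off the device; the device name below is only an example:

  hdparm -tT /dev/sda
  time dd if=/dev/sda of=/dev/null bs=1M count=256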
Nicola Fankhauser wrote:
see [3] for a description of what I did and more details.
Hi Nicola,
I read your description with interest.
I thought I'd try some speed tests myself but dd doesn't seem to work
the same for me (on FC3). Here's what I get:
[EMAIL PROTECTED] test]# dd if=/dev/zero
At 19.12 01/03/2005, Robin Bowes wrote:
Roberto Fichera wrote:
At 18.53 01/03/2005, Robin Bowes wrote:
[EMAIL PROTECTED] test]# dd if=/dev/zero of=/home/test/test.tmp bs=4096
count=10
10+0 records in
10+0 records out
Notice there is no timing information.
you have to use:
time dd
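For example, re-using the same file but with a larger, arbitrary size:

  time dd if=/dev/zero of=/home/test/test.tmp bs=1M count=256

The real time reported by time, divided into the 256 MB written, gives the write throughput.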
*** Announcement: dmraid 1.0.0.rc6 ***
dmraid 1.0.0.rc6 is available at
http://people.redhat.com:/heinzm/sw/dmraid/ in source tarball,
source rpm and i386 rpms (shared, static and dietlibc).
This release introduces support for VIA Software RAID.
dmraid (Device-Mapper Raid tool)
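Typical usage, sketched from memory rather than from the announcement itself:

  dmraid -r    (list the RAID devices discovered on disk)
  dmraid -s    (show the RAID sets and their state)
  dmraid -ay   (activate all discovered sets through device-mapper)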
hi
the version of dd I used is 5.2.1 (debian testing), but does anybody
have an idea regarding my performance question?
regards
nicola
Hi, everyone.
I have a 7-drive dpt_i2o RAID 5 on which a drive going bad coincided with
the card going bad. The card (Adaptec mumblefrotz) is going elsewhere,
while the drives are now on an aic7xxx SCSI chain set as JBOD.
If it were possible to configure an md device to have the same
It is a 7 drive array. If you use 6 of 7 drives, md will not try to
re-sync. But I have no idea how to re-use the previous RAID data.
If you mean the previous configuration, I have it on
a writing pad and will gladly type it into a raidtab or
mdadm incantation.
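Purely as a sketch (the device names, chunk size and order below are placeholders, not the real layout of the array described above), re-creating the array from a written-down configuration with one slot left out, so that md assembles it degraded and does not start a re-sync, would look roughly like:

  mdadm --create /dev/md0 --level=5 --raid-devices=7 --chunk=64 \
        /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 missing

The drive order and chunk size have to match the original exactly, otherwise the data will not be readable.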
mdadm doesn't look
On Wednesday March 2, [EMAIL PROTECTED] wrote:
Is there any sound reason why this is not feasible? Is it just that
someone needs to write the code to implement it?
Exactly (just needs to be implemented).
NeilBrown
Robin Bowes wrote:
I envisage something like:
- md attempts read
- one disk/partition fails with a bad block
- md re-calculates correct data from other disks
- md writes correct data to bad disk
  - disk will re-locate the bad block
Probably not that simple, since sometimes multiple blocks will
I think the overhead related to fixing the bad blocks would be insignificant
compared to the overhead of degraded mode.
Guy
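For context, the manual equivalent of the last step above is to overwrite the bad sector so that the drive remaps it; the device name and sector number here are invented, and the write destroys whatever was stored there:

  dd if=/dev/zero of=/dev/sdc bs=512 count=1 seek=123456

What the proposed md behaviour adds is first re-calculating the correct contents from the remaining disks and parity, and writing that back instead of zeros.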
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Molle Bestefich
Sent: Tuesday, March 01, 2005 10:51 PM
To:
From: Nicola Fankhauser [EMAIL PROTECTED]
Date: Tue, Mar 01, 2005 at 08:54:25PM +0100
hi
the version of dd I used is 5.2.1 (debian testing), but does anybody
have an idea regarding my performance question?
- please group messages in the same thread, thank you.
- if there is no answer