raid1 does not seem faster

2007-04-01 Thread Jan Engelhardt
Hello list,


normally, I'd think that combining drives into a raid1 array would give 
me at least a little improvement in read speed. In my setup however, 
this does not seem to be the case.

14:16 opteron:/var/log # hdparm -t /dev/sda
 Timing buffered disk reads:  170 MB in  3.01 seconds =  56.52 MB/sec
14:17 opteron:/var/log # hdparm -t /dev/md3
 Timing buffered disk reads:  170 MB in  3.01 seconds =  56.45 MB/sec
(and dd_rescue shows the same numbers)

The raid array was created using
# mdadm -C /dev/md3 -b internal -e 1.0 -l 1 -n 2 /dev/sd[ab]3


Jan
-- 
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: raid1 does not seem faster

2007-04-01 Thread Al Boldi
Jan Engelhardt wrote:
 normally, I'd think that combining drives into a raid1 array would give
 me at least a little improvement in read speed. In my setup however,
 this does not seem to be the case.

 14:16 opteron:/var/log # hdparm -t /dev/sda
  Timing buffered disk reads:  170 MB in  3.01 seconds =  56.52 MB/sec
 14:17 opteron:/var/log # hdparm -t /dev/md3
  Timing buffered disk reads:  170 MB in  3.01 seconds =  56.45 MB/sec
 (and dd_rescue shows the same numbers)

The problem is that raid1 doesn't do striped reads; it balances reads 
across the mirrors per process.  Try your test with parallel reads; it 
should be faster.

You could use raid10, but then you lose single-disk-image compatibility.
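One way to see the read-balancing effect is to run two large sequential reads at the same time and compare against a single reader. A rough sketch (device name and sizes are examples from the thread; needs root, and results depend on readahead settings):

```shell
# Baseline: one sequential reader on the array
dd if=/dev/md3 of=/dev/null bs=1M count=1024

# Two concurrent readers, started at different offsets so md can
# serve each stream from a different mirror half
dd if=/dev/md3 of=/dev/null bs=1M count=1024 &
dd if=/dev/md3 of=/dev/null bs=1M count=1024 skip=2048 &
wait
```

With two processes the aggregate throughput should approach twice the single-disk figure, even though each individual stream still runs at single-disk speed.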


Thanks!

--
Al


Re: raid1 does not seem faster

2007-04-01 Thread Henrik Holst
On Sun, 2007-04-01 at 14:19 +0200, Jan Engelhardt wrote:
 Hello list,
 
 
 normally, I'd think that combining drives into a raid1 array would give 
 me at least a little improvement in read speed. In my setup however, 
 this does not seem to be the case.
 
 14:16 opteron:/var/log # hdparm -t /dev/sda
  Timing buffered disk reads:  170 MB in  3.01 seconds =  56.52 MB/sec
 14:17 opteron:/var/log # hdparm -t /dev/md3
  Timing buffered disk reads:  170 MB in  3.01 seconds =  56.45 MB/sec
 (and dd_rescue shows the same numbers)
 
 The raid array was created using
 # mdadm -C /dev/md3 -b internal -e 1.0 -l 1 -n 2 /dev/sd[ab]3
 
 
 Jan

From section 9.5 in [FAQ]

To check out speed and performance of your RAID systems, do NOT use
hdparm. It won't do real benchmarking of the arrays. [snip]

I'd recommend bonnie++; I've seen accepted benchmarks posted here
using it.

[FAQ] http://tldp.org/HOWTO/html_single/Software-RAID-HOWTO/#s9
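For reference, a typical bonnie++ invocation looks something like this (mount point and user are examples; the -s size should be at least twice RAM so the page cache doesn't distort the numbers):

```shell
# Run against a filesystem mounted on the array; -u drops privileges
# since bonnie++ refuses to run as root by default
bonnie++ -d /mnt/md3test -s 4g -n 0 -u nobody
```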

-- 
Henrik Holst [EMAIL PROTECTED]


raidtools to mdadm

2007-04-01 Thread Casey Boone
ok so i am trying to recover some data for a friend.  what i want to 
do is forcibly set up /dev/mdN as a raid0 of /dev/sda and /dev/sdb


i do not want to actually change any of the contents of these drives, 
just mount them very simply as a raid0.  the raid was originally created 
using the onboard nvidia raid on the motherboard these drives used to be 
hooked to.  my friend thought he could shove them into another windows 
box (that is what he was running on them) and have windows recover the 
raid.  all this did was totally destroy the superblock on one of the two 
drives.  dmraid now won't see them as a matched pair, so that is out.  the 
actual data areas of both drives seem to be intact, but unless i can 
get them into raid0 i don't know how i can recover the data.  it figures 
he gives me the drives after he makes it a notch or two more of a pain :\



now before the advent of mdadm i would use /etc/raidtab and have no 
issues setting up the raid device.



as best i can tell i am using the correct commands for what i want but i 
pretty much get nothing but errors:


[EMAIL PROTECTED]:/media# mdadm --build /dev/md1 --chunk=128 --level=0 
--raid-devices=2 /dev/sda /dev/sdb

mdadm: error opening /dev/md1: No such device or address
[EMAIL PROTECTED]:/media# mdadm --build -n 2 -c 128 -l 0 /dev/md1 /dev/sda 
/dev/sdb

mdadm: error opening /dev/md1: No such device or address


when i run those commands i do get /dev/mdN entries created, but they do 
not point to a valid block device (as tested with fdisk -l and with 
dmraid -b)



for the life of me i don't understand why anyone would put important data 
on a raid0, but that is what happened in this case.



If i have to i will drop down to an older knoppix release to get 
raidtools back, as i have never had any issues recovering crap 
onboard raid arrays or windows software raid arrays under it.  i am 
sure it can be done with mdadm but for the life of me i cannot seem to 
figure out exactly how.


any help on this would be greatly appreciated

Casey


Re: raidtools to mdadm

2007-04-01 Thread Neil Brown
On Sunday April 1, [EMAIL PROTECTED] wrote:
 as best i can tell i am using the correct commands for what i want but i 
 pretty much get nothing but errors:
 
 [EMAIL PROTECTED]:/media# mdadm --build /dev/md1 --chunk=128 --level=0 
 --raid-devices=2 /dev/sda /dev/sdb
 mdadm: error opening /dev/md1: No such device or address

"No such device or address" probably means that the md module is not
loaded.
  modprobe md
or
  modprobe md_mod

and try again.
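Putting the two mails together, a possible full sequence for the read-only recovery would be something like the following (untested here; device names are from Casey's mail, and the 128 KiB chunk size is only a guess that must match what the nvidia BIOS raid actually used):

```shell
modprobe md_mod                      # make the md driver available

# --build creates a legacy (superblock-less) array, so nothing is
# written to the member drives
mdadm --build /dev/md1 --chunk=128 --level=0 --raid-devices=2 \
      /dev/sda /dev/sdb

mdadm --readonly /dev/md1            # mark the array read-only as well
mount -o ro /dev/md1 /mnt/recovery   # mount without touching the data
```

If the chunk size or device order is wrong, the filesystem simply won't be recognizable; stopping the array with `mdadm -S /dev/md1` and retrying other values is harmless since nothing is written.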

NeilBrown


raid10 kernel panic on sparc64

2007-04-01 Thread Jan Engelhardt
Hi,


just when I did
# mdadm -C /dev/md2 -b internal -e 1.0 -l 10 -n 4 /dev/sd[cdef]4
(created)
# mdadm -D /dev/md2
Killed

dmesg filled up with a kernel oops. A few seconds later, the box
locked solid. Since I was only in by ssh and there is not (yet) any
possibility to reset it remotely, this is all I can give right now,
the last 80x25 screen:

l4:  l5:  l6: 
l7: 0i0: f8007f218d18 i1: f8002e3d9608
i2: 0047f974 i3: 0i4: 
i5: 006e2800 i6: f80008c12a41 i7: 00526
I7: elv_next_request+0x94/0x188
Caller[005263e8]: elv_next_request+0x94/0x188
Caller[10086618]: scsi_request_fn+0x60/0x3f4 [scsi_mod]
Caller[00529b70]: __generic_unplug_device+0x34/0x3c
Caller[0052a7d4]: generic_unplug_device+0x14/0x2c
Caller[00526e48]: blk_backing_dev_unplug+0x20/0x28
Caller[004a464c]: block_sync_page+0x64/0x6c
Caller[0047f9d0]: sync_page+0x64/0x74
Caller[00677e48]: __wait_on_bit_lock+0x58/0x90
Caller[0047f86c]: __lock_page+0x54/0x5c
Caller[004802ec]: do_generic_mapping_read+0x204/0x49c
Caller[00480d68]: __generic_file_aio_read+0x120/0x18c
Caller[00481fdc]: generic_file_read+0x70/0x94
Caller[004a3920]: vfs_read+0xa0/0x14c
Caller[004a3c8c]: sys_read+0x34/0x60
Caller[00406c54]: linux_sparc_syscall32+0x3c/0x40
Caller[0003c6b4]: 0x3c6bc
Instruction DUMP: 921022bd  7c0e4ea2  90122098 91d02005 80a0a020  1848000c
80 [10281cdc] sync_request+0x898/0x8e4 [raid10]
 [005f6fb4] md_do_sync+0x454/0x89c
 [005f69ec] md_thread+0x100/0x11c

Kernel is kernel-smp-2.6.16-1.2128sp4.sparc64.rpm from Aurora Corona.
Perhaps it helps, otherwise hold your breath until I reproduce it.


Thanks,
Jan
-- 


Re: raid10 kernel panic on sparc64

2007-04-01 Thread David Miller
From: Jan Engelhardt [EMAIL PROTECTED]
Date: Mon, 2 Apr 2007 02:15:57 +0200 (MEST)

 just when I did
 # mdadm -C /dev/md2 -b internal -e 1.0 -l 10 -n 4 /dev/sd[cdef]4
 (created)
 # mdadm -D /dev/md2
 Killed
 
 dmesg filled up with a kernel oops. A few seconds later, the box
 locked solid. Since I was only in by ssh and there is not (yet) any
 possibility to reset it remotely, this is all I can give right now,
 the last 80x25 screen:

Unfortunately the beginning of the OOPS is the most important part;
it says exactly where the kernel died.  The portion you captured only
gives half the registers and the tail of the call trace.

Please try to capture the whole thing.

Please also provide hardware type information as well, which you
should give in any bug report like this.


Desperate plea for help

2007-04-01 Thread Nevyn

I've recently had a raid5 array fail on me - two drives suddenly
reported failures at the same time. Of course this means that the
metadata is stuffed. I've had a look at the HOWTO and found the
mdadm --assemble --force command, but it's not working in this case.
The two drives show up as spares.

A friend has suggested that perhaps I could try reconstructing the
array but suggested that I should ask the question on here first.

Is there some way I can reconstruct the array at least long enough to
back the data up?
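[Editorial note: no reply appears in this chunk. The approach usually suggested on this list when --assemble --force leaves ex-members as spares is to re-create the array with --assume-clean, which writes fresh superblocks but starts no resync. This sketch is generic, not from the thread; level, device names, count, and order are placeholders that must match the original exactly:]

```shell
# DANGER: only with the exact original level, chunk size, and device
# order. --assume-clean prevents a resync from overwriting data.
mdadm --create /dev/md0 --level=5 --raid-devices=3 --assume-clean \
      /dev/sda1 /dev/sdb1 /dev/sdc1

mount -o ro /dev/md0 /mnt/rescue     # verify the data before anything else
```

Getting any parameter wrong produces garbage, so if at all possible work from dd images of the drives rather than the originals.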