Hi all,
I just ran a Linux software RAID-1 benchmark with some 500GB SATA
drives in NCQ mode, along with a non-RAID control. Details are here
for those interested.
http://www.jab.org/raid-bench/
Comments are appreciated. I'm curious whether people are happy, sad, or
surprised by any of the results.
Hi Neil,
Are you suggesting I do this?
mdadm --create /dev/md0 --level=10 --raid-devices=2 \
    --parity=f2 /dev/sdc1 /dev/sdd1   # --parity is a synonym for --layout; f2 = "far" layout, 2 copies
I just tried it and it appears dog slow - for example
hdparm -t /dev/md0 claims 18MB/s, and I see a similar
number in /proc/mdstat for resync speed.
Does chunk size matter *at all* for RAID-1?
mdadm --create /dev/md0 --level=1 --raid-devices=2 --chunk=8 /dev/sdc1 /dev/sdd1
mdadm --create /dev/md0 --level=1 --raid-devices=2 --chunk=128 /dev/sdc1 /dev/sdd1
In my mental model of how RAID-1 works, the chunk size can't possibly
matter, whether I've got 1KB files or 1GB files.
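If anyone wants to check this empirically, here is a minimal sketch
(same device names as above; --assume-clean skips the initial resync so
it doesn't skew the numbers, and of course this destroys anything on
the partitions):

  for c in 8 128; do
    mdadm --create /dev/md0 --run --assume-clean --level=1 \
        --raid-devices=2 --chunk=$c /dev/sdc1 /dev/sdd1
    hdparm -t /dev/md0      # expect essentially identical throughput
    mdadm --stop /dev/md0
  done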
Hi all,
I have a two drive RAID1 serving data for a busy website. The
partition is 500GB and contains millions of 10KB files. For reference,
here's /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[0] sdd1[1]
488383936 blocks [2/2] [UU]
For backups, I set the md0 partition to
First of all, if the data is mostly static, rsync might work faster.
Any operation that stats the individual files - even to just look at
timestamps - takes about two weeks. Therefore it is hard for me to see
rsync as a viable solution, even though the data is mostly
static. About 400,000 files
On 10/24/05, Thomas Garner [EMAIL PROTECTED] wrote:
Should we also consider the utilization of the gigabit interface that
is carrying all of this backup traffic, as well as the speed of the
drive that is doing all of the writing during this transaction? Is the
18MB/s figure how fast the data can actually move end to end?
Norman What you should be able to do with software raid1 is the
Norman following: Stop the raid, mount both underlying devices
Norman instead of the raid device, but of course READ ONLY. Both
Norman contain the complete data and filesystem, and in addition to
Norman that the md superblock at the end of the device.
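A minimal sketch of what Norman describes (device and mount-point names
are assumptions):

  umount /data                    # stop using the array
  mdadm --stop /dev/md0
  mount -o ro /dev/sdc1 /mnt/a    # each half carries the complete filesystem;
  mount -o ro /dev/sdd1 /mnt/b    # the 0.90 superblock sits past it, at the end
  # ... back up from one half while serving reads from the other ...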
Thanks to good advice from many people, here are my findings and
conclusions.
(1) Splitting the RAID works. I have now implemented this technique on
the production system and am making a backup right now.
(2) NBD is cool, works well on Debian, and is very convenient. A
couple experiments
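For the record, the split-and-backup cycle from (1) boils down to
something like this (a sketch; device names are assumptions):

  mdadm /dev/md0 --fail /dev/sdd1 --remove /dev/sdd1
  mount -o ro /dev/sdd1 /mnt/backup   # detached half is a complete filesystem
  # ... copy the data off ...
  umount /mnt/backup
  mdadm /dev/md0 --add /dev/sdd1      # re-adding triggers a full resync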
Hi all,
Debian is a little slow tracking mdadm, and currently ships version
1.9 in unstable. Of course, I want to try out the fancy new features
in mdadm 2.1 to match my shiny new 2.6.14 (Debian stock) Linux kernel.
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=337903
Is upgrading mdadm the right way to go?
I'd prefer to buy fewer, higher-capacity drives (300+ GB). Any
experience with the new 500s?
I currently have 3 of the 500GB Hitachis in a RAID-1 configuration
using Linux software RAID. So far, so good.
In response to someone else's question, the mostly random reads are
handled pretty well.
I'll just call it sync access pattern overhead then.
As another data point, I've been adding more and more
drives to a RAID-1 array. Yesterday I added a fourth
disk, which is still syncing.
mdadm --grow /dev/md0 -n4                # raise the device count; array shows degraded
mdadm --manage /dev/md0 --add /dev/sde   # new member starts resyncing
md0 : active raid1
The fundamental problem is that generic RS requires table lookups even
in the common case, whereas RAID-6 uses shortcuts to substantially
speed up the computation in the common case.
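To make the shortcut concrete: RAID-6 works in GF(2^8) modulo
x^8+x^4+x^3+x^2+1 (0x11d), and generating the Q syndrome only ever
multiplies by 2, which takes a shift and a conditional XOR rather than
a table lookup. A sketch:

  # multiply v by 2 in GF(2^8) mod 0x11d, as the RAID-6 Q syndrome does
  gfmul2() {
    local v=$1
    echo $(( ((v << 1) & 0xff) ^ ((v >> 7) * 0x1d) ))
  }
  gfmul2 0x80    # prints 29 (0x1d)

Generic RS multiplies by arbitrary field elements, which is where the
log/antilog table lookups come in.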
If one wanted to support a typical 8-bit RS code (which supports a max of
256 drives, including ECC drives), it would need general GF(2^8)
multiplications, and hence the table lookups.
Interesting paper, thanks. Unfortunately, decode bandwidth
when erasures are present (e.g. drives have failed) is not
discussed. This is by far the speed bottleneck for Reed-Solomon
and a potential hangup for a RS personality in md.
Jeff
Is this a real issue or ignorable Sun propaganda?
-Original Message-
From: I-Gene Leong
Subject: RE: [colo] OT: Server Hardware Recommendations
Date: Mon, 16 Jan 2006 14:10:33 -0800
There was an interesting blog entry about Sun's RAID-Z that discusses
RAID-5's shortcomings:
Consider the following setup, mainly designed for reading random small
files quickly. Normally, this is a quintuply redundant RAID-1.
# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdg1[6] sde1[1] sdb1[4] sdd1[3] sdc1[2]
488383936 blocks [6/4] [_UUUU_]
Controller: Areca ARC 1160 PCI-X 1GB Cache
Are those numbers for the Areca hardware RAID, or for Linux software raid?
--Jeff
The only issue is the obvious one that the manufacturers are usually
fairly vague about the exact usable size of each disk. If you
bought half the disks from one manufacturer and half from another
then you should be able to pair them up and use the minimum of the
size of each pair.
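If it helps, usable sizes are easy to compare before pairing drives up
(a sketch; device names are assumptions):

  for d in /dev/sd[a-d]; do
    printf '%s %s bytes\n' "$d" "$(blockdev --getsize64 "$d")"
  done | sort -k2 -n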
I'm thinking about upgrading from Linux 2.6.14 to some newer kernel -
probably to whatever is in Debian unstable. They're all basically safe
for md and RAID1, right? No gotcha kernel versions to especially avoid?
--Jeff
Typo in last line of this patch.
+ In unsure, say Y.
If linux RAID-10 is still much slower than RAID-1 this discussion is kind
of moot, right?
Jeff
Speaking of hardware disappointments, I was looking at the Norco
DS-1220. This is a rackmount 12 bay SATA enclosure. I like the price
point, but it uses SATA port multipliers, and Jeff Garzik's page says
these are not supported by libata. Any ideas when Linux might be able to
take advantage of such enclosures?
Ok, so hearing all the excitement I ran a check on a multi-disk
RAID-1. One of the RAID-1 disks failed out, maybe by coincidence,
but presumably due to the check. (I also have another disk in
the array deliberately removed as a backup mechanism.) And
of course there is a big mismatch count.
... and all access to the array hangs indefinitely, resulting in unkillable
zombie processes. I have to hard reboot the machine. Any thoughts on the matter?
===
# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sde1[6](F) sdg1[1] sdb1[4] sdd1[3] sdc1[2]
488383936 blocks [6/4]
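For reference, the check was kicked off through the usual md sysfs
knobs (using md1 as above):

  echo check > /sys/block/md1/md/sync_action
  cat /sys/block/md1/md/mismatch_cnt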
Proposed solution is to use a software RAID mirror. Before the backup starts,
break the soft mirror, unmount, and back up the partition.
I use this method for backups once a week.
One challenge is that drives aren't great at streaming data quickly (for the
resync) while also doing a lot of random access. Having a
What you could do is set the number of devices in the array to 3 so
that it always appears to be degraded, then rotate your backup drives
through the array. The number of dirty bits in the bitmap will
steadily grow and so resyncs will take longer. Once it crosses some
threshold you set the
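A sketch of that scheme (device names, and the internal bitmap, are
assumptions):

  mdadm --grow /dev/md0 --raid-devices=3   # third slot stays empty; array shows degraded
  mdadm --grow /dev/md0 --bitmap=internal  # write-intent bitmap, so re-adds are cheap
  mdadm /dev/md0 --add /dev/sde1           # rotate a backup drive in; it syncs up
  # ... later ...
  mdadm /dev/md0 --fail /dev/sde1 --remove /dev/sde1   # rotate it back out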
So the obvious follow up question is: for this scenario does it make sense to
only resync the difference between the two bitmaps? E.g. Drive A will have
a current bitmap, B will have a stale bitmap. Presumably one could get away
with just resyncing the difference.
Or is this too much of a special case?
On Dec 14, 2007 11:13 AM, Jeff Breidenbach [EMAIL PROTECTED] wrote:
So the obvious follow up question is: for this scenario does it make sense to
only resync the difference between the two bitmaps?
Never mind, I see why this won't work.
Does anyone recommend any inexpensive (probably SATA-II) PCI interface
cards?
See this message and surrounding thread from November 2007 on
the linux-ide list.
http://www.mail-archive.com/[EMAIL PROTECTED]/msg12726.html
I'm planning to take some RAID-1 drives out of an old machine
and plop them into a new machine. Hoping that mdadm assemble
will magically work. There's no reason it shouldn't work. Right?
old [ mdadm v1.9.0 / kernel 2.6.17 / Debian Etch / x86-64 ]
new [ mdadm v2.6.2 / kernel 2.6.22 / Ubuntu 7.10 ]
It's not a RAID issue, but make sure you don't have any duplicate volume
names. According to Murphy's Law, if there are two / volumes, the wrong
one will be chosen upon your next reboot.
Thanks for the tip. Since I'm not using volumes or LVM at all, I should be
safe from this particular problem.
Does the new machine have a RAID array already?
Yes, the new machine already has one RAID array.
After sneakernet it should have two RAID arrays. Is
there a gotcha?
I just finished the transfer and it went great. Thanks for all
the advice. I went with the assemble-by-uuid approach in
/etc/mdadm.conf, which worked very well, especially since drive
letters danced around quite a bit between reboots. One of the
disks died during transit, and the redundancy did exactly what
it was supposed to do.
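For anyone repeating this, the relevant mdadm.conf lines look something
like the following (the UUID is a placeholder; take the real one from
mdadm --examine --scan):

  DEVICE partitions
  ARRAY /dev/md0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx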