Peter Grandi wrote:
In general, I'd use RAID10 (http://WWW.BAARF.com/), RAID5 in
Interesting movement. What do you think is their stance on Raid Fix? :)
Oliver Martin wrote:
Interesting. I'm seeing a 20% performance drop too, with default RAID
and LVM chunk sizes of 64K and 4M, respectively. Since 64K divides 4M
evenly, I'd think there shouldn't be such a big performance penalty.
I am no expert, but as far as I have read you must not only
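For what it's worth, one quick way to compare the two values on a live setup (device and VG names here are only examples) is:

mdadm --detail /dev/md0 | grep 'Chunk Size'
pvs -o +pe_start /dev/md0
vgdisplay vg0 | grep 'PE Size'

If pe_start is not a multiple of the chunk size, I/O that looks chunk-aligned at the LV level ends up straddling chunk boundaries on the array, and the penalty shows up even though 64K divides 4M.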
Janek Kozicki wrote:
writing on raid10 is supposed to be half the speed of reading. That's
because it must write to both mirrors.
I am not 100% certain about the following rules, but afaik any raid
configuration has a theoretical[1] maximum read speed of the combined speed of
all disks in
Marcin Krol wrote:
On Tuesday 05 February 2008 21:12:32, Neil Brown wrote:
% mdadm --zero-superblock /dev/sdb1
mdadm: Couldn't open /dev/sdb1 for write - not zeroing
That's weird.
Why can't it open it?
Hell if I know. First time I see such a thing.
Maybe you aren't running as root (The
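For what it's worth, the same error also appears when the partition is still held open by an md array (or anything else). A quick sanity check, with /dev/md0 used purely as an example:

cat /proc/mdstat
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sdb1

If /proc/mdstat still lists sdb1 under an active array, the superblock cannot be zeroed until that array is stopped.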
Bill Davidsen wrote:
Richard Scobie wrote:
A followup for the archives:
I found this document very useful:
http://lists.us.dell.com/pipermail/linux-poweredge/2003-July/008898.html
After modifying my grub.conf to refer to (hd0,0), reinstalling grub on
hdc with:
grub device (hd0) /dev/hdc
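For completeness, the usual legacy-grub shell sequence looks roughly like this, assuming /boot lives on the first partition of that disk:

grub> device (hd0) /dev/hdc
grub> root (hd0,0)
grub> setup (hd0)
grub> quit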
Moshe Yudkowsky wrote:
over the other. For example, I've now learned that if I want to set up a
RAID1 /boot, it must actually be 1.2 or grub won't be able to read it.
(I would therefore argue that if the new version ever becomes default,
then the default sub-version ought to be 1.2.)
In the
Peter Rabbitson wrote:
Moshe Yudkowsky wrote:
Here's a baseline question: if I create a RAID10 array using default
settings, what do I get? I thought I was getting RAID1+0; am I really?
Maybe you are, depending on your settings, but this is beside the point.
No matter what 1+0 you have
Lars Schimmer wrote:
Hi!
Due to a very bad idea/error, I zeroed the first GB of /dev/md0.
Now fdisk doesn't find any disk on /dev/md0.
Any idea on how to recover?
It largely depends on what /dev/md0 is, and what was on /dev/md0. Please provide
very detailed info:
* Was the MD device partitioned?
Lars Schimmer wrote:
I am activating the backup right now - it was OpenAFS with some RW volumes -
fairly easy to back up, but...
If it's hard to recover the raid data, I'll just recreate the raid and forget
the old data on it.
It is not that hard to recover the raid itself; however, the ext3 on top of it is
most likely
Michael Tokarev wrote:
With 5-drive linux raid10:
A B C D E
0 0 1 1 2
2 3 3 4 4
5 5 6 6 7
7 8 8 9 9
10 10 11 11 12
...
A and B can't both be removed - chunks 0 and 5 would be lost. A and C CAN be
removed, as can A and D. But not A and E - that loses 2 and 7. And so on.
I stand corrected by Michael,
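As an illustration of where such a layout comes from, an array like the one in the table can be created with something along these lines (device names are made up):

mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=5 /dev/sd[abcde]1

With near=2 on an odd number of disks the copies rotate across the drives, which is exactly why some two-disk failures are survivable and others are not.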
Keld Jørn Simonsen wrote:
On Wed, Jan 30, 2008 at 03:47:30PM +0100, Peter Rabbitson wrote:
Michael Tokarev wrote:
With 5-drive linux raid10:
A B C D E
0 0 1 1 2
2 3 3 4 4
5 5 6 6 7
7 8 8 9 9
10 10 11 11 12
...
A and B can't both be removed - chunks 0 and 5 would be lost. A and C CAN be removed
Tim Southerwood wrote:
David Greaves wrote:
IIRC Doug Ledford did some digging wrt lilo + grub and found that 1.1 and 1.2
wouldn't work with them. I'd have to review the thread though...
David
-
For what it's worth, that was my finding too. -e 0.9+1.0 are fine with
GRUB, but 1.1 and 1.2
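In practical terms that means a GRUB-bootable /boot mirror wants its superblock at the end of the member devices, e.g. (hypothetical devices):

mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda1 /dev/sdb1

With -e 0.90 or 1.0 each member still looks like a plain filesystem from the start of the partition, which is what legacy GRUB relies on.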
Russell Coker wrote:
Are there plans for supporting a NVRAM write-back cache with Linux software
RAID?
AFAIK even today you can place the bitmap in an external file residing on a
file system which in turn can reside on the nvram...
Peter
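As a sketch (path and device purely illustrative), an external bitmap file can be attached to an existing array with:

mdadm --grow /dev/md0 --bitmap=/nvram/md0-bitmap

The file has to live on a filesystem that is not itself on /dev/md0.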
Moshe Yudkowsky wrote:
One of the puzzling things about this is that I conceive of RAID10 as
two RAID1 pairs, with RAID0 on top to join them into one large drive.
However, when I use --level=10 to create my md drive, I cannot find out
which two pairs are the RAID1's: the --detail doesn't
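For reference, the nested construction Moshe describes would be built by hand roughly like this (device names are examples only):

mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
mdadm --create /dev/md3 --level=0 --raid-devices=2 /dev/md1 /dev/md2

With --level=10 the single md driver does all of this internally, which is why the RAID1 pairs are not visible as separate devices in --detail.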
Michael Tokarev wrote:
Raid10 IS RAID1+0 ;)
It's just that the linux raid10 driver can utilize more... interesting ways
to lay out the data.
This is misleading, and adds to the confusion that existed even before linux
raid10. When you say raid10 in the hardware raid world, what do you mean?
Stripes
Moshe Yudkowsky wrote:
Here's a baseline question: if I create a RAID10 array using default
settings, what do I get? I thought I was getting RAID1+0; am I really?
Maybe you are, depending on your settings, but this is beside the point. No
matter what 1+0 you have (linux, classic, or
Moshe Yudkowsky wrote:
Keld Jørn Simonsen wrote:
raid10 has a number of ways to do the layout, namely the near, far and
offset variants: layout=n2, f2, o2 respectively.
The default layout, according to --detail, is near=2, far=1. If I
understand what's been written so far on the topic, that's
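For reference, the layout is selected at creation time; a rough example (devices hypothetical):

mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=4 /dev/sd[abcd]1

n2, f2 and o2 ask for near, far and offset copies respectively; leaving --layout out gives the near=2 default that --detail reports.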
Hello,
It seems that mdadm/md do not perform proper sanity checks before adding a
component to a degraded array. If the size of the new component is just right,
the superblock information will overlap with the data area. This will happen
without any error indications in the syslog or
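A manual check before adding a replacement (names illustrative) is simply to compare the sizes yourself:

mdadm --detail /dev/md0 | grep 'Used Dev Size'
blockdev --getsize64 /dev/sdc1

The new component needs room for the data area plus the superblock, not just the data area.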
Neil Brown wrote:
On Monday January 28, [EMAIL PROTECTED] wrote:
Hello,
It seems that mdadm/md do not perform proper sanity checks before adding a
component to a degraded array. If the size of the new component is just right,
the superblock information will overlap with the data area. This
David Greaves wrote:
Jan Engelhardt wrote:
This makes 1.0 the default sb type for new arrays.
IIRC there was a discussion a while back on renaming mdadm options (google "Time
to deprecate old RAID formats?") and the superblocks to emphasise the location
and data structure. Would it be good to
Hello,
I cannot seem to extend a raid volume of mine slightly. I issue
the command:
mdadm --grow --size=max /dev/md5
It completes and nothing happens. The kernel log is empty; however, the event
counter on the drive is incremented by +3.
Here is what I have (yes I know that I am
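A first thing to compare in a case like this (device names illustrative) is what md thinks it uses per component versus what the components actually offer:

mdadm --detail /dev/md5 | grep -E 'Array Size|Used Dev Size'
mdadm --examine /dev/sdb5

If the per-component size already matches the smallest member, --size=max is a no-op apart from the superblock update, which would explain the event counter moving with no resync.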
Justin Piszcz wrote:
mdadm --create \
--verbose /dev/md3 \
--level=5 \
--raid-devices=10 \
--chunk=1024 \
--force \
--run \
/dev/sd[cdefghijkl]1
Justin.
Interesting, I came up with the same results (1M chunk being superior)
with a completely different
Justin Piszcz wrote:
On Thu, 28 Jun 2007, Peter Rabbitson wrote:
Interesting, I came up with the same results (1M chunk being superior)
with a completely different raid set with XFS on top:
...
Could it be attributed to XFS itself?
Peter
Good question. By the way, how much cache do
Bernd Schubert wrote:
Try to increase the read-ahead size of your lvm devices:
blockdev --setra 8192 /dev/raid10/space
or increase it at least to the same size as that of your raid (blockdev
--getra /dev/mdX).
This did the trick, although I am still lagging behind the raw md device
by about 3 -
Hi,
I am about to create a large raid10 array, and I know for a fact that
all the components are identical (dd if=/dev/zero of=/dev/sdXY). Is it
safe to pass --assume-clean and spare 6 hours of reconstruction, or are
there some hidden dangers in doing so?
Thanks
Peter
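One middle ground (a sketch, not a recommendation) is to create the array with --assume-clean and then let md verify it in the background:

mdadm --create /dev/md0 --level=10 --layout=o2 --raid-devices=4 --assume-clean /dev/sd[abcd]1
echo check > /sys/block/md0/md/sync_action
cat /sys/block/md0/md/mismatch_cnt

The check pass is read-only, and on freshly zeroed components mismatch_cnt should stay at 0.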
Hi,
This question might be better suited for the lvm mailing list, but
raid10 being rather new, I decided to ask here first. Feel free to
direct me elsewhere.
I want to use lvm on top of a raid10 array, as I need the snapshot
capability for backup purposes. The tuning and creation of the
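The basic stack, with purely illustrative names and sizes, would look like:

pvcreate /dev/md0
vgcreate raid10 /dev/md0
lvcreate -L 200G -n space raid10
lvcreate -s -L 10G -n space_snap /dev/raid10/space

The snapshot LV only has to be large enough to absorb the writes that happen while the backup runs.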
Iustin Pop wrote:
On Wed, Jun 06, 2007 at 01:31:44PM +0200, Peter Rabbitson wrote:
Peter Rabbitson wrote:
Hi,
Is there a way to list the _number_ in addition to the name of a
problematic component? The kernel trend to move all block devices into
the sdX namespace combined with the dynamic
Gabor Gombas wrote:
On Wed, Jun 06, 2007 at 02:23:31PM +0200, Peter Rabbitson wrote:
This would not work as arrays are assembled by the kernel at boot time, at
which point there is no udev or anything else for that matter other than
/dev/sdX. And I am pretty sure my OS (debian) does
Gabor Gombas wrote:
On Wed, Jun 06, 2007 at 04:24:31PM +0200, Peter Rabbitson wrote:
So I was asking if the component _number_, which is unique to a specific
device regardless of the assembly mechanism, can be reported in case of a
failure.
So you need to write an event-handling script
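A sketch of such a script (paths and names made up). mdadm runs the program with the event name, the md device, and possibly the component device as arguments:

mdadm --monitor --scan --daemonise --program=/usr/local/sbin/md-event

and /usr/local/sbin/md-event could be as small as:

#!/bin/sh
# $1 = event (e.g. Fail), $2 = md device, $3 = component device (if any)
logger "md event: $1 on $2 ${3:+component $3}"
mdadm --detail "$2" | logger

Dumping the whole --detail table at event time records the slot numbers while the /dev/sdX names still resolve to the right hardware.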
Hi,
Is there a way to list the _number_ in addition to the name of a
problematic component? The kernel trend to move all block devices into
the sdX namespace combined with the dynamic name allocation renders
messages like "/dev/sdc1 has problems" meaningless. It would make remote
server
Tomasz Chmielewski wrote:
I have a RAID-10 setup of four 400 GB HDDs. As the data grows by several
GBs a day, I want to migrate it somehow to RAID-5 on separate disks in a
separate machine.
Which would be easy, if I didn't have to do it online, without stopping
any services.
Your
Neil Brown wrote:
On Friday May 4, [EMAIL PROTECTED] wrote:
Peter Rabbitson wrote:
Hi,
I asked this question back in March but received no answers, so here it
goes again. Is it safe to replace raid1 with raid10 where the number of
disks is equal to the number of far/near/offset copies? I
Neil Brown wrote:
On Monday May 7, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
On Friday May 4, [EMAIL PROTECTED] wrote:
Peter Rabbitson wrote:
Hi,
I asked this question back in March but received no answers, so here it
goes again. Is it safe to replace raid1 with raid10 where the number
Bill Davidsen wrote:
Not worth a repost, since I was way over-answering his question...
Erm... and now you made me curious :) Please share your thoughts if it is
not too much trouble. Thank you for your time.
Peter
Chris Wedgwood wrote:
snip
Also, 'dd performance' varies between the start of a disk and the end.
Typically you get better performance at the start of the disk so dd
might not be a very good benchmark here.
Hi,
Sorry for hijacking this thread, but I was actually planning to ask this
very
Hi,
I asked this question back in March but received no answers, so here it
goes again. Is it safe to replace raid1 with raid10 where the number of
disks is equal to the number of far/near/offset copies? I understand it
has the downside of not being a bit-by-bit mirror of a plain filesystem.
Are
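For the degenerate case the replacement is straightforward; e.g. a two-disk mirror-equivalent (hypothetical devices):

mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=2 /dev/sda1 /dev/sdb1

With near copies on two disks each member ends up holding a full copy of the data, much like raid1 would.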
dean gaudet wrote:
On Thu, 22 Mar 2007, Peter Rabbitson wrote:
dean gaudet wrote:
On Thu, 22 Mar 2007, Peter Rabbitson wrote:
Hi,
How does one determine the XFS sunit and swidth sizes for a software
raid10
with 3 copies?
mkfs.xfs uses the GET_ARRAY_INFO ioctl to get the data it needs from
Hi,
How does one determine the XFS sunit and swidth sizes for a software
raid10 with 3 copies?
Thanks
Peter
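If the automatic detection gets it wrong, the values can always be forced by hand. A rough sketch, assuming (purely as an example) a 6-disk near=3 array with 256 KiB chunks, i.e. 2 data-bearing chunks per stripe:

mkfs.xfs -d su=256k,sw=2 /dev/md0

su is the md chunk size and sw the number of distinct data chunks per stripe, which for near layouts is raid-devices divided by the number of copies.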
Peter Rabbitson wrote:
I have been trying to figure out the best chunk size for raid10 before
migrating my server to it (currently raid1). I am looking at 3 offset
stripes, as I want to have two drive failure redundancy, and offset
striping is said to have the best write performance, with read
Hi,
I just tried an idea I got after fiddling with raid10 and to my dismay
it worked as I thought it would. I used two small partitions on separate
disks to create a raid1 array. Then I did dd if=/dev/md2 of=/dev/null. I
got only one of the disks reading. Nothing unexpected. Then I created a
Neil Brown wrote:
The different block sizes in the reads will make very little
difference to the results as the kernel will be doing read-ahead for
you. If you want to really test throughput at different block sizes
you need to insert random seeks.
Neil, thank you for the time and effort to
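A crude way to add seeks with nothing more than dd (numbers arbitrary, array assumed to be at least ~128 GiB) is to time reads of single chunks from random offsets:

#!/bin/bash
# time 200 random 4 MiB reads spread over roughly the first 128 GiB of the array
time for i in $(seq 1 200); do
    dd if=/dev/md0 of=/dev/null bs=4M count=1 skip=$RANDOM iflag=direct 2>/dev/null
done

$RANDOM tops out at 32767, so the offsets cover about the first 128 GiB in 4 MiB steps; iflag=direct keeps the page cache from hiding the seeks.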
Richard Scobie wrote:
Peter Rabbitson wrote:
Is this anywhere near the top of the todo list, or are raid10
users for now bound to a maximum read speed of a two-drive combination?
I have not done any testing with the md native RAID10 implementations,
so perhaps there are some other
Bill Davidsen wrote:
Peter Rabbitson wrote:
Hi,
I have been trying to figure out the best chunk size for raid10 before
By any chance did you remember to increase stripe_cache_size to match
the chunk size? If not, there you go.
At the end of /usr/src/linux/Documentation/md.txt
Neil Brown wrote:
When we write to a raid1, the data is DMAed from memory out to each
device independently, so if the memory changes between the two (or
more) DMA operations, you will get inconsistency between the devices.
Does this apply to raid10 devices too? And in the case of LVM, if swap is
Neil Brown wrote:
On Tuesday March 6, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
When we write to a raid1, the data is DMAed from memory out to each
device independently, so if the memory changes between the two (or
more) DMA operations, you will get inconsistency between the devices.
Does
Hi,
I need to use a raid volume for swap, utilizing partitions from 4
physical drives I have available. From my experience I have three
options - raid5, raid10 with 2 offset chunks, and two raid 1 volumes
that are swapon-ed with equal priority. However I have a hard time
figuring out what to
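For the record, the last variant looks like this in /etc/fstab (devices are just examples); equal pri= values make the kernel stripe swap across both volumes:

/dev/md2  none  swap  sw,pri=1  0  0
/dev/md3  none  swap  sw,pri=1  0  0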
The fact that you mention you are using partitions on disks that
possibly have other partitions doing other things means raw performance
will be compromised anyway.
Regards,
Richard
You know I never thought about it, but you are absolutely right. The
times at which my memory usage
On Thu, Mar 01, 2007 at 06:12:32PM -0500, Bill Davidsen wrote:
I have three drives, with some various partitions, currently set up like
this.
drive0    drive1    drive2
hdb1 hdi1 hdk1
\_RAID1/
hdb2 hdi2 hdk2
unused \___RAID0/
Hi,
I think I've hit a reproducible bug in the raid 10 driver, tried on two
different machines with kernels 2.6.20 and 2.6.18. This is a script to
simulate the problem:
==
#!/bin/bash
modprobe loop
for ID in 1 2 3 ; do
echo -n Creating loopback device $ID...
dd
After I sent the message I received the 6 patches from Neil Brown. I
applied the first one ("Fix Raid10 recovery problem") and it seems to be
taking care of the issue I am describing. Probably due to the rounding
fixes.
Thanks