> They seem to suggest RAID 0 is faster for reading than RAID 1, and I
> can't figure out why.
with R0, streaming from two disks involves no seeks;
with R1, a single stream will have to read, say 0-64K from the first disk,
and 64-128K from the second. these could happen at the same time, and
wo
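The seek pattern being described can be sketched in a few lines of Python. This is an illustration only: the 64K chunk size comes from the example above, and the "alternate chunks per mirror" split is an assumed policy, not necessarily how md actually balances RAID-1 reads.

```python
CHUNK = 64 * 1024  # 64K chunk size, as in the example above

def raid0_read(n_chunks, n_disks=2):
    """RAID0: chunk i lives on disk i % n_disks at offset (i // n_disks) * CHUNK."""
    return [(i % n_disks, (i // n_disks) * CHUNK) for i in range(n_chunks)]

def raid1_split_read(n_chunks, n_disks=2):
    """RAID1: every disk holds every chunk, so to split one sequential
    stream, disk i % n_disks reads chunk i at offset i * CHUNK."""
    return [(i % n_disks, i * CHUNK) for i in range(n_chunks)]

def per_disk_gaps(layout):
    """Bytes each disk must skip over between its consecutive reads."""
    last, gaps = {}, {}
    for disk, off in layout:
        if disk in last:
            gaps[disk] = gaps.get(disk, 0) + (off - last[disk] - CHUNK)
        last[disk] = off
    return gaps

print(per_disk_gaps(raid0_read(8)))        # {0: 0, 1: 0} -- fully contiguous
print(per_disk_gaps(raid1_split_read(8)))  # each disk skips a chunk between reads
```

With RAID0 each disk's offsets are contiguous, so a sequential stream generates no seeks; with the split RAID-1 read, each head must jump over every chunk the other mirror served.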
Tim Moore <[EMAIL PROTECTED]> wrote:
> Andy Smith wrote:
>> Are reads from a 2 device RAID-1 twice as fast as from a single
> md14 : active raid0 sdb13[1] sda13[0]
> md13 : active raid1 sdb12[1] sda12[0]
>
> /dev/md14:
> Timing buffered disk reads: 272 MB in 3.01 seconds = 90.37 MB/sec
> /dev/
Ah, it's something to do with RAID1 drives having to skip blocks that are
read from the other drives, right?
i.e., it's about the head having to move further, instead of to just the
next block? I suppose that's only important for sequential reads, and it
would be 'fixed' to some extent by the dri
Andargor wrote:
--- Max Waterman
<[EMAIL PROTECTED]> wrote:
Andargor wrote:
I haven't found a benchmark that is 100%
reliable/comparable. Of course, it all depends how the
drive is used in production, which may have little
correlation with the benchmarks...
Indeed.
Do you think that if it
Andy Smith wrote:
> ...
For example, are *writes* to a 2 device RAID-0 approaching twice as
fast as to a single device? If not, are they any faster at all?
Are reads from a 2 device RAID-1 twice as fast as from a single
device? If there are benefits, how quickly do they degrade to
nothing as
On Tuesday January 17, [EMAIL PROTECTED] wrote:
> I'm wondering: how well does md currently make use of the fact there
> are multiple devices in the different (non-parity) RAID levels for
> optimising reading and writing?
It does the best it can. Every request from the filesystem goes
directly to
On Tuesday January 17, [EMAIL PROTECTED] wrote:
> I was
> also under the impression that md was going to be phased out and
> replaced by the device mapper.
I wonder where this sort of idea comes from
Obviously individual distr
On Tuesday January 17, [EMAIL PROTECTED] wrote:
> Neil Brown wrote:
> > In general, I think increasing the connection between the filesystem
> > and the volume manager/virtual storage is a good idea. Finding the
> > right balance is not going to be trivial. ZFS has taken one very
> > interesting
Michael Tokarev wrote:
Compare this with my statement about "offline" "reshaper" above:
separate userspace (easier to write/debug compared with kernel
space) program which operates on an inactive array (no locking
needed, no need to worry about other I/O operations going to the
array at the time
> Neil, is this online resizing/reshaping really needed? I understand
> all those words mean a lot for marketing persons - zero downtime,
> online resizing etc, but it is much safer and easier to do that stuff
> 'offline', on an inactive array, like raidreconf does - safer, easier,
> faster, and o
Ross Vandegrift wrote:
On Tue, Jan 17, 2006 at 02:26:11PM +0300, Michael Tokarev wrote:
Raid code is already too fragile, i'm afraid "simple" I/O errors
(which is what we need raid for) may crash the system already, and
am waiting for the next whole system crash due to eg superblock
update error or whatnot.
--- Max Waterman
<[EMAIL PROTECTED]> wrote:
> Andargor wrote:
> >
> > I haven't found a benchmark that is 100%
> > reliable/comparable. Of course, it all depends how the
> > drive is used in production, which may have little
> > correlation with the benchmarks...
>
> Indeed.
>
> Do you think that if it
Neil Brown wrote:
> In general, I think increasing the connection between the filesystem
> and the volume manager/virtual storage is a good idea. Finding the
> right balance is not going to be trivial. ZFS has taken one very
> interesting approach. There are others.
>
Just out of curiosity...
On Tue, Jan 17, 2006 at 02:26:11PM +0300, Michael Tokarev wrote:
> Raid code is already too fragile, i'm afraid "simple" I/O errors
> (which is what we need raid for) may crash the system already, and
> am waiting for the next whole system crash due to eg superblock
> update error or whatnot.
I th
Hello Neil,
On Tue, 17 Jan 2006, NeilBrown wrote:
Greetings.
In line with the principle of "release early", following are 5 patches
against md in 2.6.latest which implement reshaping of a raid5 array.
By this I mean adding 1 or more drives to the array and then re-laying
out all of the data.
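A toy model of what "re-laying out all of the data" implies when a drive is added. This is a hypothetical chunk-mapping sketch only: it ignores raid5's rotating parity, which the real reshape code must also handle.

```python
def layout(n_data_disks, n_chunks):
    """Map logical chunk i -> (stripe, disk slot), parity rotation omitted."""
    return {i: (i // n_data_disks, i % n_data_disks) for i in range(n_chunks)}

old = layout(3, 12)  # 4-drive raid5: 3 data chunks + 1 parity per stripe
new = layout(4, 12)  # after adding a 5th drive: 4 data chunks per stripe

moved = [i for i in range(12) if old[i] != new[i]]
print(moved)  # every chunk past the first stripe lands somewhere new
```

Only the chunks in the very first old stripe stay put; everything after them must be read and rewritten, which is why the reshape has to walk the whole array.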
> "NeilBrown" == NeilBrown <[EMAIL PROTECTED]> writes:
NeilBrown> Previously the array of disk information was included in
NeilBrown> the raid5 'conf' structure which was allocated to an
NeilBrown> appropriate size. This makes it awkward to change the size
NeilBrown> of that array. So we sp
On Tue, Jan 17, 2006 at 11:17:15AM +0300, Michael Tokarev wrote:
> Neil, is this online resizing/reshaping really needed? I understand
> all those words mean a lot for marketing persons - zero downtime,
> online resizing etc, but it is much safer and easier to do that stuff
> 'offline', on an inactive array, like raidreconf does - safer, easier, faster, and o
On Jan 17, 2006, at 06:26, Michael Tokarev wrote:
This is about code complexity/bloat. It's already complex enough.
I rely on the stability of the linux softraid subsystem, and want
it to be reliable. Adding more features, especially non-trivial
ones, does not buy you bugfree raid subsystem
Ross Vandegrift wrote:
>On Thu, Jan 12, 2006 at 11:16:36AM +, David Greaves wrote:
>
>>ok, first off: a 14 device raid1 is 14 times more likely to lose *all*
>>your data than a single device.
>
>No, this is completely incorrect. Let A denote the event that a single
>disk has fai
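The disagreement above comes down to two different events: "some disk fails" (roughly 14x more likely with 14 disks) versus "all data is lost" (which for a 14-way raid1 requires *every* mirror to fail). A quick sketch with an assumed, purely illustrative per-disk failure probability:

```python
p = 0.05  # assumed probability one disk fails in some period (illustrative)
n = 14    # 14-device raid1

p_single_loss = p                 # single disk: data gone if it fails
p_raid1_loss = p ** n             # raid1: data gone only if ALL n mirrors fail
p_any_failure = 1 - (1 - p) ** n  # some disk needs replacing (~n times as often)

print(f"lose data, single disk: {p_single_loss}")
print(f"lose data, 14-way raid1: {p_raid1_loss}")
print(f"replace a disk, 14-way raid1: {p_any_failure:.3f}")
```

So the array sees failures far more often than a single disk, but (assuming independent failures) it is astronomically less likely to lose all your data.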
I'm wondering: how well does md currently make use of the fact there
are multiple devices in the different (non-parity) RAID levels for
optimising reading and writing?
For example, are *writes* to a 2 device RAID-0 approaching twice as
fast as to a single device? If not, are they any faster at all?
2006/1/17, Michael Tokarev <[EMAIL PROTECTED]>:
> Sander wrote:
> This is about code complexity/bloat. It's already complex enough.
> I rely on the stability of the linux softraid subsystem, and want
> it to be reliable. Adding more features, especially non-trivial
> ones, does not buy you bugfree raid subsystem
Sander wrote:
> Michael Tokarev wrote (ao):
[]
>>Neil, is this online resizing/reshaping really needed? I understand
>>all those words mean a lot for marketing persons - zero downtime,
>>online resizing etc, but it is much safer and easier to do that stuff
>>'offline', on an inactive array, like raidreconf does - safer, easier, faster, and o
On Tuesday January 17, jeff@jab.org wrote:
> Is this a real issue or ignorable Sun propaganda?
Well the 'raid-5 write hole' is old news. It's been discussed on
this list several times and doesn't seem to actually stop people
getting a lot of value out of software raid5.
Nonetheless, their ra
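The raid-5 write hole itself is easy to demonstrate with XOR parity. A minimal sketch, assuming a 3-disk stripe (two data chunks plus parity); the helper names are mine, not md's:

```python
from functools import reduce

def parity(chunks):
    """XOR the chunks column-wise, as raid5 parity does."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

# One stripe of a 3-disk raid5: two data chunks and their parity.
d0, d1 = b"\x11\x11", b"\x22\x22"
p = parity([d0, d1])

# Update d0, but crash before the matching parity write reaches disk.
d0_new = b"\x44\x44"
stale_p = p  # parity was never rewritten

# If d1's disk now dies, reconstructing it from d0_new and the stale
# parity yields garbage instead of d1 -- the "write hole".
d1_reconstructed = parity([d0_new, stale_p])
print(d1_reconstructed == d1)  # False
```

Note the hole only bites when a crash mid-write is later combined with a disk failure, which is why journaling or battery-backed caches are the usual mitigations.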
NeilBrown wrote (ao):
> +config MD_RAID5_RESHAPE
Would this also be possible for raid6?
> + bool "Support adding drives to a raid-5 array (highly experimental)"
> + depends on MD_RAID5 && EXPERIMENTAL
> + ---help---
> + A RAID-5 set can be expanded by adding extra drives. This
>
Michael Tokarev wrote (ao):
> NeilBrown wrote:
> > Greetings.
> >
> > In line with the principle of "release early", following are 5
> > patches against md in 2.6.latest which implement reshaping of a
> > raid5 array. By this I mean adding 1 or more drives to the array and
> > then re-laying out all of the data.
2006/1/17, Zhikun Wang <[EMAIL PROTECTED]>:
> hi,
> I am a new guy in Linux MD. I want to add some functions into md source
> code to do research. But I cannot compile MD source code as modules
> properly. Every time I need to put the source code at the directory and build
> the whole kernel
2006/1/17, Michael Tokarev <[EMAIL PROTECTED]>:
> NeilBrown wrote:
> > Greetings.
> >
> > In line with the principle of "release early", following are 5 patches
> > against md in 2.6.latest which implement reshaping of a raid5 array.
> > By this I mean adding 1 or more drives to the array and then re-laying out all of the data.
Andargor wrote:
--- Max Waterman
<[EMAIL PROTECTED]> wrote:
Of course, bonnie++ only works on mounted devices, but gives me
reasonable (but not great) numbers (130MB/s) which don't seem to vary
too much with the kernel version.
Out of curiosity, have you compared bonnie++ results
with and w
Is this a real issue or ignorable Sun propaganda?
-Original Message-
From: I-Gene Leong
Subject: RE: [colo] OT: Server Hardware Recommendations
Date: Mon, 16 Jan 2006 14:10:33 -0800
There was an interesting blog entry out in relation to Sun's RAID-Z
talking about RAID-5 shortcomings:
ht
On Mon, 16 Jan 2006, Wolfram Schlich wrote:
Hi,
I'm experiencing a problem on a 2.2.16C37_III driven Cobalt RaQ4
after I add a new 2nd disk to a RAID1.
I'm uncertain whether this is a RAID, ext2fs or even a hardware
issue, that's why I'm writing both to ext2-devel and linux-raid.
Setup:
- /de
NeilBrown wrote:
> Greetings.
>
> In line with the principle of "release early", following are 5 patches
> against md in 2.6.latest which implement reshaping of a raid5 array.
> By this I mean adding 1 or more drives to the array and then re-laying
> out all of the data.
Neil, is this online resi