On Mon, Jan 23, 2006 at 11:26:01AM +0100, Lars Marowsky-Bree wrote:
On 2006-01-23T10:44:18, Heinz Mauelshagen [EMAIL PROTECTED] wrote:
Besides, stacking between dm devices so far (ie, if I look how kpartx
does it, or LVM2 on top of MPIO etc, which works just fine) is via the
block
On 2006-01-23T11:38:51, Heinz Mauelshagen [EMAIL PROTECTED] wrote:
Ok, I still didn't get that. I must be slow.
Did you implement some DM-internal stacking now to avoid the above
mentioned complexity?
Otherwise, even DM-on-DM is still stacked via the block device
abstraction...
On Mon, Jan 23, 2006 at 11:45:22AM +0100, Lars Marowsky-Bree wrote:
On 2006-01-23T11:38:51, Heinz Mauelshagen [EMAIL PROTECTED] wrote:
Ok, I still didn't get that. I must be slow.
Did you implement some DM-internal stacking now to avoid the above
mentioned complexity?
On Mon, Jan 23, 2006 at 02:54:20PM +0200, Ville Herva wrote:
If you really want the rest of us to convert from md to lvm, you should
perhaps give some attention to the brittle userland (scripts and
binaries).
If you do not like the LVM userland, you might want to try the EVMS userland,
On Mon, Jan 23, 2006 at 02:54:20PM +0200, Ville Herva wrote:
On Mon, Jan 23, 2006 at 10:44:18AM +0100, you [Heinz Mauelshagen] wrote:
I use them regularly to play with md and other stuff...
Me too but for production, I want to avoid the
additional stacking overhead and complexity.
On Mon, Jan 23, 2006 at 02:54:28PM +0100, you [Heinz Mauelshagen] wrote:
It is very tedious to have to debug a production system for a few hours in
order to get the rootfs mounted after each kernel update.
The lvm error messages give almost no clue about the problem.
Worse yet,
Neil Brown wrote:
On Saturday January 21, [EMAIL PROTECTED] wrote:
On the first try I neglected to read the directions and increased the
number of devices first (which worked) and then attempted to add the
physical device (which didn't work; at least not the way I intended).
Thanks, this is
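For anyone hitting the same thing in the archives: the intended order is presumably the reverse, i.e. add the new disk as a spare first and only then grow the device count. A sketch with hypothetical device names:

```shell
# Hypothetical names (/dev/md0, /dev/sdf1); run as root on a real array.
# This sketches the intended order, not a tested recipe.

# 1. Add the new disk as a spare first:
mdadm /dev/md0 --add /dev/sdf1

# 2. Then ask md to reshape onto it (here: 4 -> 5 devices):
mdadm --grow /dev/md0 --raid-devices=5

# 3. Watch the restripe progress:
cat /proc/mdstat
```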
Ville Herva wrote:
PS: Speaking of debugging failing initrd init scripts; it would be nice if
the kernel gave an error message on wrong initrd format rather than silently
failing... Yes, I forgot to make the cpio with the -H newc option :-/.
LOL, yea, that one got me too when I was first
On Saturday January 21, [EMAIL PROTECTED] wrote:
NeilBrown [EMAIL PROTECTED] wrote:
In line with the principle of release early, following are 5 patches
against md in 2.6.latest which implement reshaping of a raid5 array.
By this I mean adding 1 or more drives to the array and then
On Monday January 23, [EMAIL PROTECTED] wrote:
NeilBrown wrote:
In line with the principle of release early, following are 5 patches
against md in 2.6.latest which implement reshaping of a raid5 array.
By this I mean adding 1 or more drives to the array and then re-laying
out all of the
Hello Neil ,
On Mon, 23 Jan 2006, Neil Brown wrote:
On Monday January 23, [EMAIL PROTECTED] wrote:
NeilBrown wrote:
In line with the principle of release early, following are 5 patches
against md in 2.6.latest which implement reshaping of a raid5 array.
By this I mean adding 1 or more
On Sunday January 22, [EMAIL PROTECTED] wrote:
Hello Neil ,
On Mon, 23 Jan 2006, Neil Brown wrote:
On Monday January 23, [EMAIL PROTECTED] wrote:
NeilBrown wrote:
In line with the principle of release early, following are 5 patches
against md in 2.6.latest which implement
NeilBrown [EMAIL PROTECTED] wrote:
In line with the principle of release early, following are 5 patches
against md in 2.6.latest which implement reshaping of a raid5 array.
By this I mean adding 1 or more drives to the array and then re-laying
out all of the data.
I've been looking forward to
Hi,
On Fri, 20 Jan 2006, Reuben Farrelly wrote:
On 20/01/2006 11:32 a.m., Neil Brown wrote:
The in-kernel autodetection in md is purely legacy support as far as I
am concerned. md does volume detection in user space via 'mdadm'.
Hrm. *puzzled look* How would I then start my md0 raid-1
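(For the archives: the usual answer is to let the initramfs or rc scripts run mdadm itself. A minimal sketch, with a placeholder UUID and hypothetical device names:)

```shell
# /etc/mdadm.conf fragment -- the uuid below is a placeholder:
#   DEVICE partitions
#   ARRAY /dev/md0 uuid=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
#
# Then assemble in user space instead of relying on in-kernel autodetect:
mdadm --assemble --scan
# or, without a config file, name the members explicitly:
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1
```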
On 2006-01-19T21:17:12, Phillip Susi [EMAIL PROTECTED] wrote:
I am under the impression that dm is simpler/cleaner than md. That
impression very well may be wrong, but if it is simpler, then that's a
good thing.
That impression is wrong in that general form. Both have advantages and
On Fri, Jan 20, 2006 at 10:43:13AM +1100, Neil Brown wrote:
dm and md are just two different interface styles to various bits of
this. Neither is clearly better than the other, partly because
different people have different tastes.
Here's why it's great to have both: they have different
On Fri, Jan 20, 2006 at 11:53:06AM +0100, Lars Marowsky-Bree wrote:
On 2006-01-19T21:17:12, Phillip Susi [EMAIL PROTECTED] wrote:
I am under the impression that dm is simpler/cleaner than md. That
impression very well may be wrong, but if it is simpler, then that's a
good thing.
On Thu, Jan 19, 2006 at 09:17:12PM -0500, Phillip Susi wrote:
Neil Brown wrote:
Maybe the problem here is thinking of md and dm as different things.
Try just not thinking of them at all.
Think about it like this:
The linux kernel supports lvm
The linux kernel supports multipath
The
On Fri, Jan 20, 2006 at 10:43:13AM +1100, Neil Brown wrote:
On Thursday January 19, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
The in-kernel autodetection in md is purely legacy support as far as I
am concerned. md does volume detection in user space via 'mdadm'.
What other
On 2006-01-20T19:38:40, Heinz Mauelshagen [EMAIL PROTECTED] wrote:
However, rewriting the RAID personalities for DM is a thing only a fool
would do without really good cause.
Thanks Lars ;)
Well, I assume you have a really good cause then, don't you? ;-)
Sincerely,
Lars
On 2006-01-20T19:36:21, Heinz Mauelshagen [EMAIL PROTECTED] wrote:
Then 'dmraid' (or a similar tool) can use 'dm' interfaces for some
raid levels and 'md' interfaces for others.
Yes, that's possible but there's recommendations to have a native target
for dm to do RAID5, so I started to
On Fri, Jan 20, 2006 at 11:57:24PM +0100, Lars Marowsky-Bree wrote:
On 2006-01-20T19:36:21, Heinz Mauelshagen [EMAIL PROTECTED] wrote:
Then 'dmraid' (or a similar tool) can use 'dm' interfaces for some
raid levels and 'md' interfaces for others.
Yes, that's possible but there's
On Fri, Jan 20, 2006 at 11:09:51PM +0100, Lars Marowsky-Bree wrote:
On 2006-01-20T19:38:40, Heinz Mauelshagen [EMAIL PROTECTED] wrote:
However, rewriting the RAID personalities for DM is a thing only a fool
would do without really good cause.
Thanks Lars ;)
Well, I assume you have
On 2006-01-21T01:01:42, Heinz Mauelshagen [EMAIL PROTECTED] wrote:
Why not provide a dm-md wrapper which could then
load/interface to all md personalities?
As we want to enrich the mapping flexibility (ie, multi-segment fine grained
mappings) of dm by adding targets as we go, a certain
On Sat, Jan 21, 2006 at 01:03:44AM +0100, Lars Marowsky-Bree wrote:
On 2006-01-21T01:01:42, Heinz Mauelshagen [EMAIL PROTECTED] wrote:
Why not provide a dm-md wrapper which could then
load/interface to all md personalities?
As we want to enrich the mapping flexibility (ie, multi-segment
On 2006-01-21T01:08:06, Heinz Mauelshagen [EMAIL PROTECTED] wrote:
A dm-md wrapper would give you the same?
No, we'd need more complex stacking to achieve such mappings.
Think lvm2 and logical volume level raid5.
How would you not get that if you had a wrapper around md which made it
into an dm
Andre' Breiler wrote:
Hi,
On Fri, 20 Jan 2006, Reuben Farrelly wrote:
On 20/01/2006 11:32 a.m., Neil Brown wrote:
The in-kernel autodetection in md is purely legacy support as far as I
am concerned. md does volume detection in user space via 'mdadm'.
Hrm. *puzzled look* How
Use either for raid0 (I don't think dm has particular advantages
for md or md over dm).
I measured this a few months ago, and was surprised to find that
DM raid0 was very noticeably slower than MD raid0. Same machine,
same disks/controller/kernel/settings/stripe-size. I didn't try
to
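A rough way to reproduce that kind of comparison (device names are hypothetical; the reads must bypass the page cache to be meaningful):

```shell
# Hypothetical device names; run as root. Sequential-read comparison
# between an md stripe and a dm stripe over the same disks.
sync
echo 3 > /proc/sys/vm/drop_caches   # drop caches so reads hit the disks
dd if=/dev/md0 of=/dev/null bs=1M count=4096
echo 3 > /proc/sys/vm/drop_caches
dd if=/dev/mapper/stripe0 of=/dev/null bs=1M count=4096
```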
On 2006-01-19T21:12:02, Jan Engelhardt [EMAIL PROTECTED] wrote:
Use md for raid1, raid5, raid6 - anything with redundancy.
Use dm for multipath, crypto, linear, LVM, snapshot
There are pairs of files that look like they would do the same thing:
raid1.c - dm-raid1.c
linear.c -
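For comparison, the dm side of those pairs is driven entirely from user space via dmsetup tables; a hedged sketch of a two-disk stripe (device names are made up, and the length would need rounding to a multiple of stripes * chunk size):

```shell
# Hypothetical: stripe two partitions with dm's "striped" target.
# Table line: <start> <length> striped <#stripes> <chunk_sectors> <dev> <offset> ...
SECTORS=$(( $(blockdev --getsz /dev/sda1) + $(blockdev --getsz /dev/sdb1) ))
echo "0 $SECTORS striped 2 128 /dev/sda1 0 /dev/sdb1 0" | dmsetup create stripe0
dmsetup table stripe0   # show the resulting mapping
```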
On Thursday January 19, [EMAIL PROTECTED] wrote:
I'm currently of the opinion that dm needs a raid5 and raid6 module
added, then the user land lvm tools fixed to use them, and then you
could use dm instead of md. The benefit being that dm pushes things
like volume autodetection and
I'm currently of the opinion that dm needs a raid5 and raid6 module
added, then the user land lvm tools fixed to use them, and then you
could use dm instead of md. The benefit being that dm pushes things
like volume autodetection and management out of the kernel to user space
where it
Neil Brown wrote:
The in-kernel autodetection in md is purely legacy support as far as I
am concerned. md does volume detection in user space via 'mdadm'.
What other things like that were you thinking of?
Oh, I suppose that's true. Well, another thing is your new mods to
support on the fly
On Thursday January 19, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
The in-kernel autodetection in md is purely legacy support as far as I
am concerned. md does volume detection in user space via 'mdadm'.
What other things like that were you thinking of?
Oh, I suppose that's true.
Neil Brown wrote:
Maybe the problem here is thinking of md and dm as different things.
Try just not thinking of them at all.
Think about it like this:
The linux kernel supports lvm
The linux kernel supports multipath
The linux kernel supports snapshots
The linux kernel supports raid0
The
On 20/01/2006 11:32 a.m., Neil Brown wrote:
On Thursday January 19, [EMAIL PROTECTED] wrote:
I'm currently of the opinion that dm needs a raid5 and raid6 module
added, then the user land lvm tools fixed to use them, and then you
could use dm instead of md. The benefit being that dm pushes
Michael Tokarev wrote (ao):
Most problematic case so far, which I described numerous times (like,
why linux raid isn't Raid really, why it can be worse than plain
disk) is when, after single sector read failure, md kicks the whole
disk off the array, and when you start resync (after replacing
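One mitigation worth noting here: md exposes a scrubbing interface in sysfs, so latent unreadable sectors can be found and rewritten while the array still has redundancy, rather than first surfacing during a degraded resync. A sketch (array name hypothetical):

```shell
# Start a background scrub of /dev/md0: md reads every stripe and,
# while redundancy is available, rewrites unreadable sectors so the
# drive can remap them.
echo check > /sys/block/md0/md/sync_action
cat /proc/mdstat                     # shows "check" progress
cat /sys/block/md0/md/mismatch_cnt   # mismatches found so far
```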
On Mer, 2006-01-18 at 09:14 +0100, Sander wrote:
If the (harddisk internal) remap succeeded, the OS doesn't see the bad
sector at all I believe.
True for ATA; in the SCSI case you may be told about the remap having
occurred, but it's a by-the-way type message, not an error proper.
If you (the
Sander wrote:
Michael Tokarev wrote (ao):
Most problematic case so far, which I described numerous times (like,
why linux raid isn't Raid really, why it can be worse than plain
disk) is when, after single sector read failure, md kicks the whole
disk off the array, and when you start resync
personally, I think this is useful functionality, but my personal
preference is that this would be in DM/LVM2 rather than MD. but given
Neil is the MD author/maintainer, I can see why he'd prefer to do it in
MD. :)
Why don't MD and DM merge some bits?
Jan Engelhardt
On Wednesday January 18, [EMAIL PROTECTED] wrote:
personally, I think this is useful functionality, but my personal
preference is that this would be in DM/LVM2 rather than MD. but given
Neil is the MD author/maintainer, I can see why he'd prefer to do it in
MD. :)
Why don't MD and DM
On Wednesday January 18, [EMAIL PROTECTED] wrote:
On Wed, 18 Jan 2006, John Hendrikx wrote:
I agree with the original poster though, I'd really love to see Linux
Raid take special action on sector read failures. It happens about 5-6
times a year here that a disk gets kicked out of the
On Wednesday January 18, [EMAIL PROTECTED] wrote:
I agree with the original poster though, I'd really love to see Linux
Raid take special action on sector read failures. It happens about 5-6
times a year here that a disk gets kicked out of the array for a simple
read failure. A rebuild
On Tuesday January 17, [EMAIL PROTECTED] wrote:
Hello Neil ,
On Tue, 17 Jan 2006, NeilBrown wrote:
Greetings.
In line with the principle of release early, following are 5 patches
against md in 2.6.latest which implement reshaping of a raid5 array.
By this I mean adding 1 or
On Tuesday January 17, [EMAIL PROTECTED] wrote:
On Jan 17, 2006, at 06:26, Michael Tokarev wrote:
This is about code complexity/bloat. It's already complex enough.
I rely on the stability of the linux softraid subsystem, and want
it to be reliable. Adding more features, especially
While we're at it, here's a little issue I had with RAID5 ; not really
the fault of md, but you might want to know...
I have a 5x250GB RAID5 array for home storage (digital photo, my lossless
ripped CDs, etc). 1 IDE drive and 4 SATA drives.
Now, turns out one of the SATA drives is a
NeilBrown wrote:
Greetings.
In line with the principle of release early, following are 5 patches
against md in 2.6.latest which implement reshaping of a raid5 array.
By this I mean adding 1 or more drives to the array and then re-laying
out all of the data.
Neil, is this online
Michael Tokarev wrote (ao):
NeilBrown wrote:
Greetings.
In line with the principle of release early, following are 5
patches against md in 2.6.latest which implement reshaping of a
raid5 array. By this I mean adding 1 or more drives to the array and
then re-laying out all of the
Sander wrote:
Michael Tokarev wrote (ao):
[]
Neil, is this online resizing/reshaping really needed? I understand
all those words mean a lot to marketing people - zero downtime,
online resizing etc, but it is much safer and easier to do that stuff
'offline', on an inactive array, like raidreconf
2006/1/17, Michael Tokarev [EMAIL PROTECTED]:
Sander wrote:
This is about code complexity/bloat. It's already complex enough.
I rely on the stability of the linux softraid subsystem, and want
it to be reliable. Adding more features, especially non-trivial
ones, does not buy you bugfree raid
On Jan 17, 2006, at 06:26, Michael Tokarev wrote:
This is about code complexity/bloat. It's already complex enough.
I rely on the stability of the linux softraid subsystem, and want
it to be reliable. Adding more features, especially non-trivial
ones, does not buy you bugfree raid
On Tue, Jan 17, 2006 at 11:17:15AM +0300, Michael Tokarev wrote:
Neil, is this online resizing/reshaping really needed? I understand
all those words mean a lot to marketing people - zero downtime,
online resizing etc, but it is much safer and easier to do that stuff
'offline', on an
Ross Vandegrift wrote:
On Tue, Jan 17, 2006 at 02:26:11PM +0300, Michael Tokarev wrote:
Raid code is already too fragile, i'm afraid simple I/O errors
(which is what we need raid for) may crash the system already, and
am waiting for the next whole system crash due to eg superblock
update error
Michael Tokarev wrote:
snip
Compare this with my statement about offline reshaper above:
separate userspace (easier to write/debug compared with kernel
space) program which operates on an inactive array (no locking
needed, no need to worry about other I/O operations going to the
array at the
On Tuesday January 17, [EMAIL PROTECTED] wrote:
I was
also under the impression that md was going to be phased out and
replaced by the device mapper.
I wonder where this sort of idea comes from
Obviously individual