- Message from [EMAIL PROTECTED] -
Date: Mon, 25 Feb 2008 00:10:07 +
From: Peter Grandi [EMAIL PROTECTED]
Reply-To: Peter Grandi [EMAIL PROTECTED]
Subject: Re: RAID5 to RAID6 reshape?
To: Linux RAID linux-raid@vger.kernel.org
On Sat, 23 Feb 2008 21:40:08 +0100, Nagilum
[EMAIL PROTECTED] said:
[ ... ]
* Doing unaligned writes on a 13+1 or 12+2 array is catastrophically
slow because of the RMW (read-modify-write) cycle. This is of course
independent of how one got to something like 13+1 or 12+2.
nagilum Changing a single byte in a
[ ... ]
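The RMW cost follows directly from stripe geometry. A minimal sketch of the arithmetic, assuming a 64 KiB chunk size (the thread does not state one; check yours with `mdadm --detail`):

```shell
# Full-stripe write size for an N-data-disk RAID5/6 array.
CHUNK_KB=64     # assumed chunk size, not from the thread
DATA_DISKS=13   # a 13+1 RAID5 has 13 data chunks per stripe
STRIPE_KB=$((CHUNK_KB * DATA_DISKS))
echo "full stripe = ${STRIPE_KB} KiB"
# Any write smaller than this (or not aligned to it) forces MD to read
# the old data and parity, recompute parity, and write back: the RMW cycle.
```

With these numbers a full stripe is 832 KiB, so anything short of large sequential writes lands in RMW territory.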
* Suppose you have a 2+1 array which is full. Now you add a
disk and that means that almost all free space is on a single
disk. The MD subsystem has two options as to where to add
that lump of space, consider why neither is very pleasant.
No, only one: at the end of the md device.
This might be related to raid chunk positioning with respect
to LVM chunk positioning. If they interfere there indeed may
be some performance drop. Best to make sure that those chunks
are aligned together.
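One way to do that alignment, sketched with assumed sizes (a 64 KiB RAID chunk and 4 MiB LVM extents; neither figure is from the thread, and `--dataalignment` requires a reasonably recent LVM2):

```shell
# Check the array's chunk size first.
mdadm --detail /dev/md0 | grep -i 'chunk size'
# Start the PV's data area on a chunk boundary...
pvcreate --dataalignment 64k /dev/md0
# ...and pick an extent size that is a whole multiple of the chunk size,
# so LVM extents never straddle a RAID stripe boundary.
vgcreate -s 4m vg0 /dev/md0
```

The key invariant is simply that the extent size divides evenly by the chunk size.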
Interesting. I'm seeing a 20% performance drop too, with default
RAID and LVM chunk sizes.
On Tue, Feb 19, 2008 at 01:52:21PM -0600, Jon Nelson wrote:
On Feb 19, 2008 1:41 PM, Oliver Martin
[EMAIL PROTECTED] wrote:
$ hdparm -t /dev/md0
/dev/md0:
Timing buffered disk reads: 148 MB in 3.01 seconds = 49.13 MB/sec
$ hdparm -t /dev/dm-0
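The dm-0 result was cut off above, but the 20% figure can be recovered from hdparm's numbers. A sketch of the arithmetic, with the MD figure from the thread and a hypothetical DM figure (scaled by 100 to stay in integer shell math):

```shell
MD_MBS=4913    # 49.13 MB/s for /dev/md0, from the thread
DM_MBS=3930    # hypothetical 39.30 MB/s for /dev/dm-0
DROP_PCT=$(( (MD_MBS - DM_MBS) * 100 / MD_MBS ))
echo "drop = ${DROP_PCT}%"
```

A single `hdparm -t` run is noisy; averaging three or more runs per device gives a more trustworthy comparison.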
- Message from [EMAIL PROTECTED] -
Date: Mon, 18 Feb 2008 19:05:02 +
From: Peter Grandi [EMAIL PROTECTED]
Reply-To: Peter Grandi [EMAIL PROTECTED]
Subject: Re: RAID5 to RAID6 reshape?
To: Linux RAID linux-raid@vger.kernel.org
On Feb 17, 2008 10:26 PM, Janek Kozicki [EMAIL PROTECTED] wrote:
Conway S. Smith said: (by the date of Sun, 17 Feb 2008 07:45:26 -0700)
Well, I was reading that LVM2 had a 20%-50% performance penalty,
huh? Make a benchmark. Do you really think that anyone would be using
it if there was any penalty bigger than 1-2%? (random access, r/w)
On 17:40, Mark Hahn wrote:
Question to other people here - what is the maximum partition size
that ext3 can handle? Am I correct that it is 4 TB?
8 TB. People who want to push this are probably using ext4 already.
ext3 has supported up to 16T for quite some time. It works fine for me.
Beolach said: (by the date of Mon, 18 Feb 2008 05:38:15 -0700)
Thanks. 16T makes sense (2^32 * 4k blocks).
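As a quick check of that arithmetic, ext3 block addresses are 32-bit, so with 4 KiB blocks the filesystem caps out at 2^32 blocks:

```shell
BLOCKS=$((1 << 32))   # 32-bit block numbers
BLOCK_KB=4            # 4 KiB blocks
TIB=$(( BLOCKS / 1024 / 1024 / 1024 * BLOCK_KB ))
echo "${TIB} TiB"
```

The earlier 8 TB figure corresponds to older kernels/e2fsprogs; the address-space limit itself is 16 TiB.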
On Sun, 17 Feb 2008 07:45:26 -0700, Conway S. Smith
[EMAIL PROTECTED] said:
[ ... ]
beolach Which part isn't wise? Starting w/ a few drives w/ the
beolach intention of growing; or ending w/ a large array (IOW,
beolach are 14 drives more than I should put in 1 array & expect
beolach to be safe)?
On Sat, 16 Feb 2008 20:58:07 -0700, Beolach
[EMAIL PROTECTED] said:
beolach [ ... ] start w/ 3 drives in RAID5, and add drives as I
beolach run low on free space, eventually to a total of 14
beolach drives (the max the case can fit).
Like for so many other posts to this list, all that is
Beolach said: (by the date of Sat, 16 Feb 2008 20:58:07 -0700)
I'm also interested in hearing people's opinions about LVM / EVMS.
With LVM it will be possible for you to have several raid5 and raid6:
eg: 5 HDDs (raid6), 5 HDDs (raid6) and 4 HDDs (raid5). Here you would
have 14 HDDs and five of them being extra - for safety/redundancy
purposes.
Beolach said: (by the date of Sat, 16 Feb 2008 20:58:07 -0700)
Or would I be better off starting w/ 4 drives in RAID6?
oh, right - Sevrin Robstad has a good idea to solve your problem -
create a raid6 with one missing member, and add that member when you
have it, next year or so.
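That trick uses mdadm's `missing` placeholder. A sketch with hypothetical device names:

```shell
# Create a 4-device RAID6 with one slot deliberately left empty.
# The array runs degraded (single-parity-equivalent) until the
# fourth disk arrives. Device names are hypothetical.
mdadm --create /dev/md0 --level=6 --raid-devices=4 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1 missing
# Later, when the new disk shows up, add it and let MD rebuild:
mdadm --add /dev/md0 /dev/sde1
```

Until the fourth disk is added you get RAID5-level redundancy, so this is a stopgap, not a place to park irreplaceable data.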
--
On Sunday February 17, [EMAIL PROTECTED] wrote:
On Saturday February 16, [EMAIL PROTECTED] wrote:
found was a few months old. Is it likely that RAID5 to RAID6
reshaping will be implemented in the next 12 to 18 months (my rough
Certainly possible.
I won't say it is likely until it is actually done. And by then it
will be definite :-)