Hi,
>Also, md raid10 seems to have the same problem.
>I will test raid10 applying this patch as well.
Sorry for the late response. I had trouble reproducing the problem,
but it turns out that the 2.6.24 kernel needs the latest (possibly testing)
version of systemtap-0.6.1-1 to run systemtap
Keld Jørn Simonsen wrote:
On Tue, Jan 29, 2008 at 06:32:54PM -0600, Moshe Yudkowsky wrote:
Hmm, why would you put swap on a raid10? I would in a production
environment always put it on separate swap partitions, possibly several,
given that a number of drives are available.
In a production server
On Tue, Jan 29, 2008 at 06:32:54PM -0600, Moshe Yudkowsky wrote:
>
> >Hmm, why would you put swap on a raid10? I would in a production
> >environment always put it on separate swap partitions, possibly several,
> >given that a number of drives are available.
>
> In a production server, however,
Hmm, why would you put swap on a raid10? I would in a production
environment always put it on separate swap partitions, possibly several,
given that a number of drives are available.
I put swap onto non-RAID, separate partitions on all 4 disks.
In a production server, however, I'd use swap on
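For what it's worth, a minimal sketch of the separate-partition approach (partition names are assumptions): giving each swap partition the same priority lets the kernel round-robin across the drives, much like striping but with no RAID layer underneath.

    # /etc/fstab -- one non-RAID swap partition per drive, equal priority
    /dev/sda2  none  swap  sw,pri=1  0  0
    /dev/sdb2  none  swap  sw,pri=1  0  0
    /dev/sdc2  none  swap  sw,pri=1  0  0
    /dev/sdd2  none  swap  sw,pri=1  0  0

The trade-off is that losing a drive loses whatever was swapped onto it, whereas swap on a RAID1 or RAID10 device survives that at the cost of mirrored writes.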
Keld Jørn Simonsen wrote:
On Tue, Jan 29, 2008 at 06:44:20PM -0500, Bill Davidsen wrote:
Depending on near/far choices, raid10 should be faster than raid5, and with
far, reads should be quite a bit faster. You can't boot off raid10, and if
you put your swap on it, many recovery CDs won't use it. But
On Tue, Jan 29, 2008 at 06:44:20PM -0500, Bill Davidsen wrote:
> Depending on near/far choices, raid10 should be faster than raid5, and with
> far, reads should be quite a bit faster. You can't boot off raid10, and if
> you put your swap on it, many recovery CDs won't use it. But for general
> use and
On Tue, Jan 29, 2008 at 04:14:24PM -0600, Moshe Yudkowsky wrote:
> Keld Jørn Simonsen wrote:
>
> Based on your reports of better performance on RAID10 -- which are more
> significant than I'd expected -- I'll just go with RAID10. The only
> question now is whether LVM is worth the performance hit or
Bill Davidsen wrote:
According to man md(4), the o2 layout is likely to offer the best combination
of read and write performance. Why would you consider f2 instead?
f2 is faster for reads; most systems spend more time reading than writing.
According to md(4), offset "should give similar read characteristics
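As a concrete sketch of picking a variant at creation time (device names are assumptions), the layout is selected with --layout (-p):

    # far-2: best sequential reads, more seeking on writes
    mdadm --create /dev/md0 --level=10 --raid-devices=4 --layout=f2 /dev/sd[abcd]2
    # offset-2: a compromise between near and far
    mdadm --create /dev/md1 --level=10 --raid-devices=4 --layout=o2 /dev/sd[abcd]3

An existing array reports its variant on the Layout line of mdadm --detail.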
Moshe Yudkowsky wrote:
Keld Jørn Simonsen wrote:
Based on your reports of better performance on RAID10 -- which are
more significant than I'd expected -- I'll just go with RAID10. The
only question now is whether LVM is worth the performance hit or not.
I would be interested if you would experim
Moshe Yudkowsky wrote:
I'd like to thank everyone who wrote in with comments and
explanations. And in particular it's nice to see that I'm not the only
one who's confused.
I'm going to convert back to the RAID 1 setup I had before for /boot,
2 hot and 2 spare across four drives. No, that's wrong: 4 hot makes the most sense.
David Greaves wrote:
Jan Engelhardt wrote:
This makes 1.0 the default sb type for new arrays.
IIRC there was a discussion a while back on renaming mdadm options (google "Time
to deprecate old RAID formats?") and the superblocks to emphasise the location
and data structure. Would it b
Carlos Carvalho wrote:
Tim Southerwood ([EMAIL PROTECTED]) wrote on 28 January 2008 17:29:
>Subtitle: Patch to mainline yet?
>
>Hi
>
>I don't see evidence of Neil's patch in 2.6.24, so I applied it by hand
>on my server.
I applied all 4 pending patches to .24. It's been better than .22 and
Keld Jørn Simonsen wrote:
Based on your reports of better performance on RAID10 -- which are more
significant than I'd expected -- I'll just go with RAID10. The only
question now is whether LVM is worth the performance hit or not.
I would be interested if you would experiment with this wrt boot t
On Tue, Jan 29, 2008 at 01:34:37PM -0600, Moshe Yudkowsky wrote:
>
> I'm going to convert back to the RAID 1 setup I had before for /boot, 2
> hot and 2 spare across four drives. No, that's wrong: 4 hot makes the
> most sense.
>
> And given that RAID 10 doesn't seem to confer (for me, as far as
Keld Jørn Simonsen said: (by the date of Tue, 29 Jan 2008 20:17:55 +0100)
> Hmm, I read the Linux raid faq on
> http://www.faqs.org/contrib/linux-raid/x37.html
I've found some information in
/usr/share/doc/mdadm/FAQ.gz
I'm wondering why this file is not advertised anywhere
(e.g. in 'man mdadm'
I'd like to thank everyone who wrote in with comments and explanations.
And in particular it's nice to see that I'm not the only one who's confused.
I'm going to convert back to the RAID 1 setup I had before for /boot, 2
hot and 2 spare across four drives. No, that's wrong: 4 hot makes the
most sense.
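A minimal sketch of that /boot arrangement, with hypothetical device names; the old-style superblock lives at the end of each member, so a bootloader can read any member as a plain filesystem:

    mdadm --create /dev/md0 --level=1 --raid-devices=4 \
          --metadata=0.90 /dev/sd[abcd]1
    mkfs.ext3 /dev/md0     # then mount it as /boot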
Hmm, I read the Linux raid faq on
http://www.faqs.org/contrib/linux-raid/x37.html
It looks pretty outdated, referring to how to patch 2.2 kernels and
not mentioning the newer mdadm, nor raid10. It was not dated.
It seemed to be related to the linux-raid list, telling where to find
archives of the list.
Bruce Miller wrote:
The beginning of Section 4 of the Linux Software-RAID-HOWTO
states emphatically that "you should only have one device per
IDE bus. Running disks as master/slave is horrible for
performance. IDE is really bad at accessing more than one drive
per bus".
Do the same cautions apply
The beginning of Section 4 of the Linux Software-RAID-HOWTO
states emphatically that "you should only have one device per
IDE bus. Running disks as master/slave is horrible for
performance. IDE is really bad at accessing more than one drive
per bus".
Do the same cautions apply to building a RAID a
On Tue, Jan 29, 2008 at 07:46:58PM +0300, Michael Tokarev wrote:
> Keld Jørn Simonsen wrote:
> > On Tue, Jan 29, 2008 at 06:13:41PM +0300, Michael Tokarev wrote:
> >> Linux raid10 MODULE (which implements that standard raid10
> >> LEVEL in full) adds some quite.. unusual extensions to that
> >> sta
On Tue, Jan 29, 2008 at 07:51:07PM +0300, Michael Tokarev wrote:
> Peter Rabbitson wrote:
> []
> > However if you want to be so anal about names and specifications: md
> > raid 10 is not a _full_ 1+0 implementation. Consider the textbook
> > scenario with 4 drives:
> >
> > (A mirroring B) striped
Keld Jørn Simonsen wrote:
> On Tue, Jan 29, 2008 at 09:57:48AM -0600, Moshe Yudkowsky wrote:
>> In my 4 drive system, I'm clearly not getting 1+0's ability to use grub
>> out of the RAID10. I expect it's because I used 1.2 superblocks (why
>> not use the latest, I said, foolishly...) and therefo
Peter Rabbitson wrote:
[]
> However if you want to be so anal about names and specifications: md
> raid 10 is not a _full_ 1+0 implementation. Consider the textbook
> scenario with 4 drives:
>
> (A mirroring B) striped with (C mirroring D)
>
> When only drives A and C are present, md raid 10 with
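One way to check this without risking real disks is a throwaway array on loop devices (paths and sizes here are assumptions), failing different two-drive combinations and watching /proc/mdstat:

    for i in 0 1 2 3; do
        dd if=/dev/zero of=/tmp/d$i bs=1M count=128
        losetup /dev/loop$i /tmp/d$i
    done
    mdadm --create /dev/md9 --level=10 --raid-devices=4 --layout=n2 /dev/loop[0-3]
    mdadm /dev/md9 --fail /dev/loop0
    mdadm /dev/md9 --fail /dev/loop2     # try other pairs as well
    cat /proc/mdstat
    mdadm --stop /dev/md9                # clean up
    for i in 0 1 2 3; do losetup -d /dev/loop$i; done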
Keld Jørn Simonsen wrote:
> On Tue, Jan 29, 2008 at 06:13:41PM +0300, Michael Tokarev wrote:
>> Linux raid10 MODULE (which implements that standard raid10
>> LEVEL in full) adds some quite.. unusual extensions to that
>> standard raid10 LEVEL. The resulting layout is also called
>> raid10 in linux
Moshe Yudkowsky wrote:
> Michael Tokarev wrote:
>
>> There are more-or-less standard raid LEVELS, including
>> raid10 (which is the same as raid1+0, or a stripe on top
>> of mirrors - note it does not mean 4 drives, you can
>> use 6 - stripe over 3 mirrors each of 2 components; or
>> the reverse -
On Tue, Jan 29, 2008 at 09:57:48AM -0600, Moshe Yudkowsky wrote:
>
> In my 4 drive system, I'm clearly not getting 1+0's ability to use grub
> out of the RAID10. I expect it's because I used 1.2 superblocks (why
> not use the latest, I said, foolishly...) and therefore the RAID10 --
> with eve
Moshe Yudkowsky wrote:
Here's a baseline question: if I create a RAID10 array using default
settings, what do I get? I thought I was getting RAID1+0; am I really?
Maybe you are, depending on your settings, but this is beside the point. No
matter what 1+0 you have (linux, classic, or otherwise)
On Tue, Jan 29, 2008 at 06:13:41PM +0300, Michael Tokarev wrote:
>
> Linux raid10 MODULE (which implements that standard raid10
> LEVEL in full) adds some quite.. unusual extensions to that
> standard raid10 LEVEL. The resulting layout is also called
> raid10 in linux (ie, not giving new names),
Moshe Yudkowsky wrote:
Keld Jørn Simonsen wrote:
raid10 has a number of ways to do layout, namely the near, far and
offset ways, layout=n2, f2, o2 respectively.
The default layout, according to --detail, is "near=2, far=1." If I
understand what's been written so far on the topic, that's automatically
Michael Tokarev wrote:
There are more-or-less standard raid LEVELS, including
raid10 (which is the same as raid1+0, or a stripe on top
of mirrors - note it does not mean 4 drives, you can
use 6 - stripe over 3 mirrors each of 2 components; or
the reverse - stripe over 2 mirrors of 3 components e
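For illustration (devices are hypothetical), the six-drive case described above is a single array under linux md rather than a stripe nested over mirrors:

    # one md raid10 across six devices, two copies of every chunk
    mdadm --create /dev/md0 --level=10 --raid-devices=6 --layout=n2 /dev/sd[abcdef]1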
Peter Rabbitson wrote:
[*] The layout is the same but the functionality is different. If you
have 1+0 on 4 drives, you can survive a loss of 2 drives as long as they
are part of different mirrors. mdadm -C -l 10 -n 4 -p n2
however will _NOT_ survive a loss of 2 drives.
In my 4 drive system,
Keld Jørn Simonsen wrote:
raid10 has a number of ways to do layout, namely the near, far and
offset ways, layout=n2, f2, o2 respectively.
The default layout, according to --detail, is "near=2, far=1." If I
understand what's been written so far on the topic, that's automatically
incompatible
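To see which variant an existing array actually uses (the array name is an assumption), check the Layout line:

    mdadm --detail /dev/md0 | grep -i layout
    #   Layout : near=2, far=1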
Michael Tokarev wrote:
Linux raid10 MODULE (which implements that standard raid10
LEVEL in full) adds some quite.. unusual extensions to that
standard raid10 LEVEL. The resulting layout is also called
raid10 in linux (ie, not giving new names), but it's not that
raid10 (which is again the same a
Peter Rabbitson wrote:
> Michael Tokarev wrote:
> > Raid10 IS RAID1+0 ;)
>> It's just that linux raid10 driver can utilize more.. interesting ways
>> to lay out the data.
>
> This is misleading, and adds to the confusion existing even before linux
> raid10. When you say raid10 in the hardware rai
On Tue, Jan 29, 2008 at 05:07:27PM +0300, Michael Tokarev wrote:
> Peter Rabbitson wrote:
> > Moshe Yudkowsky wrote:
> >>
>
> > It is exactly what the name implies - a new kind of RAID :) The setup
> > you describe is not RAID10 it is RAID1+0.
>
> Raid10 IS RAID1+0 ;)
> It's just that linux raid
Michael Tokarev wrote:
> Raid10 IS RAID1+0 ;)
It's just that linux raid10 driver can utilize more.. interesting ways
to lay out the data.
This is misleading, and adds to the confusion existing even before linux
raid10. When you say raid10 in the hardware raid world, what do you mean?
Stripes
Tim Southerwood ([EMAIL PROTECTED]) wrote on 28 January 2008 17:29:
>Subtitle: Patch to mainline yet?
>
>Hi
>
>I don't see evidence of Neil's patch in 2.6.24, so I applied it by hand
>on my server.
I applied all 4 pending patches to .24. It's been better than .22 and
.23... Unfortunately the
Moshe Yudkowsky wrote:
> Peter Rabbitson wrote:
>
>> It is exactly what the name implies - a new kind of RAID :) The setup
>> you describe is not RAID10 it is RAID1+0. As far as how linux RAID10
>> works - here is an excellent article:
>> http://en.wikipedia.org/wiki/Non-standard_RAID_levels#Linu
Peter Rabbitson wrote:
> Moshe Yudkowsky wrote:
>>
>> One of the puzzling things about this is that I conceive of RAID10 as
>> two RAID1 pairs, with RAID0 on top to join them into a large drive.
>> However, when I use --level=10 to create my md drive, I cannot find
>> out which two pairs are th
On Tue, Jan 29, 2008 at 05:02:57AM -0600, Moshe Yudkowsky wrote:
> Neil, thanks for writing. A couple of follow-up questions to you and the
> group:
>
> If the answers above don't lead to a resolution, I can create two RAID1
> pairs and join them using LVM. I would take a hit by using LVM to tie
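A rough sketch of that alternative, with hypothetical device and volume names; -i 2 stripes the logical volume across the two mirror pairs:

    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc2 /dev/sdd2
    pvcreate /dev/md1 /dev/md2
    vgcreate vg0 /dev/md1 /dev/md2
    lvcreate -i 2 -I 64 -n data -l 100%FREE vg0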
Peter Rabbitson wrote:
It is exactly what the name implies - a new kind of RAID :) The setup
you describe is not RAID10 it is RAID1+0. As far as how linux RAID10
works - here is an excellent article:
http://en.wikipedia.org/wiki/Non-standard_RAID_levels#Linux_MD_RAID_10
Thanks. Let's just s
* The only raid level providing unfettered access to the underlying
filesystem is RAID1 with a superblock at its end, and it has been common
wisdom for years that you need a RAID1 boot partition in order to boot
anything at all.
Ah. This shines light on my problem...
The problem is that these
Moshe Yudkowsky wrote:
One of the puzzling things about this is that I conceive of RAID10 as
two RAID1 pairs, with RAID0 on top to join them into a large drive.
However, when I use --level=10 to create my md drive, I cannot find out
which two pairs are the RAID1's: the --detail doesn't gi
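For comparison, the nested construction keeps the pairs explicit (device names are assumptions): each RAID1 is its own md device, and --detail on the RAID0 shows exactly which mirrors it stripes over:

    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc3 /dev/sdd3
    mdadm --create /dev/md3 --level=0 --raid-devices=2 /dev/md1 /dev/md2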
Neil, thanks for writing. A couple of follow-up questions to you and the
group:
Neil Brown wrote:
On Monday January 28, [EMAIL PROTECTED] wrote:
Perhaps I'm mistaken, but I thought it was possible to boot from
/dev/md/all1.
It is my understanding that grub cannot boot from RAID.
Ah. Well,
On Tuesday 29 January 2008 20:13, Peter Rabbitson <[EMAIL PROTECTED]>
wrote:
> Russell Coker wrote:
> > Are there plans for supporting a NVRAM write-back cache with Linux
> > software RAID?
>
> AFAIK even today you can place the bitmap in an external file residing on a
> file system which in turn
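A sketch of the external-bitmap arrangement Peter describes (the mount point is an assumption); the bitmap file must sit on a filesystem that is not itself on the array:

    mdadm --grow /dev/md0 --bitmap=/nvram/md0-bitmap
    # the file then has to be named again at assembly time, e.g.:
    # mdadm --assemble /dev/md0 --bitmap=/nvram/md0-bitmap /dev/sd[ab]2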
Tim Southerwood wrote:
David Greaves wrote:
IIRC Doug Ledford did some digging wrt lilo + grub and found that 1.1 and 1.2
wouldn't work with them. I'd have to review the thread though...
David
-
For what it's worth, that was my finding too. -e 0.9+1.0 are fine with
GRUB, but 1.1 and 1.2 won't
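A sketch of what that finding amounts to in practice, with hypothetical devices: a 1.0 (or 0.90) superblock sits at the end of each member, so GRUB sees an ordinary filesystem and can be installed on every disk in the mirror:

    mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda1 /dev/sdb1
    grub-install /dev/sda
    grub-install /dev/sdb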
Russell Coker wrote:
Are there plans for supporting a NVRAM write-back cache with Linux software
RAID?
AFAIK even today you can place the bitmap in an external file residing on a
file system which in turn can reside on the nvram...
Peter