Hi!
I have played around with raid10,f2 on a 2 disk array set,
and I really liked the performance on the sequential reads.
It looked like a doubling of the speed, about 173 MB/s
for two SATA-2 disks.
I then went on to look at my 4 new SATA-2 disks, to have
the same kind of performance I made the
Hi
I have made some patches to hdparm to report min/max transfer rates,
and min/avg/max access times. Enjoy!
http://std.dkuug.dk/keld/hdparm-7.7-ks.tar.gz
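For illustration, a minimal Python sketch of the sort of min/avg/max
access-time measurement such a report involves (timing small reads at
random offsets -- a hypothetical method, not hdparm's actual code):

```python
import os
import random
import time

def access_times(path, samples=20, blocksize=512):
    """Rough min/avg/max access time in ms for a file or device:
    read one small block at each of several random offsets and time
    each read. This is only an illustrative sketch, not what the
    hdparm patch actually does."""
    size = os.path.getsize(path)
    times = []
    with open(path, 'rb') as f:
        for _ in range(samples):
            offset = random.randrange(0, max(1, size - blocksize))
            t0 = time.perf_counter()
            f.seek(offset)
            f.read(blocksize)
            times.append((time.perf_counter() - t0) * 1000.0)
    return min(times), sum(times) / len(times), max(times)
```

On a cached file this mostly measures syscall overhead; against a raw
disk device it approximates seek plus rotational latency.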
Best regards
keld
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
Hi
I have tried to make a striping raid out of my new 4 x 1 TB
SATA-2 disks. I tried raid10,f2 in several ways:
1: md0 = raid10,f2 of sda1+sdb1, md1= raid10,f2 of sdc1+sdd1, md2 = raid0
of md0+md1
2: md0 = raid0 of sda1+sdb1, md1= raid0 of sdc1+sdd1, md2 = raid10,f2
of md0+md1
3: md0 =
On Mon, Jan 28, 2008 at 07:13:30AM +1100, Neil Brown wrote:
On Sunday January 27, [EMAIL PROTECTED] wrote:
Hi
I have tried to make a striping raid out of my new 4 x 1 TB
SATA-2 disks. I tried raid10,f2 in several ways:
1: md0 = raid10,f2 of sda1+sdb1, md1= raid10,f2 of sdc1+sdd1,
On Sun, Jan 27, 2008 at 08:11:35PM +, Peter Grandi wrote:
On Sun, 27 Jan 2008 20:33:45 +0100, Keld Jørn Simonsen
[EMAIL PROTECTED] said:
keld Hi I have tried to make a striping raid out of my new 4 x
keld 1 TB SATA-2 disks. I tried raid10,f2 in several ways:
keld 1: md0 = raid10,f2
On Mon, Jan 28, 2008 at 01:32:48PM -0500, Bill Davidsen wrote:
Neil Brown wrote:
On Sunday January 27, [EMAIL PROTECTED] wrote:
Hi
I have tried to make a striping raid out of my new 4 x 1 TB
SATA-2 disks. I tried raid10,f2 in several ways:
1: md0 = raid10,f2 of sda1+sdb1, md1=
On Tue, Jan 29, 2008 at 06:13:41PM +0300, Michael Tokarev wrote:
Linux raid10 MODULE (which implements that standard raid10
LEVEL in full) adds some quite.. unusual extensions to that
standard raid10 LEVEL. The resulting layout is also called
raid10 in linux (ie, not giving new names), but
On Tue, Jan 29, 2008 at 05:07:27PM +0300, Michael Tokarev wrote:
Peter Rabbitson wrote:
Moshe Yudkowsky wrote:
It is exactly what the name implies - a new kind of RAID :) The setup
you describe is not RAID10, it is RAID1+0.
Raid10 IS RAID1+0 ;)
It's just that linux raid10 driver can
On Tue, Jan 29, 2008 at 05:02:57AM -0600, Moshe Yudkowsky wrote:
Neil, thanks for writing. A couple of follow-up questions to you and the
group:
If the answers above don't lead to a resolution, I can create two RAID1
pairs and join them using LVM. I would take a hit by using LVM to tie
On Tue, Jan 29, 2008 at 09:57:48AM -0600, Moshe Yudkowsky wrote:
In my 4 drive system, I'm clearly not getting 1+0's ability to use grub
out of the RAID10. I expect it's because I used 1.2 superblocks (why
not use the latest, I said, foolishly...) and therefore the RAID10 --
with even
On Tue, Jan 29, 2008 at 07:51:07PM +0300, Michael Tokarev wrote:
Peter Rabbitson wrote:
[]
However if you want to be so anal about names and specifications: md
raid 10 is not a _full_ 1+0 implementation. Consider the textbook
scenario with 4 drives:
(A mirroring B) striped with (C
On Tue, Jan 29, 2008 at 07:46:58PM +0300, Michael Tokarev wrote:
Keld Jørn Simonsen wrote:
On Tue, Jan 29, 2008 at 06:13:41PM +0300, Michael Tokarev wrote:
Linux raid10 MODULE (which implements that standard raid10
LEVEL in full) adds some quite.. unusual extensions to that
standard
Hmm, I read the Linux raid faq on
http://www.faqs.org/contrib/linux-raid/x37.html
It looks pretty outdated, referring to how to patch 2.2 kernels and
not mentioning the newer mdadm or raid10. It was not dated.
It seemed to be related to the linux-raid list, telling where to find
archives of the
On Tue, Jan 29, 2008 at 01:34:37PM -0600, Moshe Yudkowsky wrote:
I'm going to convert back to the RAID 1 setup I had before for /boot, 2
hot and 2 spare across four drives. No, that's wrong: 4 hot makes the
most sense.
And given that RAID 10 doesn't seem to confer (for me, as far as I
On Tue, Jan 29, 2008 at 04:14:24PM -0600, Moshe Yudkowsky wrote:
Keld Jørn Simonsen wrote:
Based on your reports of better performance on RAID10 -- which are more
significant than I'd expected -- I'll just go with RAID10. The only
question now is if LVM is worth the performance hit
On Tue, Jan 29, 2008 at 06:32:54PM -0600, Moshe Yudkowsky wrote:
Hmm, why would you put swap on a raid10? I would in a production
environment always put it on separate swap partitions, possibly several,
given that several drives are available.
In a production server, however, I'd use
On Wed, Jan 30, 2008 at 03:47:30PM +0100, Peter Rabbitson wrote:
Michael Tokarev wrote:
With 5-drive linux raid10:
A B C D E
0 0 1 1 2
2 3 3 4 4
5 5 6 6 7
7 8 8 9 9
10 10 11 11 12
...
AB can't be removed - 0, 5. AC CAN be removed, as
are
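The pair analysis above can be checked mechanically. A small Python
sketch of the layout shown in the table (my reading of it is that
chunk k occupies linear positions 2k and 2k+1, row-major across the
disks -- treat that as an assumption):

```python
def near2_layout(ndisks, nchunks):
    """Linux raid10 'near 2' layout on ndisks disks: the two copies
    of chunk k sit at linear positions 2k and 2k+1, laid out
    row-major across the disks (matches the 5-disk table above)."""
    return {k: {(2 * k) % ndisks, (2 * k + 1) % ndisks}
            for k in range(nchunks)}

def survivable(copies, removed):
    """True if every chunk still has a copy outside the removed set."""
    return all(c - removed for c in copies.values())

# Disks A..E are indices 0..4. Removing A and B loses both copies of
# chunk 0 (and 5, 10, ...); removing A and C loses no chunk entirely.
copies = near2_layout(5, 13)
```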
On Wed, Jan 30, 2008 at 07:21:33PM +0100, Janek Kozicki wrote:
Hello,
Yes, I know that some levels give faster reading and slower writing, etc.
I want to talk here about a typical workstation usage: compiling
stuff (like kernel), editing openoffice docs, browsing web, reading
email
On Wed, Jan 30, 2008 at 11:36:39PM +0100, Janek Kozicki wrote:
Keld Jørn Simonsen said: (by the date of Wed, 30 Jan 2008 23:00:07 +0100)
Theoretically, raid0 and raid10,f2 should be the same for reading, given the
same size of the md partition, etc. For writing, raid10,f2 should be half
This is intended for the linux raid howto. Please give comments.
It is not fully ready /keld
Howto prepare for a failing disk
The following will describe how to prepare a system to survive
if one disk fails. This can be important for a server which is
intended to always run. The description is
On Sat, Feb 02, 2008 at 09:32:54PM +0100, Janek Kozicki wrote:
Keld Jørn Simonsen said: (by the date of Sat, 2 Feb 2008 20:41:31 +0100)
This is intended for the linux raid howto. Please give comments.
It is not fully ready /keld
very nice. do you intend to put it on http://linux
On Wed, Jan 30, 2008 at 06:47:19PM -0800, David Rees wrote:
On Jan 30, 2008 6:33 PM, Richard Scobie [EMAIL PROTECTED] wrote:
FWIW, this step is clearly marked in the Software-RAID HOWTO under
Booting on RAID:
http://tldp.org/HOWTO/Software-RAID-HOWTO-7.html#ss7.3
A good and extensive
I found a sentence in the HOWTO:
raid1 and raid 10 always writes all data to all disks
I think this is wrong for raid10.
eg
a raid10,f2 of 4 disks only writes to two of the disks -
not all 4 disks. Is that true?
best regards
keld
On Sun, Feb 03, 2008 at 10:53:51AM -0500, Bill Davidsen wrote:
Keld Jørn Simonsen wrote:
This is intended for the linux raid howto. Please give comments.
It is not fully ready /keld
Howto prepare for a failing disk
6. /etc/mdadm.conf
Something here on /etc/mdadm.conf. What would
On Sun, Feb 03, 2008 at 10:56:01AM -0500, Bill Davidsen wrote:
Keld Jørn Simonsen wrote:
I found a sentence in the HOWTO:
raid1 and raid 10 always writes all data to all disks
I think this is wrong for raid10.
eg
a raid10,f2 of 4 disks only writes to two of the disks -
not all 4
I understand that lilo and grub can only boot partitions that look like
a normal single-drive partition. And then I understand that a plain
raid10 has a layout which is equivalent to raid1. Can such a raid10
partition be used with grub or lilo for booting?
And would there be any advantages in
On Mon, Feb 04, 2008 at 09:17:35AM +, Robin Hill wrote:
On Mon Feb 04, 2008 at 07:34:54AM +0100, Keld Jørn Simonsen wrote:
I understand that lilo and grub only can boot partitions that look like
a normal single-drive partition. And then I understand that a plain
raid10 has a layout
On Thu, Jan 31, 2008 at 02:55:07AM +0100, Keld Jørn Simonsen wrote:
On Wed, Jan 30, 2008 at 11:36:39PM +0100, Janek Kozicki wrote:
Keld Jørn Simonsen said: (by the date of Wed, 30 Jan 2008 23:00:07
+0100)
All the raid10's will have double time for writing, and raid5 and raid6
Hi
I am looking at revising our howto. I see a number of places where a
chunk size of 32 kiB is recommended, and even recommendations on
maybe using sizes of 4 kiB.
My own take on that is that this really hurts performance.
Normal disks have a rotation speed of between 5400 (laptop)
7200
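To put rough numbers on why very small chunks hurt: at 7200 rpm one
revolution takes about 8.3 ms, so average rotational latency is about
4.2 ms, and with an assumed 8 ms average seek and 80 MB/s sustained
transfer rate (made-up but typical figures for these drives), a 4 KiB
chunk spends well under 1% of the service time actually moving data.
A small Python calculation:

```python
RPM = 7200
REV_MS = 60_000.0 / RPM        # one revolution: ~8.33 ms
ROT_MS = REV_MS / 2            # average rotational latency: ~4.17 ms
SEEK_MS = 8.0                  # assumed average seek time
MB_PER_S = 80.0                # assumed sustained transfer rate

def chunk_efficiency(chunk_kib):
    """Fraction of the total service time of one random chunk access
    spent actually transferring data (rest is seek + rotation)."""
    xfer_ms = chunk_kib / 1024.0 / MB_PER_S * 1000.0
    return xfer_ms / (SEEK_MS + ROT_MS + xfer_ms)

for kib in (4, 32, 256, 1024):
    print(f"{kib:5d} KiB chunk: {chunk_efficiency(kib):.1%} of time transferring")
```

Efficiency rises steadily with chunk size, which is the argument for
chunks well above 32 KiB on drives like these.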
On Tue, Feb 05, 2008 at 11:54:27AM -0500, Justin Piszcz wrote:
On Tue, 5 Feb 2008, Keld Jørn Simonsen wrote:
On Thu, Jan 31, 2008 at 02:55:07AM +0100, Keld Jørn Simonsen wrote:
On Wed, Jan 30, 2008 at 11:36:39PM +0100, Janek Kozicki wrote:
Keld Jørn Simonsen said: (by the date of Wed
On Tue, Feb 05, 2008 at 05:28:27PM -0500, Justin Piszcz wrote:
Could you give some figures?
I remember testing with bonnie++ and raid10 was about half the speed
(200-265 MiB/s) of RAID5 (400-420 MiB/s) for sequential output, but input
was closer to RAID5 speeds/did not seem affected
On Wed, Feb 06, 2008 at 08:24:37AM -0600, Moshe Yudkowsky wrote:
I read through the document, and I've signed up for a Wiki account so I
can edit it.
One of the things I wanted to do was correct the title. I see that there
are *three* different Wiki pages about how to build a system that
On Wed, Feb 06, 2008 at 10:05:58AM +0100, Luca Berra wrote:
On Sat, Feb 02, 2008 at 08:41:31PM +0100, Keld Jørn Simonsen wrote:
Make each of the disks bootable by lilo:
lilo -b /dev/sda /etc/lilo.conf1
lilo -b /dev/sdb /etc/lilo.conf2
There should be no need for that.
to achieve
On Wed, Feb 06, 2008 at 01:52:11PM -0500, Bill Davidsen wrote:
Keld Jørn Simonsen wrote:
I understand that lilo and grub only can boot partitions that look like
a normal single-drive partition. And then I understand that a plain
raid10 has a layout which is equivalent to raid1. Can
On Wed, Feb 06, 2008 at 09:25:36PM +0100, Wolfgang Denk wrote:
In message [EMAIL PROTECTED] you wrote:
I actually think the kernel should operate with block sizes
like this and not with 4 kiB blocks. It is the readahead and the elevator
algorithms that save us from randomly reading 4
On Thu, Feb 07, 2008 at 09:05:04AM +0100, Luca Berra wrote:
On Wed, Feb 06, 2008 at 04:45:39PM +0100, Keld Jørn Simonsen wrote:
On Wed, Feb 06, 2008 at 10:05:58AM +0100, Luca Berra wrote:
On Sat, Feb 02, 2008 at 08:41:31PM +0100, Keld Jørn Simonsen wrote:
Make each of the disks bootable
On Thu, Feb 07, 2008 at 06:40:12AM +0100, Iustin Pop wrote:
On Thu, Feb 07, 2008 at 01:31:16AM +0100, Keld Jørn Simonsen wrote:
Anyway, why does a SATA-II drive not deliver something like 300 MB/s?
Wait, are you talking about a *single* drive?
Yes, I was talking about a single drive
Hi
I am trying to get some order to linux raid info.
I think we should have a faq and a howto for the linux-raid list.
The list description at
http://vger.kernel.org/vger-lists.html#linux-raid
does list a FAQ, http://www.linuxdoc.org/FAQ/
I cannot read it just now - the server
As I understand it, there are 2 valid algorithms for writing in raid5.
1. calculate the parity data by XOR'ing all data of the relevant data
chunks.
2. calculate the parity data by kind of XOR-subtracting the old data to
be changed, and then XOR-adding the new data. (XOR-subtract and XOR-add
is
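The two methods give the same parity, because XOR-subtract and XOR-add
are both plain XOR (XOR is its own inverse). A small Python
demonstration with made-up chunk data:

```python
import os
from functools import reduce

def xor_blocks(*blocks):
    """Bytewise XOR of equal-sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, t) for t in zip(*blocks))

chunks = [os.urandom(16) for _ in range(3)]   # data chunks of one stripe
parity = xor_blocks(*chunks)

new0 = os.urandom(16)                         # chunk 0 is rewritten
# Method 1: recompute parity from all (new) data chunks.
full_recalc = xor_blocks(new0, chunks[1], chunks[2])
# Method 2: "XOR-subtract" the old data, "XOR-add" the new
# (the read-modify-write path: only old data + old parity are read).
rmw = xor_blocks(parity, chunks[0], new0)
assert full_recalc == rmw
```

Method 1 needs all the other data chunks; method 2 only needs the old
data and old parity, which is why it wins for small writes on wide arrays.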
On Fri, Feb 08, 2008 at 07:25:31AM +1100, Neil Brown wrote:
On Thursday February 7, [EMAIL PROTECTED] wrote:
As I understand it, there are 2 valid algorithms for writing in raid5.
1. calculate the parity data by XOR'ing all data of the relevant data
chunks.
2. calculate the parity
On Fri, Feb 08, 2008 at 12:51:39PM +1100, Neil Brown wrote:
On Friday February 8, [EMAIL PROTECTED] wrote:
On Fri, Feb 08, 2008 at 07:25:31AM +1100, Neil Brown wrote:
On Thursday February 7, [EMAIL PROTECTED] wrote:
So I hereby give the idea for inspiration to kernel hackers.
On Sun, Feb 10, 2008 at 10:05:13AM +, David Greaves wrote:
Keld Jørn Simonsen wrote:
The list description at
http://vger.kernel.org/vger-lists.html#linux-raid
does list a FAQ, http://www.linuxdoc.org/FAQ/
Yes, that should be amended. Drop them a line about the FAQ too
I will.
So
On Sun, Feb 10, 2008 at 06:21:08PM +, David Greaves wrote:
Keld Jørn Simonsen wrote:
I would then like that to be reflected in the main page.
I would rather that this be called Howto and FAQ - Linux raid
than Main Page - Linux Raid. Is that possible?
Just like C has a main() wiki's
Here are my testing scripts used in the performance howto:
http://linux-raid.osdl.org/index.php/Home_grown_testing_methods
=Hard disk performance scripts=
Here are the scripts that I used for my performance measuring. Use at your own
risk.
They destroy the contents of the partitions involved.
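The real scripts are at the URL above. As a minimal, non-destructive
Python sketch of the kind of sequential-read measurement involved (the
page-cache caveat in the comment is why such tests are run against raw
partitions much larger than RAM):

```python
import time

def seq_read_mb_s(path, bufsize=1 << 20):
    """Sequential-read throughput of a file or device in MB/s.
    Note: the page cache will inflate results for small files;
    meaningful disk numbers need a raw device or a file far
    larger than RAM."""
    total = 0
    t0 = time.perf_counter()
    with open(path, 'rb') as f:
        while True:
            buf = f.read(bufsize)
            if not buf:
                break
            total += len(buf)
    elapsed = time.perf_counter() - t0
    return total / elapsed / 1e6
```

Usage would be e.g. seq_read_mb_s('/dev/md0') as root, on an array
whose contents you do not mind reading end to end.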
I have put up a new howto text on performance:
http://linux-raid.osdl.org/index.php/Performance#Performance_of_raids_with_2_disks
Enjoy!
Keld
=Performance of raids with 2 disks=
I have made some testing of performance of different types of RAIDs,
with 2 disks involved. I have used my own home
This patch changes the disk to be read for layout far 1 to always be
the disk with the lowest block address.
Thus the chunks to be read will always be (for a fully functioning array)
from the first band of stripes, and the raid will then work as a raid0
consisting of the first band of stripes.
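For illustration, a Python sketch of the far-2 mapping this relies on
(the exact offset arithmetic here is my simplification, not taken from
the driver): the first copy of every chunk lies in the first band,
laid out exactly like raid0, so always reading the low-address copy
turns reads into raid0 reads.

```python
def far2_copies(chunk, ndisks, disk_chunks):
    """(disk, offset) of both copies of a chunk in a raid10,f2 array
    of ndisks disks, each holding disk_chunks chunks. First copy:
    raid0-style in the first half of every disk; second copy: shifted
    one disk, in the second half. Simplified model, not driver code."""
    row, disk = divmod(chunk, ndisks)
    first = (disk, row)                                    # first band
    second = ((disk + 1) % ndisks, disk_chunks // 2 + row) # far copy
    return first, second

def read_disk(chunk, ndisks):
    """With the patch, reads always go to the lowest-address copy --
    the same disk a plain raid0 of ndisks disks would use."""
    return chunk % ndisks
```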
Hi
any opinions on suns zfs/raid-z?
It seems like a good way to avoid the performance problems of raid-5
/raid-6
But does it stripe? One could think that rewriting stripes
in other places would damage the striping effects.
Or is the performance only meant to be good for random read/write?
Can the
On Mon, Feb 18, 2008 at 09:51:15PM +1100, Neil Brown wrote:
On Monday February 18, [EMAIL PROTECTED] wrote:
On Mon, Feb 18, 2008 at 03:07:44PM +1100, Neil Brown wrote:
On Sunday February 17, [EMAIL PROTECTED] wrote:
Hi
It seems like a good way to avoid the performance
I made a reference to your work in the wiki howto on performance.
Thanks!
Keld
On Fri, Feb 22, 2008 at 04:14:05AM +, Nat Makarevitch wrote:
'md' performs wonderfully. Thanks to every contributor!
I pitted it against a 3ware 9650 and 'md' won on nearly every account (albeit
on
RAID5