Mark Hahn wrote:
which is right at the edge of what I need. I want to read the doc on
stripe_cache_size before going huge; if that's in K, 10MB is a LOT of cache
when 256 works perfectly in RAID-0.
but they are basically unrelated. in r5/6, the stripe cache is absolutely
critical in caching parity chunks. in r0,
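The sysfs knob under discussion can be inspected directly. A rough sketch follows; the md0 device name and the 3-drive count are illustrative, and the entries x page x disks memory estimate is an approximation, not a figure from the thread:

```shell
# stripe_cache_size lives in sysfs and counts cache entries (pages) per
# member device, not bytes. Reading/setting it would look like:
#   cat /sys/block/md0/md/stripe_cache_size        # default is 256
#   echo 4096 > /sys/block/md0/md/stripe_cache_size
# Approximate memory consumed: entries * 4KiB page * nr_disks.
# For the default 256 on a hypothetical 3-drive array:
kib=$((256 * 4 * 3))
echo "${kib} KiB"    # 3072 KiB, i.e. about 3 MiB
```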
Neil Brown wrote:
On Friday December 8, [EMAIL PROTECTED] wrote:
I have measured very slow write throughput for raid5 as well, though
2.6.18 does seem to have the same problem. I'll double check and do a
git bisect and see what I can come up with.
Correction... it isn't 2.6.18 that fixes the problem. It is
On 12/12/06, Bill Davidsen [EMAIL PROTECTED] wrote:
Neil Brown wrote:
On Friday December 8, [EMAIL PROTECTED] wrote:
I have measured very slow write throughput for raid5 as well, though
2.6.18 does seem to have the same problem. I'll double check and do a
git bisect and see what I can come
On Thu, Dec 07, 2006 at 10:51:25AM -0500, Bill Davidsen wrote:
I also suspect that writes are not being combined, since writing the 2GB
test runs at one-drive speed writing 1MB blocks, but floppy speed
writing 2k blocks. And no, I'm not running out of CPU to do the
overhead, it jumps from
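The 1MB-vs-2k comparison described above can be reproduced in miniature with dd. The temp file and 16MiB size here are stand-ins for the 2GB run against the md array; conv=fdatasync makes dd's timing reflect the actual writes rather than the page cache:

```shell
# Same total bytes, two different request sizes.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=16   conv=fdatasync 2>&1 | tail -n1
dd if=/dev/zero of="$f" bs=2k count=8192 conv=fdatasync 2>&1 | tail -n1
bytes=$(wc -c < "$f" | tr -d ' ')
rm -f "$f"
echo "$bytes bytes per run"   # both runs write 16 MiB
```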
Dan Williams wrote:
On 12/1/06, Bill Davidsen [EMAIL PROTECTED] wrote:
Thank you so much for verifying this. I do keep enough room on my drives
to run tests by creating any kind of whatever I need, but the point is
clear: with N drives striped the transfer rate is N x base rate of one
drive; with RAID-5 it is about
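As a back-of-envelope check of the N-drives claim: the 60 MB/s per-drive streaming rate below is an assumed round number, not one reported in the thread, and the (N-1) raid5 figure is the textbook ideal for full-stripe writes, not a measurement.

```shell
base=60   # MB/s per drive, single-drive streaming write (assumed)
n=3       # matches the 3xSATA array described elsewhere in the thread
raid0_mb=$((n * base))          # striping: N x base rate
raid5_mb=$(((n - 1) * base))    # parity costs one drive's worth
echo "raid0 ideal: ${raid0_mb} MB/s"
echo "raid5 ideal: ${raid5_mb} MB/s"
```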
On Monday December 4, [EMAIL PROTECTED] wrote:
Here is where I step into supposition territory. Perhaps the
discrepancy is related to the size of the requests going to the block
layer. raid5 always makes page sized requests with the expectation
that they will coalesce into larger requests
Roger Lucas wrote:
What drive configuration are you using (SCSI / ATA / SATA), what chipset
is providing the disk interface and what cpu are you running with?
3xSATA, Seagate 320 ST3320620AS, Intel 6600, ICH7 controller using the
ata-piix driver, with drive cache set to write-back. It's not obvious to
me why
Pardon if you see this twice, I sent it last night and it never showed up...
I was seeing some bad disk performance on a new install of Fedora Core
6, so I did some measurements of write speed, and it would appear that
write performance is so slow it can't write my data as fast as it is
-Original Message-
From: [EMAIL PROTECTED] [mailto:linux-raid-[EMAIL PROTECTED]] On Behalf Of Bill Davidsen
Sent: 30 November 2006 14:13
To: linux-raid@vger.kernel.org
Subject: Odd (slow) RAID performance
Pardon if you see this twice, I sent it last night and it never showed up