Steve Peterson wrote:
Petri -- thanks for the idea.
I ran two dds in parallel; they took roughly twice as long in
wall-clock time, with each getting about half the throughput of the
single dd. On my system it doesn't look like how the work is offered
to the disk subsystem matters.
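For reference, the parallel test was along these lines (the offsets
and counts here are illustrative, not the exact values I used):
# two concurrent sequential reads; skip= just keeps the second
# stream in a different region of the volume
$ dd if=/dev/gvinum/vol1 of=/dev/null bs=1m count=1000 &
$ dd if=/dev/gvinum/vol1 of=/dev/null bs=1m count=1000 skip=1000 &
$ wait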
This is the same test I ran, with similar results, before I abandoned
vinum. The performance from the same disks using either graid3 or a
real hardware RAID controller is significantly greater, so I think
something in vinum is blocking parallelism.
I guess the fundamental question is this -- if I have a 4-disk
subsystem that supports an aggregate ~100MB/sec raw transfer rate to
the underlying disks, is it reasonable to expect only ~5MB/sec from a
RAID5 hosted on that subsystem, a 95% overhead?
In my opinion, no.
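Back-of-the-envelope, using textbook RAID5 behavior rather than
anything measured on this box: even the worst-case small-write path
costs four I/Os per logical write (read old data, read old parity,
write both back), which would still leave roughly 100/4 = 25MB/sec,
and large sequential writes should do better than that by writing
full stripes.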
Steve
At 01:19 PM 10/28/2006, Petri Helenius wrote:
According to my understanding, vinum does not overlap requests to
multiple disks when running in a RAID5 configuration, so you're not
going to achieve good numbers with just "single stream" tests.
Pete
Steve Peterson wrote:
Eric -- thanks for looking at my issue. Here's a dd reading from
one of the disks underlying the array (the others have basically the
same performance):
$ time dd if=/dev/ad10 of=/dev/null bs=1m count=1000
1000+0 records in
1000+0 records out
1048576000 bytes transferred in 15.322421 secs (68434095 bytes/sec)
0.008u 0.506s 0:15.33 3.2% 20+2715k 0+0io 0pf+0w
and here's a dd reading from the raw gvinum device /dev/gvinum/vol1:
$ time dd if=/dev/gvinum/vol1 of=/dev/null bs=1m count=1000
1000+0 records in
1000+0 records out
1048576000 bytes transferred in 25.870684 secs (40531437 bytes/sec)
0.006u 0.552s 0:25.88 2.1% 23+3145k 0+0io 0pf+0w
Is there a way to nondestructively write to the raw disk or gvinum
device?
For comparison, here's a read against the raw PATA device on the
machine:
$ time dd if=/dev/ad0 of=/dev/null bs=1m count=1000
1000+0 records in
1000+0 records out
1048576000 bytes transferred in 26.096070 secs (40181376 bytes/sec)
0.013u 0.538s 0:26.10 2.0% 24+3322k 0+0io 0pf+0w
Steve
At 11:14 PM 10/27/2006, Eric Anderson wrote:
On 10/27/06 18:03, Steve Peterson wrote:
I recently set up a media server for home use and decided to try
the gvinum raid5 support to learn about it and see how it
performs. It seems slower than I'd expect -- a little under
6MB/second, with about 50 IOs/drive/second -- and I'm trying to
understand why. Any assistance/pointers would be appreciated.
The disk system consists of 4 Seagate NL35 SATA ST3250623NS drives
connected to a Promise TX4300 (PDC40719) controller, organized as
a RAID5 volume via gvinum using this configuration:
drive drive01 device /dev/ad10
drive drive02 device /dev/ad6
drive drive03 device /dev/ad4
drive drive04 device /dev/ad8
volume vol1
  plex org raid5 256k
    sd length 200001m drive drive01
    sd length 200001m drive drive02
    sd length 200001m drive drive03
    sd length 200001m drive drive04
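(For anyone reproducing this: the configuration above goes in a plain
text file -- raid5.conf below is just an arbitrary name -- and is
loaded with gvinum's create command, after which the volume can be
newfs'd.)
$ gvinum create raid5.conf
$ newfs /dev/gvinum/vol1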
dd reports the following performance on a 1G file write to the
RAID5 hosted volume:
$ time dd if=/dev/zero of=big.file bs=1m count=1000
1000+0 records in
1000+0 records out
1048576000 bytes transferred in 179.717742 secs (5834571 bytes/sec)
179.76 real 0.02 user 16.60 sys
By comparison, creating the same file on the system disk (an old
ATA ST380021A connected via a SIS 730 on the motherboard):
time dd if=/dev/zero of=big.file bs=1m count=1000
1000+0 records in
1000+0 records out
1048576000 bytes transferred in 28.264056 secs (37099275 bytes/sec)
28.32 real 0.01 user 19.13 sys
and vmstat reports about 280-300 IOs/second to that drive.
The CPU is pretty weak -- an Athlon 750. Is that the source of my
problem? If you look at the vmstat output below, the machine is
busy but not pegged.
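For anyone repeating the measurement, the per-drive I/O rates can
also be watched live with iostat, one report per second, naming the
devices from the configuration above:
$ iostat -w 1 ad4 ad6 ad8 ad10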
Try the dd to the raw gvinum device instead of through a
filesystem, and also to the individual disks. That will at least
tell us where to look.
Eric
--
------------------------------------------------------------------------
Eric Anderson    Sr. Systems Administrator    Centaur Technology
Anything that works is better than anything that doesn't.
------------------------------------------------------------------------
_______________________________________________
[email protected] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-performance
To unsubscribe, send any mail to "[EMAIL PROTECTED]"