Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance (fixed)

2010-02-27 Thread Dan Naumov
Hello folks

A few weeks ago, there was a discussion started by me regarding
abysmal read/write performance using a ZFS mirror on 8.0-RELEASE. I was
using an Atom 330 system with 2GB RAM, and it was pointed out to me
that my problem was most likely having both disks attached to a PCI
SIL3124 controller; switching to the new AHCI drivers didn't help one
bit. To reiterate, here are the Bonnie and DD numbers I got on that
system:

===

Atom 330 / 2gb ram / Intel board + PCI SIL3124

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
         8192 21041 53.5 22644 19.4 13724 12.8 25321 48.5 43110 14.0 143.2  3.3

dd if=/dev/zero of=/root/test1 bs=1M count=4096
4096+0 records in
4096+0 records out
4294967296 bytes transferred in 143.878615 secs (29851325 bytes/sec) (28,4 mb/s)

===

Since then, I switched the exact same disks to a different system:
Atom D510 / 4gb ram / Supermicro X7SPA-H / ICH9R controller (native).
Here are the updated results:

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
         8192 30057 68.7 50965 36.4 27236 21.3 33317 58.0 53051 14.3 172.4  3.2

dd if=/dev/zero of=/root/test1 bs=1M count=4096
4096+0 records in
4096+0 records out
4294967296 bytes transferred in 54.977978 secs (78121594 bytes/sec) (74,5 mb/s)

===

Write performance now seems to have increased by a factor of 2 to 3
and is now definitely in line with the expected performance of the
disks in question (cheap 2TB WD20EADS with 32MB cache). Thanks to
everyone who offered help and tips!


- Sincerely,
Dan Naumov


Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-02-03 Thread James R. Van Artsdalen
Dan Naumov wrote:
 [j...@atombsd ~]$ dd if=/dev/zero of=/home/jago/test2 bs=1M count=4096
 4096+0 records in
 4096+0 records out
 4294967296 bytes transferred in 143.878615 secs (29851325 bytes/sec)

 This works out to 1GB in 36,2 seconds / 28,2mb/s in the first test and
 4GB in 143.8 seconds / 28,4mb/s
For the record, better results can be seen.  In my test I put 3 Seagate
Barracuda XT drives in a port multiplier and connected that to one port
of a PCIe 3124 card.

The MIRROR case is at about the I/O bandwidth limit of those drives.

[r...@kraken ~]# zpool create tmpx ada{2,3,4}  
[r...@kraken ~]# dd if=/dev/zero of=/tmpx/test2 bs=1M count=4096
4096+0 records in
4096+0 records out
4294967296 bytes transferred in 20.892818 secs (205571470 bytes/sec)
[r...@kraken ~]# zpool destroy tmpx
[r...@kraken ~]# zpool create tmpx mirror ada{2,3}
[r...@kraken ~]# dd if=/dev/zero of=/tmpx/test2 bs=1M count=4096
4096+0 records in
4096+0 records out
4294967296 bytes transferred in 36.432818 secs (117887321 bytes/sec)
[r...@kraken ~]#
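
(For anyone repeating a test like this, it can be useful to watch how the
load spreads across the member disks while dd runs; a minimal sketch using
gstat, which is also suggested later in this thread - pool, file and device
names as in the example above:)

# In one terminal: generate a sustained sequential write load on the pool.
dd if=/dev/zero of=/tmpx/test2 bs=1M count=4096

# In a second terminal: per-disk I/O statistics, refreshed every second;
# roughly equal KB/s on ada2 and ada3 suggests the shared bus, not a
# single disk, is the limit.
gstat -I 1s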



Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-25 Thread Bob Friesenhahn

On Mon, 25 Jan 2010, Dan Naumov wrote:


I've checked with the manufacturer and it seems that the Sil3124 in
this NAS is indeed a PCI card. More info on the card in question is
available at http://green-pcs.co.uk/2009/01/28/tranquil-bbs2-those-pci-cards/
I have the card described later on the page, the one with 4 SATA ports
and no eSATA. Alright, so it being PCI is probably a bottleneck in
some ways, but that still doesn't explain the performance THAT bad,
considering that same hardware, same disks, same disk controller push
 over 65mb/s in both reads and writes in Win2008. And again, I am
pretty sure that I've had close to expected results when I was


The slow PCI bus and this card look like the bottleneck to me. 
Remember that your Win2008 tests were with just one disk, your zfs 
performance with just one disk was similar to Win2008, and your zfs 
performance with a mirror was just under 1/2 that.


I don't think that your performance results are necessarily out of 
line for the hardware you are using.


On an old Sun SPARC workstation with retrofitted 15K RPM drives on an
Ultra-160 SCSI channel, I see a zfs mirror write performance of
67,317KB/second and a read performance of 124,347KB/second.  The
drives themselves are capable of 100MB/second range performance. 
Similar to yourself, I see 1/2 the write performance due to bandwidth 
limitations.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-25 Thread Dan Naumov
On Mon, Jan 25, 2010 at 7:33 AM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
 [quoted text trimmed]

There is lots of very sweet irony in my particular situation.
Initially I was planning to use a single X25-M 80GB SSD in the
motherboard SATA port for the actual OS installation, as well as to
dedicate 50GB of it to become a designated L2ARC vdev for my ZFS
mirrors. The SSD attached to the motherboard port would be recognized
only as a SATA150 device for some reason, but I was still seeing
150MB/s throughput and sub-0.1 ms latencies on that disk simply
because of how crazy good the X25-Ms are. However, I ended up having
very bad issues with the IcyDock 2.5" to 3.5" converter bracket I was
using to fit the SSD into the system: it would randomly drop
write IO under heavy load due to bad connectors. Having finally figured
out why my OS installations on the SSD kept going belly up while
applying updates, I decided to move the SSD to my desktop and use it
there instead, additionally thinking that perhaps my idea of the
SSD was crazy overkill for what I need the system to do. Ironically,
now that I am seeing how horrible the performance is when I am
operating on the mirror through this PCI card, I realize that my idea
was actually pretty bloody brilliant, I just didn't really know why at
the time.

An L2ARC device on the motherboard port would really help me with
random read IO, but to work around the utterly poor write performance,
I would also need a dedicated SLOG ZIL device. The catch is that while
L2ARC devices can be removed from the pool at will (should the device
up and die all of a sudden), dedicated ZILs cannot, and currently a
missing ZIL device will render the pool it is part of unable to
import, making it inaccessible. There is some work happening in
Solaris to implement removing SLOGs from a pool, but that work hasn't
found its way into FreeBSD yet.
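
(For reference, adding such devices to an existing pool is done with
"zpool add"; a minimal sketch with hypothetical pool and SSD partition
names - and note that at the time of this thread only the cache device
could later be removed again:)

# Add an SSD partition as an L2ARC cache device; it can be removed later
# with "zpool remove" if the SSD dies or is repurposed.
zpool add tank cache ada4p2

# Add another partition as a dedicated log (SLOG/ZIL) device; on this
# ZFS version a log vdev cannot be removed once added.
zpool add tank log ada4p3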


- Sincerely,
Dan Naumov



Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-25 Thread Dan Naumov
On Mon, Jan 25, 2010 at 9:34 AM, Dan Naumov dan.nau...@gmail.com wrote:
 [quoted text trimmed]

OK, final question: if/when I go about adding more disks to the system
and want redundancy, am I right in thinking that a ZFS pool of a
disk1+disk2 mirror plus a disk3+disk4 mirror (a la RAID10) would completely
murder my write and read performance, even way below the current 28mb/s
/ 50mb/s I am seeing with 2 disks on that PCI controller, and that in
order to have the least negative impact, I should simply have 2
independent mirrors in 2 independent pools (with the 5th disk slot in
the NAS given to a non-redundant single disk running off the one
available SATA port on the motherboard)?
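
(The two layouts being weighed would be created roughly like this - a
sketch only, with placeholder device names:)

# Option 1: a single pool striped over two mirror vdevs (RAID10-style);
# all four disks then share the PCI card's bandwidth.
zpool create tank mirror ada1 ada2 mirror ada3 ada4

# Option 2: two independent pools, each a two-disk mirror.
zpool create pool1 mirror ada1 ada2
zpool create pool2 mirror ada3 ada4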

- Sincerely,
Dan Naumov


Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-25 Thread Thomas Burgess
It depends on the bandwidth of the bus that it is on and the controller
itself.

I like to use pci-x with aoc-sat2-mv8 cards or pci-e cards... that way you
get a lot more bandwidth.

On Mon, Jan 25, 2010 at 3:32 AM, Dan Naumov dan.nau...@gmail.com wrote:

  [quoted text trimmed]
 OK final question: if/when I go about adding more disks to the system
 and want redundancy, am I right in thinking that: ZFS pool of
 disk1+disk2 mirror + disk3+disk4 mirror (a la RAID10) would completely
 murder my write and read performance even way below the current 28mb/s
 / 50mb/s I am seeing with 2 disks on that PCI controller and that in
 order to have the least negative impact, I should simply have 2
 independent mirrors in 2 independent pools (with the 5th disk slot in
 the NAS given to a non-redundant single disk running off the one
 available SATA port on the motherboard)?

 - Sincerely,
 Dan Naumov


Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-25 Thread Pete French
 I like to use pci-x with aoc-sat2-mv8 cards or pci-e cards... that way you
 get a lot more bandwidth.

I would go along with that - I have precisely the same controller, with
a pair of eSATA drives, running ZFS mirrored, and I get a nice 100
meg/second out of them if I try. My controller is, however, on PCI-X, not
PCI. It's a shame PCI-X appears to have gone the way of the dinosaur :-(

-pete.


Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-25 Thread Artem Belevich
The aoc-sat2-mv8 was somewhat slower compared to ICH9 or LSI1068
controllers when I tried it with 6 and 8 disks.
I think the problem is that the MV8 only does 32K per transfer, and that
does seem to matter when you have 8 drives hooked up to it. I don't
have hard numbers, but peak throughput of the MV8 with an 8-disk raidz2 was
noticeably lower than that of the LSI1068 in the same configuration. Both
the LSI1068 and the MV8 were on the same PCI-X bus. It could be a driver
limitation. The driver for Marvell SATA controllers in NetBSD seems a
bit more advanced compared to what's in FreeBSD.

I wish Intel would make a cheap multi-port PCIe SATA card based on their
AHCI controllers.

--Artem

On Mon, Jan 25, 2010 at 3:29 AM, Pete French
petefre...@ticketswitch.com wrote:
 I like to use pci-x with aoc-sat2-mv8 cards or pci-e cards... that way you
 get a lot more bandwidth.

 I would go along with that - I have precisely the same controller, with
 a pair of eSATA drives, running ZFS mirrored, and I get a nice 100
 meg/second out of them if I try. My controller is, however, on PCI-X, not
 PCI. It's a shame PCI-X appears to have gone the way of the dinosaur :-(

 -pete.


Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-25 Thread Alexander Motin
Artem Belevich wrote:
 aoc-sat2-mv8 was somewhat slower compared to ICH9 or LSI1068
 controllers when I tried it with 6 and 8 disks.
 I think the problem is that MV8 only does 32K per transfer and that
 does seem to matter when you have 8 drives hooked up to it. I don't
 have hard numbers, but peak throughput of MV8 with 8-disk raidz2 was
 noticeably lower than that of LSI1068 in the same configuration. Both
 LSI1068 and MV2 were on the same PCI-X bus. It could be a driver
 limitation. The driver for Marvel SATA controllers in NetBSD seems a
 bit more advanced compared to what's in FreeBSD.

I also wouldn't recommend using Marvell 88SXx0xx controllers now. While
they are potentially interesting, lack of documentation and numerous
hardware bugs make the existing FreeBSD driver very limited.

 I wish intel would make cheap multi-port PCIe SATA card based on their
 AHCI controllers.

Indeed. Intel on-board AHCI SATA controllers are the fastest of all I have
tested. Unluckily, they are not producing discrete versions. :(

Now, if a discrete solution is really needed, I would still recommend the
SiI3124, but with a proper PCI-X 64-bit/133MHz bus or a built-in PCIe x8
bridge. They are fast and have the good new siis driver.
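
(As mentioned further down in this thread, switching an existing install
over to siis(4) is just a matter of loading the module at boot; a minimal
sketch - the disks then re-appear as ada devices, so anything still
referring to the old ad device names needs updating:)

# /boot/loader.conf
siis_load="YES"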

 On Mon, Jan 25, 2010 at 3:29 AM, Pete French
 petefre...@ticketswitch.com wrote:
 I like to use pci-x with aoc-sat2-mv8 cards or pci-e cards... that way you
 get a lot more bandwidth.
 I would go along with that - I have precisely the same controller, with
 a pair of eSATA drives, running ZFS mirrored, and I get a nice 100
 meg/second out of them if I try. My controller is, however, on PCI-X, not
 PCI. It's a shame PCI-X appears to have gone the way of the dinosaur :-(

-- 
Alexander Motin


Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-25 Thread Dan Naumov
On Mon, Jan 25, 2010 at 7:40 PM, Alexander Motin m...@freebsd.org wrote:
 [quoted text trimmed]

Alexander, since you seem to be experienced in this area, what do you
think of these two for use in a FreeBSD 8 ZFS NAS:

http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPA.cfm?typ=H
http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPA.cfm?typ=HIPMI=Y

- Sincerely,
Dan Naumov


Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-25 Thread Alexander Motin
Dan Naumov wrote:
 Alexander, since you seem to be experienced in the area, what do you
 think of these 2 for use in a FreeBSD8 ZFS NAS:
 
 http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPA.cfm?typ=H
 http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPA.cfm?typ=HIPMI=Y

Unluckily I haven't yet touched the Atom family closely, so I can't say
anything about its performance. But the higher desktop-level (even if a bit
old) ICH9R chipset there is IMHO a good option. It is MUCH better than the
ICH7 often used with previous Atoms. If I had a nice small Mini-ITX case
with 6 drive bays, I would definitely look for a board like that to build
home storage.

-- 
Alexander Motin


Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-25 Thread Dan Naumov
On Mon, Jan 25, 2010 at 8:32 PM, Alexander Motin m...@freebsd.org wrote:
 Dan Naumov wrote:
 Alexander, since you seem to be experienced in the area, what do you
 think of these 2 for use in a FreeBSD8 ZFS NAS:

 http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPA.cfm?typ=H
 http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPA.cfm?typ=HIPMI=Y

 Unluckily I haven't yet touched Atom family close yet, so I can't say
 about it's performance. But higher desktop level (even bit old) ICH9R
 chipset there is IMHO a good option. It is MUCH better then ICH7, often
 used with previous Atoms. If I had nice small Mini-ITX case with 6 drive
 bays, I would definitely look for some board like that to build home
 storage.

 --
 Alexander Motin

CPU-performance-wise, I am not really worried. The current system is
an Atom 330, and even that is a bit overkill for what I do with it, and
from what I am seeing, the new Atom D510 used on those boards is a
tiny bit faster. What I want and care about for this system are
reliability, stability, low power use, quietness and fast disk
read/write speeds. I've been hearing some praise of the ICH9R, and 6 native
SATA ports should be enough for my needs. AFAIK, the Intel 82574L
network cards included on those boards are also very well supported?

- Sincerely,
Dan Naumov


Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-25 Thread Chris Whitehouse

Dan Naumov wrote:


CPU-performance-wise, I am not really worried. The current system is
an Atom 330 and even that is a bit overkill for what I do with it and
from what I am seeing, the new Atom D510 used on those boards is a
tiny bit faster. What I want and care about for this system are
reliability, stability, low power use, quietness and fast disk
read/write speeds. I've been hearing some praise of ICH9R and 6 native
SATA ports should be enough for my needs. AFAIK, the Intel 82574L
network cards included on those are also very well supported?



These might be interesting then:
www.fit-pc.com
The Intel US15W SCH chipset, or System Controller Hub as it's called, is
mentioned in the hardware notes for 8.0R and 7.2R, but only for snd_hda; I
don't know if this means other functions are supported or not. This
thread says it is supported:
http://mail-index.netbsd.org/port-i386/2010/01/03/msg001695.html

Chris

ps I removed some of the recipients from the recipients list as my
original post was held for moderation because of "Too many recipients to
the message"





Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-25 Thread Alexander Motin
Chris Whitehouse wrote:
 Dan Naumov wrote:

 CPU-performance-wise, I am not really worried. The current system is
 an Atom 330 and even that is a bit overkill for what I do with it and
 from what I am seeing, the new Atom D510 used on those boards is a
 tiny bit faster. What I want and care about for this system are
 reliability, stability, low power use, quietness and fast disk
 read/write speeds. I've been hearing some praise of ICH9R and 6 native
 SATA ports should be enough for my needs. AFAIK, the Intel 82574L
 network cards included on those are also very well supported?
 
 These might be interesting then
 www.fit-pc.com
 The Intel US15W SCH chipset or System Controller Hub as it's called is
 mentioned in hardware notes for 8.0R and 7.2R but only for snd_hda, I
 don't know if this means other functions are supported or not. This
 thread says it is supported
 http://mail-index.netbsd.org/port-i386/2010/01/03/msg001695.html

The Intel US15W (SCH) chipset is heavily stripped down and tuned for
netbooks. It has no SATA, only one PATA channel. It is mostly supported by
FreeBSD, but with the exception of video, which makes it close to useless.
It has only one benefit - low power consumption.

-- 
Alexander Motin


Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-25 Thread Chris Whitehouse

Alexander Motin wrote:

Chris Whitehouse wrote:

Dan Naumov wrote:

CPU-performance-wise, I am not really worried. The current system is
an Atom 330 and even that is a bit overkill for what I do with it and
from what I am seeing, the new Atom D510 used on those boards is a
tiny bit faster. What I want and care about for this system are
reliability, stability, low power use, quietness and fast disk
read/write speeds. I've been hearing some praise of ICH9R and 6 native
SATA ports should be enough for my needs. AFAIK, the Intel 82574L
network cards included on those are also very well supported?

These might be interesting then
www.fit-pc.com
The Intel US15W SCH chipset or System Controller Hub as it's called is
mentioned in hardware notes for 8.0R and 7.2R but only for snd_hda, I
don't know if this means other functions are supported or not. This
thread says it is supported
http://mail-index.netbsd.org/port-i386/2010/01/03/msg001695.html


Intel US15W (SCH) chipset heavily stripped and tuned for netbooks. It
has no SATA, only one PATA channel. It is mostly supported by FreeBSD,
but with exception of video, which makes it close to useless. it has
only one benefit - low power consumption.

The Intel spec sheet does say single PATA, but according to the fit-pc
website it has SATA and miniSD. Still, as you say, without video support
it's not much use, which is useful to know as I had been looking at
these. OK, I will go away now :O


Chris


Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-25 Thread Daniel O'Connor
On Tue, 26 Jan 2010, Dan Naumov wrote:
 CPU-performance-wise, I am not really worried. The current system is
 an Atom 330 and even that is a bit overkill for what I do with it and
 from what I am seeing, the new Atom D510 used on those boards is a
 tiny bit faster. What I want and care about for this system are
 reliability, stability, low power use, quietness and fast disk
 read/write speeds. I've been hearing some praise of ICH9R and 6
 native SATA ports should be enough for my needs. AFAIK, the Intel
 82574L network cards included on those are also very well supported?

You might want to consider an Athlon (maybe underclock it) - the AMD IXP 
700/800 south bridge seems to work well with FreeBSD (in my 
experience).

These boards (eg Gigabyte GA-MA785GM-US2H) have 6 SATA ports (one may be 
eSATA though) and PATA, they seem ideal really.. You can use PATA with 
CF to boot and connect 5 disks plus a DVD drive.

The CPU is not fanless however, but the other stuff is, on the plus side 
you won't have to worry about CPU power :)

Also, the onboard video works well with radeonhd and is quite fast.

One other downside is the onboard network isn't great (Realtek) but I 
put an em card in mine.

-- 
Daniel O'Connor software and network engineer
for Genesis Software - http://www.gsoft.com.au
The nice thing about standards is that there
are so many of them to choose from.
  -- Andrew Tanenbaum
GPG Fingerprint - 5596 B766 97C0 0E94 4347 295E E593 DC20 7B3F CE8C




Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-24 Thread Dan Naumov
On Sun, Jan 24, 2010 at 7:05 PM, Jason Edwards sub.m...@gmail.com wrote:
 Hi Dan,

 I read on the FreeBSD mailing list that you had some performance issues with
 ZFS. Perhaps I can help you with that.

 You seem to be running a single mirror, which means you won't have any speed
 benefit regarding writes, and usually RAID1 implementations offer little to
 no acceleration of read requests either; some even just read from the master
 disk and don't touch the 'slave' mirrored disk except when writing. ZFS is
 a lot more modern, however, although I did not test the performance of its
 mirror implementation.

 But, benchmarking I/O can be tricky:

 1) you use bonnie, but bonnie's tests are performed without a 'cooldown'
 period between the tests; meaning that when test 2 starts, data from test 1
 is still being processed. For single disks and simple I/O this is not so
 bad, but for large write-back buffers and more complex I/O buffering, this
 may be inappropriate. I had patched bonnie some time in the past, but if you
 just want a MB/s number you can use DD for that.

 2) The diskinfo tiny benchmark is single-queue only, I assume, meaning that
 it would not scale well, or at all, on RAID arrays. Actual filesystems on
 RAID arrays use multiple queues, meaning they would not read one sector at a
 time, but read 8 blocks (of 16KiB) ahead; this is called read-ahead, and
 for traditional UFS filesystems it's controlled by the sysctl vfs.read_max
 variable. ZFS works differently, but you still need a real
 benchmark.
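
(For reference, a sketch of inspecting and raising the UFS read-ahead
tunable mentioned above; ZFS has its own prefetch logic and does not use
this knob:)

# Show the current UFS read-ahead setting (number of blocks).
sysctl vfs.read_max

# Raise it, e.g. when benchmarking a UFS filesystem on a RAID array.
sysctl vfs.read_max=32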

 3) You need low-latency hardware; in particular, no PCI controller should be
 used. Only PCI-express based controllers or chipset-integrated Serial ATA
 controllers have proper performance. PCI can hurt performance very badly, and
 has high interrupt CPU usage. Generally you should avoid PCI. PCI-express is
 fine though; it's a completely different interface that is in many ways the
 opposite of what PCI was.

 4) Testing actual realistic I/O performance (in IOps) is very difficult. But
 testing sequential performance should be a lot easier. You may try using dd
 for this.


 For example, you can use dd on raw devices:

 dd if=/dev/ad4 of=/dev/null bs=1M count=1000

 I will explain each parameter:

 if=/dev/ad4 is the input file, the read source

 of=/dev/null is the output file, the write destination. /dev/null means it
 just goes nowhere, so this is a read-only benchmark.

 bs=1M is the block size, how much data to transfer per operation. The default
 is 512 bytes or the sector size, but that's very slow. A value between 64KiB
 and 1024KiB is appropriate; bs=1M will select 1MiB, i.e. 1024KiB.

 count=1000 means transfer 1000 pieces, and with bs=1M that means 1000 * 1MiB
 = 1000MiB.



 This example was raw reading sequentially from the start of the device
 /dev/ad4. If you want to test RAIDs, you need to work at the filesystem
 level. You can use dd for that too:

 dd if=/dev/zero of=/path/to/ZFS/mount/zerofile.000 bs=1M count=2000

 This command will read from /dev/zero (all zeroes) and write to a file on the
 ZFS-mounted filesystem; it will create the file zerofile.000 and write
 2000MiB of zeroes to that file.
 So this command tests the write performance of the ZFS-mounted filesystem. To
 test read performance, you need to clear the caches first by unmounting that
 filesystem and re-mounting it again. This frees up the memory holding cached
 parts of the filesystem (reported in top as Inact(ive) instead of Free).

 Please do make sure you double-check a dd command before running it, and run
 it as a normal user instead of root. A wrong dd command may write to the wrong
 destination and do things you don't want. The only real thing you need to
 check is the write destination (of=). That's where dd is going to write
 to, so make sure it's the target you intended. A common mistake of mine was
 to write dd of=... if=... (starting with of instead of if) and thus actually
 doing the opposite of what I meant to do. This can be disastrous if you work
 with live data, so be careful! ;-)

 Hope any of this was helpful. During the dd benchmark, you can of course
 open a second SSH terminal and start gstat to see the devices' current I/O
 stats.

 Kind regards,
 Jason

Hi and thanks for your tips, I appreciate it :)

[j...@atombsd ~]$ dd if=/dev/zero of=/home/jago/test1 bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 36.206372 secs (29656156 bytes/sec)

[j...@atombsd ~]$ dd if=/dev/zero of=/home/jago/test2 bs=1M count=4096
4096+0 records in
4096+0 records out
4294967296 bytes transferred in 143.878615 secs (29851325 bytes/sec)

This works out to 1GB in 36,2 seconds / 28,2mb/s in the first test and
4GB in 143.8 seconds / 28,4mb/s, which is roughly consistent with the
bonnie results. It also sadly seems to confirm the very slow speed :(
The disks are attached to a 4-port Sil3124 controller and again, my
Windows benchmarks showing 65mb/s+ were done on the exact same machine,
with the same disks attached to the same controller.
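
(A sketch of the read-back half of the test Jason describes, for
completeness; the dataset name is a guess - it should be whatever dataset
/home/jago lives on - and remounting it first drops the cached copy of
the file:)

# Remount the dataset so the test file is no longer cached in memory,
# then read it back to get a sequential read figure for the mirror.
zfs unmount tank/home && zfs mount tank/home
dd if=/home/jago/test2 of=/dev/null bs=1M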

Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-24 Thread Dan Naumov
On Sun, Jan 24, 2010 at 7:42 PM, Dan Naumov dan.nau...@gmail.com wrote:
 [quoted text trimmed; it repeats the previous message in full]

Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-24 Thread Dan Naumov
On Sun, Jan 24, 2010 at 8:12 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
 On Sun, 24 Jan 2010, Dan Naumov wrote:

 This works out to 1GB in 36,2 seconds / 28,2mb/s in the first test and
 4GB in 143.8 seconds / 28,4mb/s and somewhat consistent with the
 bonnie results. It also sadly seems to confirm the very slow speed :(
 The disks are attached to a 4-port Sil3124 controller and again, my
 Windows benchmarks showing 65mb/s+ were done on exact same machine,
 with same disks attached to the same controller. Only difference was
 that in Windows the disks weren't in a mirror configuration but were
 tested individually. I do understand that a mirror setup offers
 roughly the same write speed as individual disk, while the read speed
 usually varies from equal to individual disk speed to nearly the
 throughput of both disks combined depending on the implementation,
 but there is no obvious reason I am seeing why my setup offers both
 read and write speeds roughly 1/3 to 1/2 of what the individual disks
 are capable of. Dmesg shows:

 There is a misstatement in the above, namely that a mirror setup offers
 roughly the same write speed as an individual disk. It is possible for a
 mirror setup to offer a similar write speed to an individual disk, but it is
 also quite possible to get 1/2 (or even 1/3) the speed. A ZFS write to a
 mirror pair requires two independent writes. If these writes go down
 independent I/O paths, then there is hardly any overhead from the 2nd write.
 If the writes go through a bandwidth-limited shared path, then they will
 contend for that bandwidth and you will see much less write performance.

 As a simple test, you can temporarily remove the mirror device from the pool
 and see if the write performance dramatically improves. Before doing that,
 it is useful to see the output of 'iostat -x 30' while under heavy write
 load to see if one device shows a much higher svc_t value than the other.
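
(A sketch of the check Bob suggests, run while the pool is under write
load; svc_t is the per-device service time column in iostat's extended
output:)

# Generate write load in the background...
dd if=/dev/zero of=/home/jago/test3 bs=1M count=4096 &

# ...and watch extended per-device statistics every 30 seconds; one disk
# showing a much higher svc_t than the other points at that device or
# its path as the bottleneck.
iostat -x 30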

Ow, ow, WHOA:

atombsd# zpool offline tank ad8s1a

[j...@atombsd ~]$ dd if=/dev/zero of=/home/jago/test3 bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 16.826016 secs (63814382 bytes/sec)

Offlining one half of the mirror bumps DD write speed from 28mb/s to
64mb/s! Let's see how Bonnie results change:

Mirror with both parts attached:

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
         8192 18235 46.7 23137 19.9 13927 13.6 24818 49.3 44919 17.3 134.3  2.1

Mirror with 1 half offline:

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
         1024 22888 58.0 41832 35.1 22764 22.0 26775 52.3 54233 18.3 166.0  1.6

OK, the Bonnie results have improved, but only a little.
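
(For completeness, the temporarily offlined half can be brought back
afterwards and ZFS will resilver it; device name as used above:)

# Re-attach the offlined mirror half; ZFS resilvers it automatically.
zpool online tank ad8s1a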

- Sincerely,
Dan Naumov


Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-24 Thread Dan Naumov
On Sun, Jan 24, 2010 at 8:34 PM, Jason Edwards sub.m...@gmail.com wrote:
 ZFS writes to a mirror pair
 requires two independent writes.  If these writes go down independent I/O
 paths, then there is hardly any overhead from the 2nd write.  If the
 writes
 go through a bandwidth-limited shared path then they will contend for that
 bandwidth and you will see much less write performance.

 What he said may confirm my suspicion on PCI. So if you could try the same
 with real Serial ATA via chipset or PCI-e controller you can confirm this
 story. I would be very interested. :P

 Kind regards,
 Jason


This wouldn't explain why a ZFS mirror on 2 raw disks, on the exact
same controller (with the OS running off a separate disk), results in
expected performance, while having the OS run off a ZFS mirror
built on top of MBR-partitioned disks, on the same controller,
results in very low speed.

- Dan


Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-24 Thread Alexander Motin
Dan Naumov wrote:
 This works out to 1GB in 36,2 seconds / 28,2mb/s in the first test and
 4GB in 143.8 seconds / 28,4mb/s and somewhat consistent with the
 bonnie results. It also sadly seems to confirm the very slow speed :(
 The disks are attached to a 4-port Sil3124 controller and again, my
 Windows benchmarks showing 65mb/s+ were done on exact same machine,
 with same disks attached to the same controller. Only difference was
 that in Windows the disks weren't in a mirror configuration but were
 tested individually. I do understand that a mirror setup offers
 roughly the same write speed as individual disk, while the read speed
 usually varies from equal to individual disk speed to nearly the
 throughput of both disks combined depending on the implementation,
 but there is no obvious reason I am seeing why my setup offers both
 read and write speeds roughly 1/3 to 1/2 of what the individual disks
 are capable of. Dmesg shows:
 
 atapci0: SiI 3124 SATA300 controller port 0x1000-0x100f mem
 0x90108000-0x9010807f,0x9010-0x90107fff irq 21 at device 0.0 on
 pci4
 ad8: 1907729MB WDC WD20EADS-32R6B0 01.00A01 at ata4-master SATA300
 ad10: 1907729MB WDC WD20EADS-00R6B0 01.00A01 at ata5-master SATA300

8.0-RELEASE, and especially 8-STABLE, provide an alternative, much more
functional driver for this controller, named siis(4). If your SiI3124
card is installed in a proper bus (PCI-X or PCIe x4/x8), it can be really
fast (up to 1GB/s was measured).
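
(One way to check what kind of bus the card actually sits on is pciconf;
a sketch - the -c flag, which lists PCI capabilities, is available on
reasonably recent FreeBSD:)

# A SiI3124 behind a PCIe or PCI-X bridge will show a corresponding
# capability entry for its atapci/siis device; a plain PCI card will not.
pciconf -lvc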

-- 
Alexander Motin


Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-24 Thread Dan Naumov
On Sun, Jan 24, 2010 at 11:53 PM, Alexander Motin m...@freebsd.org wrote:
 Dan Naumov wrote:
 This works out to 1GB in 36,2 seconds / 28,2mb/s in the first test and
 4GB in 143.8 seconds / 28,4mb/s and somewhat consistent with the
 bonnie results. It also sadly seems to confirm the very slow speed :(
 The disks are attached to a 4-port Sil3124 controller and again, my
 Windows benchmarks showing 65mb/s+ were done on exact same machine,
 with same disks attached to the same controller. Only difference was
 that in Windows the disks weren't in a mirror configuration but were
 tested individually. I do understand that a mirror setup offers
 roughly the same write speed as individual disk, while the read speed
 usually varies from equal to individual disk speed to nearly the
 throughput of both disks combined depending on the implementation,
 but there is no obvious reason I am seeing why my setup offers both
 read and write speeds roughly 1/3 to 1/2 of what the individual disks
 are capable of. Dmesg shows:

 atapci0: SiI 3124 SATA300 controller port 0x1000-0x100f mem
 0x90108000-0x9010807f,0x9010-0x90107fff irq 21 at device 0.0 on
 pci4
 ad8: 1907729MB WDC WD20EADS-32R6B0 01.00A01 at ata4-master SATA300
 ad10: 1907729MB WDC WD20EADS-00R6B0 01.00A01 at ata5-master SATA300

 8.0-RELEASE, and especially 8-STABLE provide alternative, much more
 functional driver for this controller, named siis(4). If your SiI3124
 card installed into proper bus (PCI-X or PCIe x4/x8), it can be really
 fast (up to 1GB/s was measured).

 --
 Alexander Motin

Sadly, it seems that utilizing the new siis driver doesn't do much good:

Before utilizing siis:

iozone -s 4096M -r 512 -i0 -i1
              KB  reclen    write  rewrite     read   reread
         4194304     512    28796    28766    51610    50695

After enabling siis in loader.conf (and ensuring the disks show up as ada):

iozone -s 4096M -r 512 -i0 -i1

              KB  reclen    write  rewrite     read   reread
         4194304     512    28781    28897    47214    50540

I've checked with the manufacturer and it seems that the Sil3124 in
this NAS is indeed a PCI card. More info on the card in question is
available at http://green-pcs.co.uk/2009/01/28/tranquil-bbs2-those-pci-cards/
I have the card described later on the page, the one with 4 SATA ports
and no eSATA. Alright, so it being PCI is probably a bottleneck in
some ways, but that still doesn't explain performance THAT bad,
considering that the same hardware, same disks and same disk controller push
over 65mb/s in both reads and writes in Win2008. And again, I am
pretty sure that I had close to expected results when I was
booting a UFS FreeBSD installation off an SSD (attached directly to a
SATA port on the motherboard) while running the same kinds of
benchmarks with Bonnie and DD on a ZFS mirror made directly on top of
2 raw disks.


- Sincerely,
Dan Naumov


Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-24 Thread Dan Naumov
On Mon, Jan 25, 2010 at 2:14 AM, Dan Naumov dan.nau...@gmail.com wrote:
 [quoted text trimmed]

Just to add to the numbers above, exact same benchmark, on 1 disk
(detached 2nd disk from the mirror) while using the siis driver:

              KB  reclen    write  rewrite     read   reread
         4194304     512    57760    56371    68867    74047


- Dan


Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-24 Thread Alexander Motin
Dan Naumov wrote:
 [quoted text trimmed]

If both parts of the mirror use the same controller, it doubles the
controller's bus traffic. That may cut the bandwidth in half.

The main benefit of siis(4) is command queuing. You should see more
benefit from it on multithreaded random I/O.
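
(A sketch of the kind of multi-stream random I/O test where command
queuing via siis(4) would be expected to show up; the parameters are only
illustrative:)

# Throughput mode with 4 parallel threads, a 512MB file per thread and
# 64KB records; -i 0 creates the files, -i 2 exercises random read/write.
iozone -t 4 -s 512M -r 64 -i 0 -i 2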

-- 
Alexander Motin