On 03/25/2013 06:39 PM, Franz Schober wrote:
> Hi,

A bunch more things came to mind, see comments below:

> Test: Writing a 1 GB File
> time dd if=/dev/zero of=/tmp/testfile1 bs=128k count=8k
> Mean value of 3 tests on every system.
> (I also tried IOZone but the simple dd test seems to show the problem)

Make sure you write a lot more data per test run - give it 50-60 GB at
least, otherwise the test mostly measures your RAM, not storage. Btw:
"/tmp" is tmpfs, which is backed by your swap device, and that is
decidedly not a high-performance file system store.
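For example, something along these lines (the target path is just a
placeholder - point it at a dataset on your pool, and be aware that if
compression is enabled on that dataset, /dev/zero input compresses away
to nothing and inflates the numbers):

  # write ~64 GB of sequential data to the pool itself, not to tmpfs
  dd if=/dev/zero of=/yourpool/testfile bs=128k count=524288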

> New System:
> 4x Intel(R) Xeon(R) CPU E5-4620 0 @ 2.20GHz      2.1 seconds (around
> 487 MByte/s)
> Mainboard Supermicro X9QR7-TF+
> 256 GByte Ram in 16 x 16 GByte Registered ECC DDR3
> 
> Other Systems:
> Intel(R) Xeon(R) CPU E5-1650 0 @ 3.20GHz     0.79 s (1296 MByte/s)
> Intel(R) Xeon(R) CPU       E5606  @ 2.13GHz     0.9 s  (1140 MByte/s)
> Intel(R) Core(TM) i5-2400S CPU @ 2.50GHz     0.69 s (1485 MByte/s)
> 
> The reason I am asking is this:
> When I export a ZVOL on a ZFS pool (2x6 disks RAIDZ2 striped + ZIL),
> which writes at about 750 MB/s natively

Remember this is userspace throughput, but your zpool actually has to
compute and write parity (Reed-Solomon) on top of that 750 MB/s. Each
6-disk RAIDZ2 vdev writes 6 blocks for every 4 data blocks, so:
750 MB/s * (6/4) = 1125 MB/s
Spread over your 12 drives, that comes to roughly 93.75 MB/s per drive -
which appears to be normal performance for spinning rust.

iostat -Dn <pooldevices> 1

Should give you a much better overview of how busy your drives are. If
you are seeing close to 100% busy on the drives, no amount of memory
tuning will help you (your drives are saturated, not memory).
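If you prefer the extended statistics, the same thing can be read from:

  # one-second intervals; the %b column shows per-device busy percentage
  iostat -xn 1

Anything sustained near 100 in %b means that drive has no headroom left.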

>, over FC, then I only get 200 MB/s on the initiator.

What kind of client tests are you performing? Check via iostat how busy
your drives are, and use your FC interface tools (I don't know of any
myself) to check how busy your FC links are. I've seen a 4Gb FC link top
out at 200 MB/s in write throughput even on very large SANs (an IBM
"Shark" with several hundred 15k FC spindles).
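As a baseline on the initiator, you could do a raw sequential write
straight to the exported LUN, which takes the client filesystem out of
the picture entirely. A rough sketch, assuming an illumos initiator
(the device path is a placeholder for your LUN, and this overwrites
whatever is on it):

  # WARNING: destroys data on the LUN - scratch devices only
  dd if=/dev/zero of=/dev/rdsk/cXtYdZs0 bs=128k count=65536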

> I then tried to export RAM disks (they are slower than the tmpfs) and
> then tmpfs files, but the performance over FC was the same.

This should tell you that it's quite likely your memory subsystem isn't
the issue - check your fabric.
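If you're on illumos on the target side, fcinfo might be a useful first
stop (I haven't used it much myself, so take this as a pointer rather
than a recipe) - it should at least tell you what speed each HBA port
actually negotiated:

  # look at the "Current Speed" line for each port
  fcinfo hba-port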

> After that, I saw that the whole
> system performs much worse than
> my older ones (which have no FC).

Test the older ones over FC as well, so that you have a comparable
point of reference. Otherwise you're comparing apples and oranges.

Cheers,
--
Saso

