Thanks for the reply.

Some background: the server is freshly installed, and the pools were created right before running the tests.


Some comments below....

On 10/31/2011 10:33 PM, Paul Kraus wrote:
A couple points in line below ...

On Wed, Oct 26, 2011 at 10:56 PM, weiliam.hong<weiliam.h...@gmail.com>  wrote:

I have a fresh installation of OI151a:
- SM X8DTH, 12GB RAM, LSI 9211-8i (latest IT-mode firmware)
- pool_A : SG ES.2 Constellation (SAS)
- pool_B : WD RE4 (SATA)
- no settings in /etc/system
Load generation via 2 concurrent dd streams:
--------------------------------------------------
dd if=/dev/zero of=/pool_A/bigfile bs=1024k count=1000000
dd if=/dev/zero of=/pool_B/bigfile bs=1024k count=1000000
dd generates "straight line" data, all sequential.
yes.
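For reproducibility, here is a minimal sketch (my own wrapper, not from the original post) of how the two streams can be driven concurrently and timed; paths and counts are the ones from the test above:

```shell
#!/bin/sh
# Hypothetical wrapper: drive both dd streams in parallel and report the
# total elapsed time. Note count=1000000 at bs=1024k is ~1 TB per stream,
# so reduce the count for a dry run.
run_stream() {
    # $1 = output file, $2 = count of 1 MiB blocks to write
    dd if=/dev/zero of="$1" bs=1024k count="$2" 2>/dev/null
}

# Example (the actual load from the post):
#   start=$(date +%s)
#   run_stream /pool_A/bigfile 1000000 &
#   run_stream /pool_B/bigfile 1000000 &
#   wait                          # block until both streams finish
#   echo "elapsed: $(( $(date +%s) - start )) s"
```

Running both streams under one `wait` ensures the pools really are loaded at the same time, rather than one after the other.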
                capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
pool_A      15.5G  2.70T      0     50      0  6.29M
   mirror    15.5G  2.70T      0     50      0  6.29M
     c7t5000C50035062EC1d0      -      -      0     62      0  7.76M
     c8t5000C50034C03759d0      -      -      0     50      0  6.29M
----------  -----  -----  -----  -----  -----  -----
pool_B      28.0G  1.79T      0  1.07K      0   123M
   mirror    28.0G  1.79T      0  1.07K      0   123M
     c1t50014EE057FCD628d0      -      -      0  1.02K      0   123M
     c2t50014EE6ABB89957d0      -      -      0  1.02K      0   123M
What does `iostat -xnM c7t5000C50035062EC1d0 c8t5000C50034C03759d0
c1t50014EE057FCD628d0 c2t50014EE6ABB89957d0 1` show? That will give
you much more insight into the OS <-> drive interface.
The iostat numbers are similar. I will try to get the figures, though that is difficult now as the hardware has been taken off my hands.
What does `fsstat /pool_A /pool_B 1` show? That will give you much
more insight into the application <-> filesystem interface. In this
case "application" == "dd".

In my opinion, `zpool iostat -v` is somewhat limited in what you can
learn from it. The only thing I use it for these days is to see the
distribution of data and I/O across vdevs.

Questions:
1. Why do the SG SAS drives degrade to <10 MB/s after 10-15 min, while the
WD RE4 drives remain consistent at >100 MB/s?
Something changes to slow them down? Sorry for the obvious retort :-)
See what iostat has to say. If the %b column is climbing, then you are
slowly saturating the drives themselves, for example.
There is no other workload or user on this system. It was freshly installed and booted, and the pools were newly created.
2. Why do the SG SAS drives show only 70+ MB/s when the published figures
are >100 MB/s? Refer here:
"published" where?
http://www.seagate.com/www/en-au/products/enterprise-hard-drives/constellation-es/constellation-es-2/#tTabContentSpecifications


What does a "dd" to the device itself (no ZFS, no
FS at all) show? For example, `dd if=/dev/zero
of=/dev/dsk/c7t5000C50035062EC1d0s0 bs=1024k count=1000000` (after you
destroy the zpool and use format to create an s0 spanning the entire disk).
This will test the device driver / HBA / drive with no FS or volume
manager involved. Use iostat to watch the OS <-> drive interface.
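Paul's procedure can be sketched roughly as follows. The disk name is the one from the zpool output above, the helper name is my own, and the raw write is destructive, so the target is parameterized to allow a safe dry run against a scratch file:

```shell
#!/bin/sh
# Sketch of the raw-device test (assumption: same disk as in the zpool
# output). Writing through /dev/rdsk bypasses ZFS and any filesystem, so
# iostat then shows the bare device driver / HBA / drive path.
DISK=c7t5000C50035062EC1d0

run_raw_write() {
    # $1 = target (raw device, or a scratch file for a dry run)
    # $2 = count of 1 MiB blocks to write
    dd if=/dev/zero of="$1" bs=1024k count="$2"
}

# Prerequisites (not run here): `zpool destroy pool_A`, then format(1M)
# to label the disk with an s0 spanning the whole drive. Then, DESTRUCTIVELY:
#   run_raw_write /dev/rdsk/${DISK}s0 1000000 &
#   iostat -xnM "$DISK" 1        # watch the OS <-> drive interface
```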
Perhaps the tests below are useful for understanding the observation.

*dd test on slice 0*
dd if=/dev/zero of=/dev/rdsk/c1t5000C50035062EC1d0s0 bs=1024k

                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  155.4    0.0 159129.7  0.0  1.0    0.0    6.3   0  97 c1
    0.0  155.4    0.0 159129.7  0.0  1.0    0.0    6.3   0  97 c1t5000C50035062EC1d0  <== this is best case

*dd test on slice 6*
dd if=/dev/zero of=/dev/rdsk/c1t5000C50035062EC1d0s6 bs=1024k

                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0   21.4    0.0 21913.6  0.0  1.0    0.0   46.6   0 100 c1
    0.0   21.4    0.0 21913.6  0.0  1.0    0.0   46.6   0 100 c1t5000C50035062EC1d0  <== only 20+ MB/s !!!

*Partition table info*

Part      Tag    Flag     First Sector          Size          Last Sector
  0        usr    wm               256       100.00GB           209715455
  1 unassigned    wm                 0            0                0
  2 unassigned    wm                 0            0                0
  3 unassigned    wm                 0            0                0
  4 unassigned    wm                 0            0                0
  5 unassigned    wm                 0            0                0
  6        usr    wm        5650801295       100.00GB           5860516749
  8   reserved    wm        5860516751         8.00MB           5860533134

Referring to pg 18 of
http://www.seagate.com/staticfiles/support/docs/manual/enterprise/Constellation%203_5%20in/100628615f.pdf
The transfer rate is supposed to range from 68 to 155 MB/s. Why are the inner cylinders showing only 20+ MB/s? Am I testing or understanding this wrongly?
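Since the drive uses zoned recording, outer tracks hold more sectors per revolution than inner ones, so sequential media rate falls from the outer edge (slice 0, starting at sector 256) toward the inner edge (slice 6, starting at sector 5650801295). One way to check whether the drop is a smooth radial profile or a cliff is to sample throughput at several offsets; this is a sketch only (helper name and offsets are my own, and it is destructive to a real device):

```shell
#!/bin/sh
# Hypothetical sweep: sample write throughput at a few offsets to map the
# outer-to-inner transfer-rate profile. DESTRUCTIVE to a real device, so
# it also works against a scratch file for a dry run.
sample_offset() {
    # $1 = target (raw device or file), $2 = offset in MiB, $3 = MiB to write
    t0=$(date +%s)
    dd if=/dev/zero of="$1" bs=1024k seek="$2" count="$3" conv=notrunc 2>/dev/null
    dt=$(( $(date +%s) - t0 ))
    [ "$dt" -eq 0 ] && dt=1               # avoid divide-by-zero on short runs
    echo "offset ${2} MiB: $(( $3 / dt )) MB/s"
}

# Example against the whole-disk slice (s2), outer edge to inner edge:
#   for off in 0 700000 1400000 2100000 2800000; do
#       sample_offset /dev/rdsk/c1t5000C50035062EC1d0s2 "$off" 1024
#   done
```

If the profile matches the data sheet (roughly 155 MB/s outer down to 68 MB/s inner), the 20 MB/s figure on slice 6 would point at something other than the media rate.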



3. All 4 drives are connected to a single HBA, so I assume the mpt_sas
driver is used. Are SAS and SATA drives handled differently?
I assume there are (at least) four ports on the HBA? I assume this
from the c7, c8, c1, c2 device names. That means that the drives
should _not_ be affecting each other. As another poster mentioned, the
behavior of the interface chip may change based on which drives are
seeing I/O, but I doubt that would be this big of a factor.
Actually, I added the WD drives later purely for comparison; the behavior was first observed with only the SG SAS drives installed.
This is a test server, so any ideas to try that would help me understand
are greatly appreciated.
What do real benchmarks (iozone, filebench, orion) show?


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
