Re: [zfs-discuss] Sun T3-2 and ZFS on JBODS

2011-03-06 Thread Marion Hakanson
sigbj...@nixtra.com said:
 I will do some testing with loadbalance on/off. We have nearline-SAS disks,
 which do have dual paths from the disk, but they're still just 7200 RPM drives.

 Are you using SATA, SAS, or nearline-SAS in your array? Do you have multiple
 SAS connections to your arrays, or do you use a single connection per array?

We have four Dell MD1200's connected to three Solaris-10 systems.  Three
of the MD1200's have nearline-SAS 2TB 7200RPM drives, and one has SAS 300GB
15000RPM drives.  All the MD1200's are connected with dual SAS modules to
a dual-port HBA on their respective servers (one setup is with two MD1200's
daisy-chained, but again using dual SAS modules & cables).

Both types of drives suffer super-slow writes (but reasonable reads) when
loadbalance=roundrobin is in effect: e.g., 280 MB/sec sequential reads, but
only 28 MB/sec sequential writes, for the 15k RPM SAS drives I tested last week.
We don't see this extreme slowness on our dual-path Sun J4000 JBOD's, but
those all have SATA drives (with the dual-port interposers inside the
drive sleds).
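
For reference, the knob in question lives in /kernel/drv/scsi_vhci.conf
(followed by a reboot to take effect); from memory it's something like:

    # default is "round-robin"; "none" avoids the slow writes for us
    load-balance="none";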

Regards,

Marion




Re: [zfs-discuss] Sun T3-2 and ZFS on JBODS

2011-03-03 Thread Sigbjorn Lie
Hi,

This turned out to be a scheduler issue. The system was still running the
default TS scheduler. By switching to the FSS scheduler the performance was
back to what it was before the system was reinstalled.

When using the TS scheduler the writes would not spread evenly across the
drives. We have 60 drives in total in the pool, and a few of the drives would
peak in write throughput while the other disks were almost idle. After changing
to the FSS scheduler, all writes were evenly distributed across the drives.

This only affects the user processes generating the load, as the zpool
processes are still using the SDC scheduler introduced in S10 U9.
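
For anyone else hitting this, the switch itself is roughly as follows (from
memory, so double-check the man pages for your update level):

    dispadmin -d FSS                 (make FSS the default class at next boot)
    priocntl -s -c FSS -i class TS   (move running TS processes into FSS now)
    ps -e -o class | sort | uniq -c  (quick check of which classes are in use)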

I will do some testing with loadbalance on/off. We have nearline-SAS disks,
which do have dual paths from the disk, but they're still just 7200 RPM drives.

Are you using SATA, SAS, or nearline-SAS in your array? Do you have multiple
SAS connections to your arrays, or do you use a single connection per array?


Rgds,
Siggi


On Wed, March 2, 2011 18:19, Marion Hakanson wrote:
 sigbj...@nixtra.com said:
 I've played around with turning mpxio on and off on the mpt_sas driver;
 disabling it increased the performance from 30 MB/sec, but it's still far
 from the original performance. I've attached some dumps of zpool iostat
 before and after reinstallation.

 I find zpool iostat is less useful in telling what the drives are doing than
 iostat -xn 1.  In particular, the latter will give you an idea of how many
 operations are queued per drive, and how long it's taking the drives to
 handle those operations, etc.

 On our Solaris-10 systems (U8 and U9), if mpxio is enabled, you really want
 to set loadbalance=none.  The default (round-robin) makes some of our JBOD's
 (Dell MD1200) go really slow for writes.  I see you have tried with mpxio
 disabled, so your issue may be different.

 You don't say what you're doing to generate your test workload, but there
 are some workloads which will speed up a lot if the ZIL is disabled.  Maybe
 that or some other /etc/system tweaks were in place on the original system.
 Also use format -e and its write_cache commands to see if the drives'
 write caches are enabled or not.

 Regards,


 Marion








[zfs-discuss] Sun T3-2 and ZFS on JBODS

2011-03-02 Thread Sigbjorn Lie
Hi,

We have purchased a new Sun (Oracle) T3-2 machine and 5 shelves of 12 x 2TB
SAS2 JBOD disks for our new backup server. Each shelf is connected via a
single SAS cable to a separate SAS controller.

When the system arrived it had Solaris 10 U9 preinstalled. We tested ZFS
performance and got roughly 1.3 GB/sec when we configured one RAIDZ2 per 12
disks, and roughly 1.7 GB/sec when we configured one RAIDZ1 per 6 disks.
Good performance.
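
For reference, the layout we tested was roughly one 12-disk RAIDZ2 vdev per
shelf, i.e. something like the following (the controller/target numbers here
are made up; ours differ):

    zpool create pool0 \
        raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
               c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 \
        raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
               c2t6d0 c2t7d0 c2t8d0 c2t9d0 c2t10d0 c2t11d0
    (and so on for the remaining three shelves)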

Then I noticed that some packages were missing from the installation, so I
decided to jumpstart the server to get it installed the same way as every
other Solaris server we have. Since it was reinstalled I get 100-300 MB/sec,
and very sporadic writes.

I've played around with turning mpxio on and off on the mpt_sas driver;
disabling it increased the performance from 30 MB/sec, but it's still far
from the original performance. I've attached some dumps of zpool iostat
before and after reinstallation.

Any suggestions as to what settings might cause this, or what to try to
increase the performance?


Regards,
Siggi



Before:
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
pool0       51.5G   109T     93  10.4K   275K  1.27G
pool0       59.8G   109T     92  10.8K   274K  1.32G
pool0       68.1G   109T     92  10.6K   274K  1.30G
pool0       76.5G   109T     92  11.4K   274K  1.39G
pool0       85.0G   109T     92  10.2K   274K  1.25G
pool0       93.6G   109T     92  11.2K   274K  1.37G
pool0       93.6G   109T      0  9.77K      0  1.20G
pool0        102G   109T      1  10.6K  5.99K  1.30G
pool0        111G   109T      0  11.5K      0  1.41G
pool0        119G   109T      1  10.7K  5.99K  1.31G
pool0        127G   109T     89  11.1K   268K  1.36G
pool0        136G   109T      1  11.8K  5.99K  1.44G



After:
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
pool0       30.8G   109T      0    297      0  36.2M
pool0       30.8G   109T      0  2.85K      0   358M
pool0       33.7G   109T      0    760      0  91.5M
pool0       33.7G   109T      0      0      0      0
pool0       33.7G   109T      0      0      0      0
pool0       33.7G   109T      0      0      0      0
pool0       33.7G   109T      0    954      0   117M
pool0       33.7G   109T      0  2.03K      0   255M
pool0       36.2G   109T      0    358      0  42.2M
pool0       36.2G   109T      0      0      0      0
pool0       36.2G   109T      0    858      0   105M
pool0       36.2G   109T      0  1.28K      0   160M
pool0       38.4G   109T      0    890      0   107M
pool0       38.4G   109T      0      0      0      0
pool0       38.4G   109T      0      0      0      0
pool0       38.4G   109T      0  1.57K      0   197M
pool0       40.3G   109T      0    850      0   103M
pool0       40.3G   109T      0      0      0      0
pool0       40.3G   109T      0      0      0      0
pool0       40.3G   109T      0      0      0      0
pool0       40.3G   109T      0  2.58K      0   320M
pool0       42.2G   109T      0     12      0   102K




Re: [zfs-discuss] Sun T3-2 and ZFS on JBODS

2011-03-02 Thread Marion Hakanson
sigbj...@nixtra.com said:
 I've played around with turning mpxio on and off on the mpt_sas driver;
 disabling it increased the performance from 30 MB/sec, but it's still far from
 the original performance. I've attached some dumps of zpool iostat before and
 after reinstallation.

I find zpool iostat is less useful in telling what the drives are
doing than iostat -xn 1.  In particular, the latter will give you an
idea of how many operations are queued per drive, and how long it's taking
the drives to handle those operations, etc.
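
Something along these lines, watching the queue and service-time columns in
particular:

    iostat -xn 1
    (wait/actv show how many I/Os are queued per device, wsvc_t/asvc_t show
    service times in milliseconds, and %w/%b show how busy each device is)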

On our Solaris-10 systems (U8 and U9), if mpxio is enabled, you really
want to set loadbalance=none.  The default (round-robin) makes some of
our JBOD's (Dell MD1200) go really slow for writes.  I see you have tried
with mpxio disabled, so your issue may be different.
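
If you want to confirm which policy is actually in effect on a LUN, mpathadm
will show it (the device name below is just a placeholder):

    mpathadm list lu
    mpathadm show lu /dev/rdsk/cXtYdZs2
    (look for the "Current Load Balance" line in the output)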

You don't say what you're doing to generate your test workload, but there
are some workloads which will speed up a lot if the ZIL is disabled.  Maybe
that or some other /etc/system tweaks were in place on the original system.
Also use format -e and its write_cache commands to see if the drives'
write caches are enabled or not.
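
For completeness, the quick-and-dirty versions of those checks are roughly
these (the ZIL tunable is for testing only; never leave it set on anything
that cares about synchronous write semantics):

    in /etc/system, followed by a reboot:
        set zfs:zil_disable = 1

    to inspect a drive's write cache:
        format -e
        (select the disk, then run "cache", "write_cache", "display")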

Regards,

Marion

