Re: [zfs-discuss] ZFS performance falls off a cliff

2011-05-13 Thread Don
~# uname -a
SunOS nas01a 5.11 oi_147 i86pc i386 i86pc Solaris

~# zfs get version pool0
NAME   PROPERTY  VALUE    SOURCE
pool0  version   5        -

~# zpool get version pool0
NAME   PROPERTY  VALUE    SOURCE
pool0  version   28       default
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Extremely slow zpool scrub performance

2011-05-13 Thread Donald Stahl
Running a zpool scrub on our production pool is showing a scrub rate
of about 400K/s. (When this pool was first set up we saw rates in the
MB/s range during a scrub).
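To put that rate in perspective, a rough back-of-envelope (assuming, purely hypothetically, 1 TiB of allocated data; only the 400K/s figure comes from the report above):

```shell
# Back-of-envelope: time to scrub 1 TiB of allocated data at 400 KiB/s.
# The 1 TiB figure is hypothetical; the rate is the one reported above.
BYTES=$((1024 * 1024 * 1024 * 1024))   # 1 TiB of allocated data
RATE=$((400 * 1024))                   # 400 KiB/s scrub rate
SECS=$((BYTES / RATE))
echo "$((SECS / 86400)) days"          # roughly a month per TiB at this rate
```

On a 96-spindle pool that would make a full scrub effectively unfinishable, which is why a healthy rate in the MB/s-to-hundreds-of-MB/s range matters.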

Both zpool iostat and iostat -xn show plenty of idle disk time, no
above-average service times, and no abnormally high busy percentages.

Load on the box is 0.59.

8 x 3 GHz cores, 32 GB RAM, 96 spindles arranged into raidz vdevs on OI 147.

Known hardware errors:
- 1 of 8 SAS lanes is down, though we've seen the same poor
performance when using the backup, where all 8 lanes work.
- Target 44 occasionally throws an error (less than once a week). When
this happens the pool becomes unresponsive for a second, then
continues working normally.

Read performance when reading off the file system (including cache,
using dd with a 1 MB block size) is 1.6 GB/sec. zpool iostat shows
numerous reads of 500 MB/s during this test.
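A minimal sketch of that kind of cached-read test (GNU dd syntax; the path and sizes are placeholders, and in practice you would use a file far larger than RAM-resident cache if you wanted uncached numbers):

```shell
#!/bin/sh
# Write a test file, then re-read it with a 1 MB block size; the second pass
# is served largely from the ARC. Watch the rate dd reports, and run
# `zpool iostat 1` in another terminal to see what actually hits the disks.
TESTFILE=${TESTFILE:-/tmp/zfs_read_test}
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 2>/dev/null
dd if="$TESTFILE" of=/dev/null bs=1M 2>/dev/null
rm -f "$TESTFILE"
```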

I'm willing to consider that hardware could be the culprit here, but I
would expect to see signs if that were the case. The lack of any slow
service times and the absence of any sign of heavy disk I/O both seem
to point elsewhere.

I will provide any additional information people might find helpful
and will, if possible, test any suggestions.

Thanks in advance,
-Don


[zfs-discuss] sun (oracle) 7110 zfs low performance with high latency and high disc util.

2011-05-13 Thread Denes Dolhay
Hello!
Our company has two Sun 7110s with the following configuration:

Primary:
7110 with 2 quad-core 1.9 GHz HE Opterons and 32 GB RAM
16 2.5" 10K rpm SAS discs (2 system, 1 spare)
A pool is configured from the rest, so we have 13 active working discs in
raidz-2 (called main).
A Sun J4200 JBOD is connected to this device with 12x750 GB discs;
with 1 spare and 11 active discs, another pool is configured (called JBOD).

Backup:
7110 (converted from an x4240) with 2 quad-core 1.9 GHz HE Opterons and 8 GB RAM
16 2.5" 10K rpm SAS discs (2 system, 1 spare)
A pool is configured from the rest, so we have 13 active working discs in
raidz-2 (called main).
A Promise Vess JBOD is connected to this device with 12x1 TB discs;
with 1 spare and 11 active discs, another pool is configured (called JBOD).

The two storages are linked by periodic (1-hour) replication.
All the discs and other hardware are working properly.
The ZIL is turned off; the system is set to async.
The firmware version is FishWorks 2010.Q3.2.0.
The purpose of the system is to provide NFSv3 shares for our mini-cloud.
The system is mission critical.

Our problem is that both devices experience low performance and high latency
(up to 1.5 sec on some operations).
Due to heavy caching, the main storage's total input+output bandwidth is about
5 MB/sec at ~2000 ops/sec over NFSv3 (1950 metadata cache hits/sec, 350 data
hits/sec, 50 data misses/sec).
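For what it's worth, the counters above imply the cache is absorbing almost all of the load; a quick check of the hit ratio from those reported figures:

```shell
# Cache hit ratio from the reported per-second counters:
# 1950 metadata hits + 350 data hits versus 50 data misses.
awk 'BEGIN { hits = 1950 + 350; miss = 50; printf "%.1f%%\n", 100 * hits / (hits + miss) }'
# prints 97.9%
```

A ~98% hit rate with the discs nevertheless pegged suggests the utilization is coming from something other than the NFS data path, which makes the symptom described next all the stranger.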

The very strange thing is:
we see very high utilization percentages on every disc, due to
~3200 IOPS on the discs (60 IOPS per data disc) with 0 size.

If we initiate a sequential read or write from one of the NFS clients, we get
8-15 MB/sec performance from the system.

I'd like to know why it is doing this, how an I/O can be 0 length, and what
we can do about it.
Thank you for any help; we really need to solve this.
Thank you for any help, we really need to solve this.


Re: [zfs-discuss] ZFS performance falls off a cliff

2011-05-13 Thread Aleksandr Levchuk
sirket, could you please share your OS, zfs, and zpool versions?