Re: [zfs-discuss] ZFS issue on read performance

2011-10-13 Thread degger
Hi,

Thanks for your help.
I won't be able to install pv on our backup server, as it is in production (and we don't have a test environment).
How do you use the zfs send command? What is a snapshot from a ZFS point of view?
We use the ZFS volume for storing data only, as a UFS or VxFS volume would, and we don't create snapshots on/from it:
df -h :
Filesystem size used avail capacity Mounted on
zvol01/vls 7.3T 3.4T 4.0T 46% /vls

ll
total 245946205
drwxr-x--- 3 root other 9 Oct 11 14:39 .
drwxr-xr-x 4 root root 4 Jul 27 18:52 ..
-rw-r- 1 root other 20971406848 Sep 29 08:58 TLVLS2C04_32640
-rw-r- 1 root other 20971406848 Jul 12 17:47 TLVLS2C06_7405
-rw-r- 1 root other 20971406848 Jul 13 07:36 TLVLS2C06_7406
-rw-r- 1 root other 20971406848 Jul 13 08:31 TLVLS2C06_7407
-rw-r- 1 root other 20971406848 Jul 13 09:06 TLVLS2C06_7408
-rw-r- 1 root other 20971406848 Jul 13 09:26 TLVLS2C06_7409

- The TLVLS2C06_xxx files are virtual cartridges created by our backup software (Time Navigator), containing data from servers backed up over the network. Reading /vls to write those files onto LTO-3 tapes (or copying them to another volume, a UFS one for testing) is slow.
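
Coming back to zfs send: from the zfs(1M) man page, my understanding is that it operates on snapshots rather than on the mounted filesystem, so I suppose it would be used roughly like this (the snapshot name and the target pool below are only hypothetical, we don't do anything like this today):

# take a point-in-time snapshot of the dataset
zfs snapshot zvol01/vls@copy1
# stream that snapshot to another pool (or pipe it over ssh to another host)
zfs send zvol01/vls@copy1 | zfs receive otherpool/vls-copy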

Here is the output of the zpool status command:
  pool: zvol01
 state: ONLINE
 scrub: none requested
config:

        NAME                              STATE     READ WRITE CKSUM
        zvol01                            ONLINE       0     0     0
          c10t600508B40001350A0001804Dd0  ONLINE       0     0     0
          c10t600508B40001350A0001805Bd0  ONLINE       0     0     0
          c10t600508B40001350A00018050d0  ONLINE       0     0     0
          c10t600508B40001350A00018053d0  ONLINE       0     0     0

errors: No known data errors

At this time, nothing is being written to the /vls volume, since no data is being backed up by our software; we are only reading from disk to write to LTO-3 tapes.
Here are some stats:

root@tlbkup02:/etc# zpool iostat 1
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zvol01 3.35T 4.09T 103 120 12.9M 13.3M
zvol01 3.35T 4.09T 213 0 26.6M 0
zvol01 3.35T 4.09T 181 0 22.6M 0
zvol01 3.35T 4.09T 135 0 16.9M 0
zvol01 3.35T 4.09T 183 0 22.8M 0
zvol01 3.35T 4.09T 204 0 25.5M 0
zvol01 3.35T 4.09T 158 39 19.5M 89.3K
zvol01 3.35T 4.09T 227 0 28.4M 0
zvol01 3.35T 4.09T 264 0 29.5M 0
zvol01 3.35T 4.09T 292 436 33.7M 2.26M
zvol01 3.35T 4.09T 200 0 25.0M 0
zvol01 3.35T 4.09T 193 0 24.0M 0
zvol01 3.35T 4.09T 187 0 23.4M 0
zvol01 3.35T 4.09T 249 0 31.0M 0
zvol01 3.35T 4.09T 240 0 29.9M 0
zvol01 3.35T 4.09T 222 0 27.8M 0
zvol01 3.35T 4.09T 194 0 24.3M 0
zvol01 3.35T 4.09T 236 0 29.4M 0
zvol01 3.35T 4.09T 230 0 28.7M 0
zvol01 3.35T 4.09T 188 0 23.3M 0
zvol01 3.35T 4.09T 249 0 31.1M 0
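
For what it's worth, even without pv I can time a sequential read of one of the ~20 GB cartridge files with dd to get a rough single-stream figure (the block size and file name below are just an example):

# read one virtual cartridge sequentially and discard the data
time dd if=/vls/TLVLS2C06_7405 of=/dev/null bs=1024k

That should at least show whether a single sequential reader gets much more than the 20-30 MB/s visible in zpool iostat above.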
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS issue on read performance

2011-10-11 Thread degger

Hi,

I'm not familiar with ZFS, so I'll try to give you as much information as I can about our environment.
We are using a ZFS pool as a VLS for a backup server (Sun V445, Solaris 10), and we are seeing very low read performance, while write performance is much better: up to 40 GB/h when migrating data from disk to LTO-3 tape, versus up to 100 GB/h when unstaging data from LTO-3 tape to disk, either with the Time Navigator 4.2 software or directly with dd commands.
We have tuned the ZFS ARC parameters and disabled prefetch, but performance is still poor. If we dd from disk to RAM or to tape, it's very slow, but if we dd from tape or RAM to disk, it's faster. I can't figure out why. I've read other posts related to this, but I'm not sure what kind of tuning can be done.
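For reference, I believe the prefetch/ARC tuning was done through /etc/system entries along these lines (the values shown are only an example; I would have to check the actual settings on the server):

* disable ZFS file-level prefetch
set zfs:zfs_prefetch_disable = 1
* cap the maximum ARC size (example value: 4 GB)
set zfs:zfs_arc_max = 0x100000000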
As for the disks, I have no idea how our System team created the ZFS pool.
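If it helps, I can collect the pool and dataset configuration with standard commands such as these and post the output:

zpool status zvol01
zpool history zvol01     # should show how the pool was created
zfs get all zvol01/vls   # recordsize, compression, etc.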
Can you help?

Thank you

David
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss