I have a couple of performance questions.

Right now, I am transferring about 200GB of data via NFS to my new Solaris
server. I started this YESTERDAY and it is still running. When writing to my
ZFS pool over NFS, I see what I believe are slow write speeds. My client
hosts range from a MacBook Pro running Tiger to a FreeBSD 6.2 Intel server,
and all clients are connected to the same 10/100/1000 switch.
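
To put a number on the NFS write speed itself, I can run a plain dd from
a client over the mount; something like this (the /mnt/data mount point
is just an example of wherever the export is mounted on the client):

# on the FreeBSD client: write 100MB over the NFS mount
dd if=/dev/zero of=/mnt/data/nfstest bs=1024k count=100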

* Is there anything I can tune on my server?
* Is the problem with NFS? (see the nfsstat sketch after this list)
* Do I need to provide any other information?
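
On the NFS question: I can grab server-side stats while the transfer is
running; as far as I know nfsstat ships with Solaris, so something like
this should show the per-operation counts:

bash-3.00# nfsstat -s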


PERFORMANCE NUMBERS:

(The file transfer was still running when I captured these; writes hover
around 1.4MB/s.)

bash-3.00# zpool iostat 5
              capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank         140G  1.50T     13     91  1.45M  2.60M
tank         140G  1.50T      0     89      0  1.42M
tank         140G  1.50T      0     89  1.40K  1.40M
tank         140G  1.50T      0     94      0  1.46M
tank         140G  1.50T      0     85  1.50K  1.35M
tank         140G  1.50T      0    101      0  1.47M
tank         140G  1.50T      0     90      0  1.35M
tank         140G  1.50T      0     84      0  1.37M
tank         140G  1.50T      0     90      0  1.39M
tank         140G  1.50T      0     90      0  1.43M
tank         140G  1.50T      0     91      0  1.40M
tank         140G  1.50T      0     91      0  1.43M
tank         140G  1.50T      0     90  1.60K  1.39M

bash-3.00# zpool iostat -v
              capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank         141G  1.50T     13     91  1.45M  2.59M
 raidz1    70.3G   768G      6     45   793K  1.30M
   c3d0        -      -      3     43   357K   721K
   c4d0        -      -      3     42   404K   665K
   c6d0        -      -      3     43   404K   665K
 raidz1    70.2G   768G      6     45   692K  1.30M
   c3d1        -      -      3     42   354K   665K
   c4d1        -      -      3     42   354K   665K
   c5d0        -      -      3     43   354K   665K
----------  -----  -----  -----  -----  -----  -----

For comparison, I also timed a local write to the pool:

bash-3.00# time dd if=/dev/zero of=/data/testfile bs=1024k count=1000
1000+0 records in
1000+0 records out

real    0m16.490s
user    0m0.012s
sys     0m2.547s
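
That works out to roughly 1000MB / 16.5s = ~62MB/s locally, versus the
~1.4MB/s the pool sees during the NFS copy. One caveat I am aware of:
dd can return before the data is actually on disk, so a fairer local
number would include a sync at the end, something like:

bash-3.00# time sh -c 'dd if=/dev/zero of=/data/testfile bs=1024k count=1000 && sync'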


SERVER INFORMATION:

Solaris 10 U3
Intel Pentium 4 3.0GHz
2GB RAM
Intel NIC (e1000g0)
1x 80GB ATA drive for OS
6x 300GB SATA drives for /data
 c3d0 - Sil3112 PCI SATA card port 1
 c3d1 - Sil3112 PCI SATA card port 2
 c4d0 - Sil3112 PCI SATA card port 3
 c4d1 - Sil3112 PCI SATA card port 4
 c5d0 - Onboard Intel SATA
 c6d0 - Onboard Intel SATA
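
I still need to double-check that the NIC actually negotiated gigabit;
if I remember the tool correctly, something like this should show the
link speed and duplex:

bash-3.00# dladm show-dev e1000g0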


DISK INFORMATION:

bash-3.00# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
      0. c1d0 <DEFAULT cyl 9961 alt 2 hd 255 sec 63>
      1. c3d0 <Maxtor 6-XXXXXXX-0001-279.48GB>
      2. c3d1 <Maxtor 6-XXXXXXX-0001-279.48GB>
      3. c4d0 <Maxtor 6-XXXXXXX-0001-279.48GB>
      4. c4d1 <Maxtor 6-XXXXXXX-0001-279.48GB>
      5. c5d0 <Maxtor 6-XXXXXXX-0001-279.48GB>
      6. c6d0 <Maxtor 6-XXXXXXX-0001-279.48GB>
Specify disk (enter its number): ^C
(XXXXXXX = drive serial number; device paths trimmed)


ZPOOL CONFIGURATION:

bash-3.00# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
tank                   1.64T    140G   1.50T     8%  ONLINE     -

bash-3.00# zpool status
 pool: tank
state: ONLINE
scrub: scrub completed with 0 errors on Tue Jun 19 07:33:05 2007
config:

       NAME        STATE     READ WRITE CKSUM
       tank        ONLINE       0     0     0
         raidz1    ONLINE       0     0     0
           c3d0    ONLINE       0     0     0
           c4d0    ONLINE       0     0     0
           c6d0    ONLINE       0     0     0
         raidz1    ONLINE       0     0     0
           c3d1    ONLINE       0     0     0
           c4d1    ONLINE       0     0     0
           c5d0    ONLINE       0     0     0

errors: No known data errors


ZFS Configuration:

bash-3.00# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
tank                  93.3G  1006G  32.6K  /tank
tank/data             93.3G  1006G  93.3G  /data
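
If any dataset properties are relevant, I can post those as well; I'd
pull the usual suspects with something like:

bash-3.00# zfs get recordsize,compression,atime,sharenfs tank/data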