I'm experiencing a bizarre write-performance problem on one of my ZFS 
filesystems. Here are the relevant facts:

[b]# zpool list[/b]
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
mtdc                   3.27T    502G   2.78T    14%  ONLINE     -
zfspool                68.5G   30.8G   37.7G    44%  ONLINE     -

[b]# zfs list[/b]
NAME                   USED  AVAIL  REFER  MOUNTPOINT
mtdc                   503G  2.73T  24.5K  /mtdc
mtdc/sasmeta           397M   627M   397M  /sasmeta
mtdc/u001             30.5G   226G  30.5G  /u001
mtdc/u002             29.5G   227G  29.5G  /u002
mtdc/u003             29.5G   226G  29.5G  /u003
mtdc/u004             28.4G   228G  28.4G  /u004
mtdc/u005             28.3G   228G  28.3G  /u005
mtdc/u006             29.8G   226G  29.8G  /u006
mtdc/u007             30.1G   226G  30.1G  /u007
mtdc/u008             30.6G   225G  30.6G  /u008
mtdc/u099              266G   502G   266G  /u099
zfspool               30.8G  36.6G  24.5K  /zfspool
zfspool/apps          30.8G  33.2G  28.5G  /apps
zfspool/[EMAIL PROTECTED]  2.28G      -  29.8G  -
zfspool/home          15.4M  2.98G  15.4M  /home

[b]# zfs get all mtdc/u099[/b]
NAME             PROPERTY       VALUE                      SOURCE
mtdc/u099        type           filesystem                 -
mtdc/u099        creation       Thu Aug 17 10:21 2006      -
mtdc/u099        used           267G                       -
mtdc/u099        available      501G                       -
mtdc/u099        referenced     267G                       -
mtdc/u099        compressratio  3.10x                      -
mtdc/u099        mounted        yes                        -
mtdc/u099        quota          768G                       local
mtdc/u099        reservation    none                       default
mtdc/u099        recordsize     128K                       default
mtdc/u099        mountpoint     /u099                      local
mtdc/u099        sharenfs       off                        default
mtdc/u099        checksum       on                         default
mtdc/u099        compression    on                         local
mtdc/u099        atime          off                        local
mtdc/u099        devices        on                         default
mtdc/u099        exec           on                         default
mtdc/u099        setuid         on                         default
mtdc/u099        readonly       off                        default
mtdc/u099        zoned          off                        default
mtdc/u099        snapdir        hidden                     default
mtdc/u099        aclmode        groupmask                  default
mtdc/u099        aclinherit     secure                     default
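
The only properties set locally on this dataset are quota, mountpoint, 
compression and atime, and the 3.10x compressratio shows compression is doing 
real work here. One experiment I'm considering (just a guess on my part, I 
haven't tried it yet) is to disable compression temporarily, retry the save, 
and then turn it back on:

# zfs set compression=off mtdc/u099
  (retry the save to /u099 and time it)
# zfs set compression=on mtdc/u099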

[b]No error messages are reported by zpool or in /var/adm/messages.[/b] When I 
try to save a file, the operation takes an inordinate amount of time, in the 
30+ second range! I trussed the vi session to see where it hangs, and it sits 
in the write system call.

[b]# truss -p <pid>[/b]
read(0, 0xFFBFD0AF, 1)          (sleeping...)
read(0, " w", 1)                                = 1
write(1, " w", 1)                               = 1
read(0, " q", 1)                                = 1
write(1, " q", 1)                               = 1
read(0, 0xFFBFD00F, 1)          (sleeping...)
read(0, "\r", 1)                                = 1
ioctl(0, I_STR, 0x000579F8)                     Err#22 EINVAL
write(1, "\r", 1)                               = 1
write(1, " " d e l e t e m e "", 10)            = 10
stat64("deleteme", 0xFFBFCFA0)                  = 0
creat("deleteme", 0666)                         = 4
ioctl(2, TCSETSW, 0x00060C10)                   = 0
[b]write(4, " l f f j d\n", 6)                     = 6[/b] <---- still waiting 
on this write while I type this message!
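
To rule out vi itself, I figure I can try to reproduce the stall with a plain 
write from the shell, comparing /u099 against one of the healthy filesystems 
(the file names below are just placeholders, I haven't captured timings yet):

# ptime mkfile 1m /u099/deleteme2
# ptime mkfile 1m /u001/deleteme2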

This problem manifests itself only on this filesystem and not on the other ZFS 
filesystems built from the same pool on the same server. While I was waiting 
for the write above to complete, I was able to start a new vi session in 
another window and save a file to the /u001 filesystem without any problem. 
System load is very low. Can anybody comment on this bizarre behavior?
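
If it would help with diagnosis, I can also capture pool- and device-level 
activity while one of these stalled writes is in flight, along the lines of:

# zpool iostat -v mtdc 5
# iostat -xnz 5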
 
 