Thanks Richard,

We use a NetApp filer for the standby database. Write performance looks fine,
since recovery can keep up (probably because the write-anywhere layout is
optimized for writes). But reads are very slow, no matter which datafile. The
one file whose read performance we really care about is the control file,
because we need to read it periodically to know the status of the database.
The other files only need to be read when we have to copy them back to the
primary database after a primary failure, and in that case we can stop
recovery to reduce the disk contention.
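
For context, the periodic status check is essentially a tiny query against
the control file. The sketch below is only illustrative (our real monitoring
script polls a few more columns), but since V$DATABASE is backed by the
control file, every such poll means a read of the NFS-mounted control file:

# rough sketch of the periodic standby status check (illustrative only)
sqlplus -s "/ as sysdba" <<'EOF'
set heading off feedback off
select controlfile_type, checkpoint_change# from v$database;
exit
EOF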

I understand that with such long queues the reads from disk are bound to be
slow. What I can't understand is why Solaris doesn't cache the data in the
file system cache. As shown in the earlier post, free memory is about 25G; if
Solaris used all of it to cache the data, we wouldn't be sending so many I/O
requests to the NetApp filer, because the data could be served straight from
the local file system cache.
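
In case it helps, these are the standard Solaris 10 checks I'm planning to
run to see where that 25G actually sits (free list vs. page cache vs.
kernel); I haven't pasted their output here yet:

# kernel-level memory breakdown (free, kernel, anon, page cache, ...)
echo ::memstat | mdb -k

# raw page counters behind the meminfo numbers
kstat -p unix:0:system_pages

# paging/scan activity while the slow reads are running
vmstat 5 5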

TEST7-stdby-phxdbnfs11$> uname -a
SunOS phxdbnfs11 5.10 Generic_141414-07 sun4v sparc
SUNW,SPARC-Enterprise-T5120

TEST7-stdby-phxdbnfs11$> cat /etc/vfstab|grep phxdbfiler
phxdbfiler03-A1:/vol/ora11 - /export/home/oracle nfs 2 yes
rw,bg,vers=3,proto=tcp,hard,intr,rsize=65535,wsize=65535
phxdbfiler03-A1:/vol/com11 - /com nfs 2 yes
rw,bg,vers=3,proto=tcp,hard,intr,rsize=65535,wsize=65535
phxdbfiler03-A1:/vol/test01sys - /oracle/TEST1/sys nfs 2 yes
rw,bg,vers=3,proto=tcp,hard,intr,rsize=65535,wsize=65535
phxdbfiler04-A1:/vol/test01arc - /oracle/TEST1/archive nfs 2 yes
rw,bg,vers=3,proto=tcp,hard,intr,rsize=65535,wsize=65535
phxdbfiler03-A1:/vol/test01data01 - /oracle/TEST1/data01 nfs 2 yes
rw,bg,vers=3,proto=tcp,hard,intr,rsize=65535,wsize=65535
phxdbfiler03-A1:/vol/test02sys - /oracle/TEST2/sys nfs 2 yes
rw,bg,vers=3,proto=tcp,hard,intr,rsize=65535,wsize=65535
phxdbfiler04-A1:/vol/test02arc - /oracle/TEST2/archive nfs 2 yes
rw,bg,vers=3,proto=tcp,hard,intr,rsize=65535,wsize=65535
phxdbfiler03-A1:/vol/test02data01 - /oracle/TEST2/data01 nfs 2 yes
rw,bg,vers=3,proto=tcp,hard,intr,rsize=65535,wsize=65535
phxdbfiler03-A1:/vol/test07sys - /oracle/TEST7/sys nfs 2 yes
rw,bg,vers=3,proto=tcp,hard,intr,rsize=65535,wsize=65535
phxdbfiler04-A1:/vol/test07arc - /oracle/TEST7/archive nfs 2 yes
rw,bg,vers=3,proto=tcp,hard,intr,rsize=65535,wsize=65535
phxdbfiler03-A1:/vol/test07data01 - /oracle/TEST7/data01 nfs 2 yes
rw,bg,vers=3,proto=tcp,hard,intr,rsize=65535,wsize=65535
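
To confirm that the options above are what the NFS client actually negotiated
(effective rsize/wsize, timeouts, and so on), I can also dump the active
mount parameters; that output is not included here:

# show the active NFS mount parameters as negotiated by the client
nfsstat -m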



TEST7-stdby-phxdbnfs11$> sar -d 1 10|grep nfs
SunOS phxdbnfs11 5.10 Generic_141414-07 sun4v    04/11/2010
18:08:30   nfs23             0     0.0       0       0     0.0     0.0
           nfs24           100 1398976.5     234   14946 5978533.5   102.4
           nfs25             0     0.0       1      16     0.0     0.4
           nfs26             0     0.0       1      16     0.0     0.5
           nfs27             0     0.0       1      16     0.0     0.8
           nfs28           100 51371.2     344   22044 149073.0    69.5
           nfs29           100 32211.3     289   18555 111289.8    82.8
           nfs30             0     0.0       1      16     0.0     0.7
           nfs31             0     0.0       0       0     0.0     0.0
           nfs32             0     0.0       1      16     0.0     0.7
           nfs33             0     0.0       1      16     0.0     0.6
18:08:32   nfs23             0     0.0       0       0     0.0     0.0
           nfs24           100 1398762.2     112    7185 12459108.5   213.6
           nfs25             0     0.0       0       0     0.0     0.0
           nfs26             0     0.0       0       0     0.0     0.0
           nfs27             0     0.0       0       0     0.0     0.0
           nfs28           100 51190.7     108    6781 475578.0   222.9
           nfs29           100 32122.8     137    8667 235000.2   175.6
           nfs30             0     0.0       0       0     0.0     0.0
           nfs31             0     0.0       0       0     0.0     0.0
           nfs32             0     0.0       0       0     0.0     0.0
           nfs33             0     0.0       0       0     0.0     0.0
18:08:33   nfs23             0     0.0       0       0     0.0     0.0
           nfs24           100 1398617.3     312   19981 4486553.7    76.8
           nfs25             0     0.0       0       0     0.0     0.0
           nfs26             0     0.0       0       0     0.0     0.0
           nfs27             0     0.0       0       0     0.0     0.0
           nfs28           100 51027.9     129    8268 394811.7   185.6
           nfs29           100 31962.5     170   10649 187460.1   140.7
           nfs30             0     0.0       0       0     0.0     0.0
           nfs31             0     0.0       0       0     0.0     0.0
           nfs32             0     0.0       0       0     0.0     0.0
           nfs33             0     0.0       0       0     0.0     0.0
18:08:34   nfs23             0     0.0       0       0     0.0     0.0
           nfs24           100 1398497.6     191   12217 7325920.0   125.6
           nfs25             0     0.0       1      16     0.0     0.5
           nfs26             0     0.0       1      16     0.0     0.5
           nfs27             0     0.0       1      16     0.0     0.5
           nfs28           100 50931.4     266   17008 191558.5    90.2
           nfs29           100 31875.2     194   12457 164435.0   123.7
           nfs30             0     0.0       1      16     0.0     0.6
           nfs31             0     0.0       0       0     0.0     0.0
           nfs32             0     0.0       1      16     0.0     0.9
           nfs33             0     0.0       1      16     0.0     1.3
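
To cross-check the sar numbers with iostat as suggested below, I'll also
capture something like this (no output pasted yet):

# extended per-device statistics, with device names, skipping idle devices
iostat -xnz 5 5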

Thanks,


On Sun, Apr 11, 2010 at 11:30 PM, Richard Elling
<richard.ell...@gmail.com> wrote:

> On Apr 9, 2010, at 8:18 PM, Daniel, Wu wrote:
> > We have a NFS data volume, reading data from it is so slow(10 minutes to
> copy 512M). But since we have 25G free memory, why solaris 10 doesn't cache
> the data in the file system cache?
>
> It does.
>
> > If the free memory is nearly 0, and the read is bad, that means we reach
> the limit of the system. But since we have so much free memory, the system
> is not optimized, right? What do I need to look into?
>
> iostat will show the latency and queues for I/O to the storage device.
>  -- richard
>
> > $> meminfo
> > RAM  _____Total 65408.0 Mb
> > RAM    Unusable  1425.5 Mb
> > RAM      Kernel  3862.5 Mb
> > RAM      Locked 33728.0 Mb
> > RAM        Used  1174.2 Mb
> > RAM       Avail 25217.9 Mb
> >
> > Disk _____Total 49150.9 Mb
> > Disk      Alloc     0.0 Mb
> > Disk       Free 49150.9 Mb
> >
> > Swap _____Total 102453.1 Mb
> > Swap      Alloc 36413.6 Mb
> > Swap    Unalloc   186.4 Mb
> > Swap      Avail 65853.2 Mb
> > Swap  (MinFree)  8027.9 Mb
> > --
> > This message posted from opensolaris.org
> > _______________________________________________
> > perf-discuss mailing list
> > perf-discuss@opensolaris.org
>
> ZFS storage and performance consulting at http://www.RichardElling.com
> ZFS training on deduplication, NexentaStor, and NAS performance
> Las Vegas, April 29-30, 2010 http://nexenta-vegas.eventbrite.com
>
_______________________________________________
perf-discuss mailing list
perf-discuss@opensolaris.org
