How many of the variables in /etc/system have been set by JET? Setting md_maxphys above 1MB is of no use, because the maximum transfer size gets trimmed down anyway: the sd/ssd drivers have 1MB hard coded, so buffers larger than 1MB are clamped to that size.

I would be interested to know what stripe size you are using, and I am concerned that it is not printed in the output of metastat -p.
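A quick back-of-the-envelope check on the numbers in this thread (the 500GB submirror size is a hypothetical stand-in, since the LUN sizes aren't shown; that md_resync_bufsz is counted in 512-byte disk blocks is my reading of the tunable, noted as an assumption):

```shell
# md_resync_bufsz is counted in 512-byte disk blocks (assumption noted above),
# so the 2048 already set in /etc/system is the same 1MB the sd/ssd drivers
# clamp transfers to:
bufsz_blocks=2048
echo $(( bufsz_blocks * 512 ))          # 1048576 bytes = 1MB

# At the ~2105KB/s per-mirror rate seen in the iostat output, a hypothetical
# 500GB submirror would take roughly:
rate_kb_s=2105
size_kb=$(( 500 * 1024 * 1024 ))
echo $(( size_kb / rate_kb_s / 3600 ))  # ~69 hours, i.e. several days
```

So the resync buffer is already at the driver clamp, and the observed per-mirror rate alone accounts for a multi-day resync on LUNs of any substantial size.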
What version of Solaris are you running?

-Sanjay

Russel wrote:

>I have two 3511 arrays which present LUNs that I then mirror in SVM
>on a pair of T2000s in a cluster.
>The resync is taking many days, and each mirror seems to
>do a maximum I/O of about 2000KB/sec.
>
>Any thoughts?
>
>In /etc/system I have:
> set md_mirror:md_resync_bufsz = 2048
> set md:md_maxphys=8388608
> set maxpgio=1024
>
>The whole of /etc/system is at the end of this post.
>The metaset is:
>root@jade # metastat -s data2ds -p
>Proxy command to: jasper
>data2ds/d540 -m data2ds/d541 data2ds/d542 1
>data2ds/d541 1 1 /dev/did/rdsk/d9s3
>data2ds/d542 1 1 /dev/did/rdsk/d14s3
>data2ds/d500 -m data2ds/d501 data2ds/d502 1
>data2ds/d501 1 1 /dev/did/rdsk/d9s0
>data2ds/d502 1 1 /dev/did/rdsk/d14s0
>
>iostat -zx 10 10 shows very little I/O:
>
>                   extended device statistics
>device    r/s   w/s    kr/s    kw/s wait actv svc_t  %w  %b
>1/md400  32.9  32.9  2105.6  2105.6  0.0  1.0  15.2   0 100
>1/md401  32.9   0.0  2105.6     0.0  0.0  0.0   1.4   0   5
>1/md402   0.0  32.9     0.0  2105.6  2.0  1.0  89.7 100  95
>2/md540  32.9  32.9  2105.6  2105.6  0.0  1.0  15.2   0 100
>2/md541  32.9   0.0  2105.6     0.0  0.0  0.0   1.4   0   5
>2/md542   0.0  32.9     0.0  2105.6  2.0  1.0  89.7 100  95
>ssd6      0.0  32.9     0.0  2105.6  0.0  1.0  28.9   0  95
>ssd7     32.9   0.0  2105.6     0.0  0.0  0.0   1.4   0   5
>ssd8      0.0  32.9     0.0  2105.6  0.0  1.0  28.9   0  95
>ssd9     32.9   0.0  2105.6     0.0  0.0  0.0   1.4   0   5
>
>============================================
>============================================
>set segkmem_lpsize=0x400000
>set ip:dohwcksum=0
>set pcie:pcie_aer_ce_mask=0x1
>#
># For optimum performance of the ipge driver (version 1.25.25):
># added by JetEISCD install
>
>set autoup=900
>set tune_t_fsflushr=1
>set rlim_fd_max=260000
>set rlim_fd_cur=260000
>set sq_max_size=0
>set ipge:ipge_tx_ring_size=2048
>set ipge:ipge_srv_fifo_depth=16000
>set ipge:ipge_reclaim_pending=32
>set ipge:ipge_bcopy_thresh=512
>set ipge:ipge_dvma_thresh=1
>set ip:ip_squeue_fanout=1
>
>* START OF CLUSTER CONFIGURATION
>set maxpgio=1024
>set maxphys=8388608
>set fp:fp_offline_ticker=5
>set md:md_maxphys=8388608
>set ipge:ipge_tx_syncq=1
>* END OF CLUSTER CONFIGURATION
>
>* Begin MDD root info (do not edit)
>rootdev:/pseudo/md@0:0,100,blk
>* End MDD root info (do not edit)
>* Start of lines added by SUNWscr
>exclude: lofs
>set rpcmod:svc_default_stksize=0x6000
>set ge:ge_intr_mode=0x833
>* Disable task queues and send all packets up to Layer 3
>* in interrupt context.
>* Uncomment line below if using ce interface as a SUN Cluster
>* private interconnect. Be advised this will affect all ce
>* instances. For more info on performance tuning see:
>* http://www.sun.com/blueprints/0404/817-6925.pdf
>* set ce:ce_taskq_disable=1
>* End of lines added by SUNWscr
>
>* N1 SPS settings
>set shmsys:shminfo_shmmax=536870912
>set semsys:seminfo_semmni=32
>set semsys:seminfo_semmns=512
>set md_mirror:md_resync_bufsz = 2048
>
>============================================
>============================================
>
>This message posted from opensolaris.org
>_______________________________________________
>lvm-discuss mailing list
>lvm-discuss@opensolaris.org