Hope everybody is ok!
To get straight to the point, the system is as follows:
SunFire V880, 2 x 1.2GHz UltraSPARC III Cu processors, 4GB RAM, 6 x 68GB 10krpm FC-AL disks, 96GB backplane
Disks are grouped together to create volumes - as follows:
Disk 1 - root, var, dev, ud60, xfer - RAID 1 (Root Volume Primary Mirror)
Disk 2 - root, var, dev, ud60, xfer - RAID 1 (Root Volume Submirror)
Disk 3 - /u - RAID 10 (Unidata Volume Primary Mirror - striped)
Disk 4 - /u - RAID 10 (Unidata Volume Primary Mirror - striped)
Disk 5 - /u - RAID 10 (Unidata Volume Submirror - striped)
Disk 6 - /u - RAID 10 (Unidata Volume Submirror - striped)
UD60 - Unidata binary area
XFER - Data output area for Unidata accounts (csv files etc.)
/U - Primary Unidata account/database area
If I perform tests on the system using both dd and mkfile, I see speeds of around 50MB/s for WRITES and 60MB/s for READS. However, if a colleague loads a 100MB csv file into a Unidata file using READSEQ - nothing fancy - I see massive average service times (asvc_t, using iostat), the device is almost always 100% busy, and there is no real CPU overhead, but writes top out at 15MB/s. There is only ONE person using this system (to test throughput).
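For comparison, the raw sequential test I ran was essentially the following (the path and sizes here are illustrative placeholders - on the real box the target file sits on /u and is considerably larger):

```shell
#!/bin/sh
# Illustrative version of the dd throughput test (path and sizes are
# placeholders, not the exact ones used on the V880).
TESTFILE=/tmp/ddtest.$$    # on the real system this would live on /u
BS=8192                    # matches the UFS bsize above
COUNT=16384                # 128MB total

# Sequential write, flush, then sequential read-back;
# timing the two passes gives the MB/s figures quoted above.
dd if=/dev/zero of="$TESTFILE" bs=$BS count=$COUNT 2>/dev/null
sync
dd if="$TESTFILE" of=/dev/null bs=$BS 2>/dev/null

# Report the file size actually written
wc -c < "$TESTFILE"
rm -f "$TESTFILE"
```

Running each dd under time(1) and dividing the byte count by the elapsed seconds is how the 50/60MB/s numbers were arrived at.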
This is confusing. Drilling down: I have set a 16384 block interlace size on each stripe, and the following are the mkfs parameters for the mounted volume:
mkfs -F ufs -o nsect=424,ntrack=24,bsize=8192,fragsize=1024,cgsize=10,free=1,rps=167,nbpi=8275,opt=t,apc=0,gap=0,nrpos=8,maxcontig=16 /dev/md/dsk/d10 286220352
In /etc/system I have set the following parameters:
set shmsys:shminfo_shmmni=1024
set shmsys:shminfo_shmmax=8388608
set shmsys:shminfo_shmseg=50
set msgsys:msginfo_msgmni=1615
set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=985
set semsys:seminfo_semmnu=1218
set maxpgio=240
set maxphys=8388608
I have yet to raise the transfer limit on the ssd driver in order to break the 1MB barrier, but even so I would have expected better performance. UDTCONFIG is as yet unchanged from the defaults.
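For reference, the ssd change I am referring to would be an /etc/system entry along these lines (the tunable name is from memory - please verify it against the driver documentation for your Solaris release before applying):

```
* Allow the fibre-channel ssd driver to issue I/Os larger than 1MB.
* Value shown matches the maxphys setting above; the exact tunable
* name should be checked before use - this is a sketch only.
set ssd:ssd_max_xfer_size=8388608
```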
Does anybody have any comments?
I think I have the RAID correct; I have redirected the Unidata TEMP directory onto the /U RAID 10 partition rather than the RAID 1 ud60 area.
Things to try in my opinion:
1. Block sizes should match the average Unidata file size.
One question I have: does Unidata perform its own file caching? Can I mount filesystems using forcedirectio, or does Unidata rely heavily on the Solaris buffer cache?
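If Unidata does cache internally, bypassing the UFS page cache might be worth a test. The /etc/vfstab entry would look something like this (device and mount point taken from the layout above - treat this as a sketch, not a recommendation):

```
# hypothetical /etc/vfstab entry mounting /u with directio
# device to mount   device to fsck     mount   FS   fsck  boot  options
/dev/md/dsk/d10     /dev/md/rdsk/d10   /u      ufs  2     yes   forcedirectio
```

Directio helps when the application buffers for itself and hurts when it relies on the OS cache, which is exactly what the question above is trying to establish.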
Thanks for any information you can provide
--
Martin Thorpe
DATAFORCE GROUP LTD
DDI: 01604 673886
MOBILE: 07740598932
WEB: http://www.dataforce.co.uk
mailto: [EMAIL PROTECTED]
--
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users
