Hi Martin,
excuse the late reply. We have a StorageTek D280 with 1GB of cache, and we see
good performance after tuning the cache.

Björn Eklund

-----Original Message-----
From: Martin Thorpe [mailto:[EMAIL PROTECTED]
Sent: 26 February 2004 13:21
To: U2 Users Discussion List
Subject: Re: SV: Performance Discussion - Unidata


Bjorn

If you don't mind me asking, what hardware are you using in terms of
SAN? Is it EMC CLARiiON, StorageTek/Sun StorEdge, etc., and how much
cache have you got on those arrays? 1GB?

Do you see good performance over that?

Thanks

Björn Eklund wrote:

>Hi Martin,
>we have equipment that looks a lot like yours, same server but with double
>the amount of CPU and RAM.
>We also have an external SAN storage array (FC disks, 15000 rpm) where all the
>unidata files reside.
>When we started the system for the first time, everything we tried to do was
>very slow. After tuning the storage cabinet's cache we got acceptable
>performance.
>
>After some time we started looking for other ways of improving performance
>and resized all our files. The biggest change was from block size 2
>to 4 on almost every file. This gave an improvement of about 50-100% in
>performance for our disk-intensive batch programs.
>I don't remember any figures on read and write speeds, but I can
>ask our Unix admin to dig them up if you want.
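As a rough, generic illustration of the effect Björn describes (this is plain dd against made-up files under /tmp, not the actual UniData resize): writing the same amount of data in larger blocks issues half as many I/Os, which is one reason doubling the file block size can speed up disk-intensive batch work.

```shell
# Write the same 8 MB twice: once in 2 KB blocks, once in 4 KB blocks.
# Fewer, larger I/Os mean less per-call and per-request overhead.
# Paths are illustrative only.
dd if=/dev/zero of=/tmp/bs2k.dat bs=2k count=4096 2>/dev/null   # 4096 writes
dd if=/dev/zero of=/tmp/bs4k.dat bs=4k count=2048 2>/dev/null   # 2048 writes
wc -c /tmp/bs2k.dat /tmp/bs4k.dat
```

Timing the two dd runs (e.g. with `time`) shows the larger block size finishing sooner on most systems, even though the bytes written are identical.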
>
>It's just a guess, but I do believe that Unidata relies heavily on the Solaris
>buffers.
>
>Regards
>Björn
>
>-----Original Message-----
>From: Martin Thorpe [mailto:[EMAIL PROTECTED]
>Sent: 25 February 2004 19:13
>To: [EMAIL PROTECTED]
>Subject: Performance Discussion - Unidata
>
>
>Hi guys
>
>Hope everybody is ok!
>
>To get straight to the point, system as follows:
>
>SunFire V880
>2x1.2GHz UltraSPARC III Cu processors
>4GB RAM
>6x68GB 10krpm FC-AL disks
>96GB backplane
>
>Disks are grouped together to create volumes - as follows:
>
>Disk 1   -   root, var, dev, ud60, xfer   -   RAID 1   (Root Volume Primary Mirror)
>Disk 2   -   root, var, dev, ud60, xfer   -   RAID 1   (Root Volume Submirror)
>Disk 3   -   /u                           -   RAID 10  (Unidata Volume Primary Mirror - striped)
>Disk 4   -   /u                           -   RAID 10  (Unidata Volume Primary Mirror - striped)
>Disk 5   -   /u                           -   RAID 10  (Unidata Volume Submirror - striped)
>Disk 6   -   /u                           -   RAID 10  (Unidata Volume Submirror - striped)
>
>UD60   -   Unidata Binary area
>XFER   -   Data output area for Unidata accounts (csv files etc)
>/U         -   Primary Unidata account/database area.
>
>If I perform tests on the system using both dd and mkfile, I see speeds
>of around 50MB/s for WRITES and 60MB/s for READS. However, if a colleague
>loads a 100MB csv file using READSEQ into a Unidata file, not doing
>anything fancy, I see massive average service times (asvc_t, using
>IOSTAT) and the device is almost always 100% busy, with no real CPU
>overhead but writes topping out at 15MB/s. There is only ONE person
>using this system (to test throughput).
>
>This is confusing. Drilling down, I have set a 16384-block interlace size
>on each stripe, and the mounted volume was created as follows:
>
>mkfs -F ufs -o nsect=424,ntrack=24,bsize=8192,fragsize=1024,cgsize=10,free=1,rps=167,nbpi=8275,opt=t,apc=0,gap=0,nrpos=8,maxcontig=16 /dev/md/dsk/d10 286220352
>
>in /etc/system I have set the following parameters:
>
>set shmsys:shminfo_shmmni=1024
>set shmsys:shminfo_shmmax=8388608
>set shmsys:shminfo_shmseg=50
>set msgsys:msginfo_msgmni=1615
>set semsys:seminfo_semmni=100
>set semsys:seminfo_semmns=985
>set semsys:seminfo_semmnu=1218
>
>set maxpgio=240
>set maxphys=8388608
>
>I have yet to raise the throughput limit on the ssd drivers in order to break
>the 1MB barrier; however, I still would have expected better performance.
>UDTCONFIG is as yet unchanged from the defaults.
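For reference, the per-transfer limit on the ssd (fibre channel) driver is usually raised with another /etc/system tunable alongside maxphys; the value below is only an example to illustrate the setting, not a recommendation for this system:

```
* /etc/system - raise the ssd driver's maximum transfer size
* 0x800000 = 8 MB; example value only, must be <= maxphys
set ssd:ssd_max_xfer_size=0x800000
```

A reboot is required for /etc/system changes to take effect.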
>
>Does anybody have any comments?
>
>Things to try in my opinion:
>
>I think I have the RAID correct; the Unidata TEMP directory I have
>redirected to the /U RAID 10 partition rather than the RAID 1 ud60
>area.
>
>1. Block sizes should match the average Unidata file size.
>
>One question I have is: does Unidata perform its own file caching? Can I
>mount filesystems using FORCEDIRECTIO, or does Unidata rely heavily on
>the Solaris buffers?
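For a quick experiment with that forcedirectio question, a UFS filesystem can be remounted with direct I/O on the fly and reverted just as easily (device and mount point taken from the thread; whether this helps depends on whether Unidata really is leaning on the Solaris page cache, so measure before and after):

```
# remount /u with direct I/O for a test (run as root)
mount -F ufs -o remount,forcedirectio /dev/md/dsk/d10 /u

# revert to cached I/O
mount -F ufs -o remount,noforcedirectio /dev/md/dsk/d10 /u
```

Making it permanent would mean adding forcedirectio to the mount options column of the /u entry in /etc/vfstab.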
>
>Thanks for any information you can provide
>

-- 
Martin Thorpe
DATAFORCE GROUP LTD
DDI: 01604 673886
MOBILE: 07740598932
WEB: http://www.dataforce.co.uk
mailto: [EMAIL PROTECTED]

-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users
