Hi Björn

I agree with you regarding caching - I was unsure as to whether Unidata does its own file caching (as Oracle does), in which case you could mount the Unidata volume with DIRECTIO; if it does not, that's not an option. The problem is that any Unidata operation involving heavy disk access seems to slaughter I/O: it could be the most efficient program in the world, but if it is allowed to run freely (with no delays) you get massive average service times (usually up around a second, which is totally unacceptable for one user) and very poor write rates (under 20MB/s).

A couple of things I have thought about are playing around with the system file caching, in terms of the SCSI queue throttle (sd_max_throttle, to limit the queue depth) and the UFS write throttle high/low water marks, to see if I can pull the service times down. The read/write speed is not really an issue for me as long as it is consistent and at an acceptable level; the biggest thing for me is the average service time, as that is what causes headaches for everyone else.
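For the record, these are the sort of entries I mean (starting guesses only, not recommendations; on this box the internal FC-AL disks sit behind the ssd driver, so it may need to be ssd_max_throttle rather than sd_max_throttle, and ufs_HW/ufs_LW are in bytes):

* /etc/system - candidate throttle settings (reboot required), values are guesses
set ssd:ssd_max_throttle=32
* UFS write-behind high/low water marks, in bytes
set ufs:ufs_HW=2097152
set ufs:ufs_LW=1048576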

With a 1 second delay every 100 records in the mentioned program (code attached; the pattern is roughly the sketch below), the average service times are normal and you don't notice any problem with server response times.
With a 1 second delay every 1000 records you start to notice a slight deterioration in the system.
Running freely, you notice a major problem: the service time is around a second.
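For anyone who doesn't want to open the attachment, the throttling pattern is roughly this - a stripped-down UniBasic sketch with made-up file and field names, not the actual program:

* Stripped-down sketch of the throttled CSV load (hypothetical names)
OPENSEQ 'XFER', 'CUSTOMERS.CSV' TO SEQ.IN ELSE STOP 'Cannot open CSV'
OPEN 'CUSTOMER' TO F.CUST ELSE STOP 'Cannot open CUSTOMER'
CNT = 0
LOOP
   READSEQ LINE FROM SEQ.IN ELSE EXIT
   ID = FIELD(LINE, ',', 1)
   REC = LINE
   CONVERT ',' TO @FM IN REC
   WRITE REC ON F.CUST, ID
   CNT = CNT + 1
   IF MOD(CNT, 100) = 0 THEN SLEEP 1  ;* pause 1 second every 100 records
REPEAT
CLOSESEQ SEQ.IN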


It is as though the buffers fill to the point that they are clogged, and any disk operations by other processes while this type of program is running suffer badly because of the high service times.

I'm going to try researching this, but wondered if anybody had any further information for me? I will post my results if anyone is interested.

Björn Eklund wrote:

Hi Martin,
we have equipment that looks a lot like yours: the same server but with double
the amount of CPU and RAM.
We also have external SAN storage (FC disks, 15000 rpm) where all the
Unidata files reside. When we started the system for the first time, everything we tried to do was
very slow. After tuning the storage cabinet's cache we got acceptable
performance.


After some time we started looking for other ways of improving performance
and did a resize on all our files. The biggest change was from block size 2
to 4 on almost every file. This gave an improvement of about 50-100% in
performance on our disk-intensive batch programs.
I don't remember any figures on speed regarding reads and writes, but I can
ask our Unix admin to dig them up if you want.

It's just a guess, but I do believe that Unidata relies heavily on the Solaris
buffers.

Regards
Björn

-----Original Message-----
From: Martin Thorpe [mailto:[EMAIL PROTECTED]
Sent: 25 February 2004 19:13
To: [EMAIL PROTECTED]
Subject: Performance Discussion - Unidata


Hi guys


Hope everybody is ok!

To get straight to the point, the system is as follows:

SunFire V880
2 x 1.2GHz UltraSPARC III Cu processors
4GB RAM
6 x 68GB 10k rpm FC-AL disks
96GB backplane

Disks are grouped together to create volumes - as follows:

Disk 1 - root, var, dev, ud60, xfer - RAID 1 (Root Volume Primary Mirror)
Disk 2 - root, var, dev, ud60, xfer - RAID 1 (Root Volume Submirror)
Disk 3 - /u - RAID 10 (Unidata Volume Primary Mirror - striped)
Disk 4 - /u - RAID 10 (Unidata Volume Primary Mirror - striped)
Disk 5 - /u - RAID 10 (Unidata Volume Submirror - striped)
Disk 6 - /u - RAID 10 (Unidata Volume Submirror - striped)


ud60 - Unidata binary area
xfer - Data output area for Unidata accounts (csv files etc)
/u   - Primary Unidata account/database area

If I run tests on the system using both dd and mkfile, I see speeds of around 50MB/s for writes and 60MB/s for reads. However, if a colleague loads a 100MB csv file into a Unidata file using READSEQ, nothing fancy, I see massive average service times (asvc_t in iostat), the device is almost always 100% busy, there is no real CPU overhead, and writes top out at 15MB/s. There is only ONE person using this system (to test throughput).

This is confusing. Drilling down: I have set a 16384-block interlace size on each stripe, and the mounted volume was created with the following:

mkfs -F ufs -o nsect=424,ntrack=24,bsize=8192,fragsize=1024,cgsize=10,free=1,rps=167,nbpi=8275,opt=t,apc=0,gap=0,nrpos=8,maxcontig=16 /dev/md/dsk/d10 286220352


in /etc/system I have set the following parameters:

set shmsys:shminfo_shmmni=1024
set shmsys:shminfo_shmmax=8388608
set shmsys:shminfo_shmseg=50
set msgsys:msginfo_msgmni=1615
set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=985
set semsys:seminfo_semmnu=1218

set maxpgio=240
set maxphys=8388608

I have yet to change the maximum transfer size on the ssd drivers in order to break the 1MB barrier, but I would still have expected better performance than this. UDTCONFIG is as yet unchanged from the defaults.
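As I understand it, breaking that barrier means something along these lines (parameter names from memory, so treat this as a sketch to verify rather than a recipe): maxphys is already 8MB in /etc/system above, SVM metadevices also need md_maxphys raised, and the ssd driver needs its own per-I/O limit raised in ssd.conf.

* /etc/system - allow >1MB transfers through SVM metadevices
set md:md_maxphys=8388608

* /kernel/drv/ssd.conf - raise the ssd driver's per-I/O limit to 8MB
ssd_max_xfer_size=0x800000;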

Does anybody have any comments?

I think I have the RAID correct; the Unidata TEMP directory I have redirected to be on the /u RAID 10 partition rather than the RAID 1 ud60 area.

Things to try, in my opinion:

1. Block sizes should match the average Unidata file size.

One question I have: does Unidata perform its own file caching? Can I mount the filesystems with forcedirectio, or does Unidata rely heavily on the Solaris buffer cache?
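If forcedirectio does turn out to be safe, I assume it is just a mount option on /u, something like the following (shown for illustration only, device names as above):

mount -F ufs -o forcedirectio /dev/md/dsk/d10 /u

or permanently in /etc/vfstab:

/dev/md/dsk/d10  /dev/md/rdsk/d10  /u  ufs  2  yes  forcedirectio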

Thanks for any information you can provide




--
Martin Thorpe
DATAFORCE GROUP LTD
DDI: 01604 673886
MOBILE: 07740598932
WEB: http://www.dataforce.co.uk
mailto: [EMAIL PROTECTED]
