Your point is taken. I will check with a higher stripe size whenever
possible.
We are NOT using the Parallel Query Option (PQO) in the application.
Thanks again
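For reference, changing the stripe size means re-creating the striped
volume. A hedged vxassist sketch (disk group and volume names follow the
setup quoted below; the volume size and column count are assumptions):

```
# Re-create a data volume with a 512K stripe unit instead of 64K.
# Volume size (20g) and column count (ncol=4) are assumptions here;
# back up and restore the data, as re-creating the volume destroys it.
vxassist -g rootdg make vol01 20g layout=stripe ncol=4 stripeunit=512k

# Verify the stripe unit of the new volume
vxprint -g rootdg -ht vol01
```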
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [SMTP:[EMAIL PROTECTED]]
> Sent: Thursday, June 21, 2001 4:41 AM
> To: VIVEK_SHARMA
> Subject: RE: Tuning in VXFS & Quick I/O
>
> I'm running Veritas but not Quick I/O (420R 4 CPUs, 4GB RAM, A-1000
> array
> with HW RAID 10). As I understand it, Quick I/O tunes like 'RAW', so
> the main advantage is for write-intensive files such as REDO logs and
> the TEMP tablespace.
>
> I also think you may be able to benefit from a block size larger than
> 8K, though this will likely disproportionately benefit batch/analytic
> reports and have little effect on true OLTP transactions.
>
> Personally, I think 64K is a small stripe size. Fine grain striping
> improves single user/light load performance at some expense in
> scalability.
> To avoid taking a hit under heavier load, I like to start with
> something a little higher, like 256K or 512K. That doesn't perform
> much better at light load than at moderate load, so users see nearly
> the same response times regardless of system load variations.
>
> Are you using parallel query option?
>
> Kevin
>
> > -----Original Message-----
> > From: VIVEK_SHARMA [SMTP:[EMAIL PROTECTED]]
> > Sent: Wednesday, June 20, 2001 6:32 AM
> > To: LazyDBA.com Discussion
> > Subject: Tuning in VXFS & Quick I/O
> >
> > AIM - Performance Tuning on the O.S. & Veritas
> >
> > SETUP -
> > DB Server :-
> > E3500 machine = 2 CPUs, 2 GB RAM
> > Solaris 8 running Oracle 8.1.7.1
> > Database on a VXFS filesystem with Quick I/O installed.
> > RAID 0 with 64K stripe size using VXVM & 8K VXFS block size.
> > Software RAID configured, as H/W RAID is NOT supported by the A5200
> > storage box.
> > db_block_size = 8K
A benchmarking run was done on a banking application.
5000 transactions distributed among 200 concurrent user processes were
executed.
Transaction mix = OLTP & batch runs in the ratio 70:30 respectively.
/etc/vfstab :-
/dev/vx/dsk/rootdg/vol01 /dev/vx/rdsk/rootdg/vol01 /in1/db1 vxfs 3 yes mincache=direct,convosync=direct
/dev/vx/dsk/rootdg/vol02 /dev/vx/rdsk/rootdg/vol02 /in1/db2 vxfs 3 yes mincache=direct,convosync=direct
/dev/vx/dsk/rootdg/vol03 /dev/vx/rdsk/rootdg/vol03 /in1/db3 vxfs 3 yes mincache=direct,convosync=direct
/etc/vx/tunefstab :-
/dev/vx/dsk/rootdg/vol01 qio_cache_enable=1
/dev/vx/dsk/rootdg/vol02 qio_cache_enable=1
/dev/vx/dsk/rootdg/vol03 qio_cache_enable=1
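As a sanity check, the tunefstab settings can be confirmed, or applied
to a live filesystem, with vxtunefs. A sketch using the mount points
above:

```
# Print the current VxFS tunables for a mounted filesystem
vxtunefs -p /in1/db1

# Apply qio_cache_enable to a mounted filesystem without remounting
vxtunefs -o qio_cache_enable=1 /in1/db1
```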
Qs. What parameters can additionally be set on the O.S. & Veritas to
give a performance benefit?
Qs. Should the tempfiles also be converted to Quick I/O (QIO) &
thereafter de-sparsed?
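On the tempfile question: if they are converted, qiomkfile is the usual
way to get a fully preallocated (non-sparse) Quick I/O file. A hedged
sketch (the file name, size, and tablespace name are assumptions):

```
# qiomkfile preallocates a hidden regular file (.temp01.dbf) and
# creates a Quick I/O access symlink (temp01.dbf) pointing at it
qiomkfile -s 1g /in1/db3/temp01.dbf

# Then attach it in Oracle, reusing the preallocated file:
#   ALTER TABLESPACE temp ADD TEMPFILE '/in1/db3/temp01.dbf'
#     SIZE 1000M REUSE;
```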
NOTE - After converting the existing database's files to Quick I/O
(running mkqio.sh), NO performance benefit was observed between the
NON-QIO & QIO-converted databases in the benchmark runs.
Upon setting the parameters mincache=direct,convosync=direct in
/etc/vfstab & qio_cache_enable=1 in the /etc/vx/tunefstab file for the
VXFS filesystems containing the database, some business transaction
types, primarily batch in nature, improved distinctly, though the OLTP
transaction types continued to perform at the same rate.
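For completeness, the direct-I/O mount options can also be applied to an
already-mounted VXFS filesystem instead of waiting for a reboot. A
sketch following the vfstab entries above:

```
# Remount with direct I/O options; shut the database down first
mount -F vxfs -o remount,mincache=direct,convosync=direct \
    /dev/vx/dsk/rootdg/vol01 /in1/db1
```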
--
Please see the official ORACLE-L FAQ: http://www.orafaq.com
--
Author: VIVEK_SHARMA
INET: [EMAIL PROTECTED]
Fat City Network Services -- (858) 538-5051 FAX: (858) 538-5051
San Diego, California -- Public Internet access / Mailing Lists
--------------------------------------------------------------------