- run iostat -xtc to monitor disks while benchmark is running. svc time
will show any i/o bottlenecks.
Qs. - In iostat -xtc output, is there a value of svc_t above which it can
be taken as abnormally high?
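A commonly quoted Solaris rule of thumb (an assumption here, not a hard limit) is that svc_t sustained above roughly 30 ms on a busy device (high %b) points to a saturated disk. A minimal sketch of flagging such devices from `iostat -x` style output, using embedded sample data in place of a live `iostat -xtc 5` run:

```shell
#!/bin/sh
# Sketch: flag devices whose svc_t exceeds a threshold in iostat -x output.
# The 30 ms threshold is an assumed rule of thumb, not a documented limit.
THRESHOLD=30

# Sample data standing in for live iostat -xtc output.
cat > /tmp/iostat.sample <<'EOF'
device    r/s  w/s   kr/s   kw/s wait actv  svc_t  %w  %b
sd0       1.2  0.5   10.1    4.2  0.0  0.0    8.3   0   1
sd1      95.0 40.2  760.0  321.6  0.0  2.1   72.4   0  88
EOF

# svc_t is column 8; print any device exceeding the threshold.
awk -v t="$THRESHOLD" 'NR > 1 && $8 > t { print $1, $8 }' /tmp/iostat.sample
```

Here sd1 would be reported (svc_t 72.4 with %b at 88), while sd0 passes.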

- Try configuring multiple database writers. (Rule of thumb is 1 per
CPU.)
Qs. - Since DISK_ASYNCH_IO=TRUE, configuring multiple db_writers may NOT
be required. The best-practices doc also advises against setting
multiple db_writers when DISK_ASYNCH_IO=TRUE.
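For reference, a minimal init.ora sketch of the trade-off discussed above, for an 8.1.7 instance on this 2-CPU box. The values are illustrative, not a recommendation:

```
# init.ora sketch (Oracle 8.1.7); values illustrative for a 2-CPU box
disk_asynch_io = true        # already set; DBWR issues async writes itself
db_writer_processes = 1      # one DBWR is usually enough with async I/O
# dbwr_io_slaves = n         # alternative for platforms without async I/O;
#                            # not combined with multiple db_writer_processes
```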

- Increase your SGA to decrease memory available to the OS.
Qs. - To what percentage of the total RAM can the SGA be increased?
As of now our SGA is 41% of total RAM:
db_block_buffers=700M, shared_pool_size=90M, log_buffer=4M
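A quick sanity check of that figure, using the component values from this post on the 2 GB box. The listed components sum to about 794M (~38% of RAM); the difference from the quoted 41% would be fixed SGA overhead, which this sketch does not model:

```shell
#!/bin/sh
# Sketch: SGA size vs total RAM, using the values quoted in this thread.
# Excludes fixed SGA overhead, so the result is slightly below the
# post's 41% figure.
RAM_MB=2048
SGA_MB=$((700 + 90 + 4))             # buffers + shared pool + log buffer
PCT=$((SGA_MB * 100 / RAM_MB))       # integer percent
echo "SGA = ${SGA_MB}M (${PCT}% of ${RAM_MB}M RAM)"
```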

- Remember that Oracle db_block_buffers acts as an I/O buffer.
Qs. By what ratio can this be increased when comparing with a NON-QIO
database?
We have doubled it now.


We have reduced db_file_direct_io_count from the default of 64 to 8, as
our stripe size is 64K.
This is on the belief that the I/O buffer size
(= db_file_direct_io_count * db_block_size) should be LESS than or EQUAL
to the stripe size.
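The constraint above reduces to simple arithmetic with the values from this setup (8K blocks, count of 8, 64K stripe):

```shell
#!/bin/sh
# Sketch: check that the direct I/O transfer size does not exceed the
# stripe unit, per the constraint described above. Values from this post.
DB_BLOCK_SIZE_KB=8
DIRECT_IO_COUNT=8        # reduced from the default of 64
STRIPE_KB=64
IO_KB=$((DB_BLOCK_SIZE_KB * DIRECT_IO_COUNT))
if [ "$IO_KB" -le "$STRIPE_KB" ]; then
    echo "OK: ${IO_KB}K I/O buffer fits the ${STRIPE_KB}K stripe"
else
    echo "WARN: ${IO_KB}K I/O buffer spans multiple stripe units"
fi
```

With these values the 64K transfer exactly matches the 64K stripe unit, so each direct read or write touches a single stripe column.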

> -----Original Message-----
> From: Greg Connaughton [SMTP:[EMAIL PROTECTED]]
> Sent: Thursday, June 21, 2001 9:05 PM
> To:   VIVEK_SHARMA
> Subject:      Re: Tuning in  VXFS & Quick I/O
> 
> Vivek,
> 
> Could be a lot of reasons Quick I/O isn't buying you much. You may have
> one device maxed out which is causing a bottleneck. Or you may not be
> doing enough I/O for Quick I/O to buy you much. Where it really comes in
> handy is on an 8-10 CPU box with 8-10 db writers. Quick I/O will allow
> multiple db writers to access the same file simultaneously, where they
> can't on a regular FS. Also, depending on how much memory you are using
> for your SGA, there may be plenty of memory for Solaris to cache disk
> I/O, which may be a short-term boost. I don't know how long your
> benchmark is running. You might try the following.
> 
> - run iostat -xtc to monitor disks while benchmark is running. svc time
> will show any i/o bottlenecks.
> - Try configuring multiple database writers. (Rule of thumb is 1 per
> cpu)
> - Increase your SGA to decrease memory available to the OS.
> - Remember that Oracle db_block_buffers acts as an I/O buffer.
> 
> Good Luck!
> 
> -- 
> Greg Connaughton      Oracle DBA
> Ph: 303.865.1243      Pager: 303.528.6170      Fax: 303.865.1205
> USA.Net, 7900 E. Union Ave, Suite 800, Denver, CO 80237-2735
> email: [EMAIL PROTECTED]
> 
> 
> > -----Original Message-----
> > From:       VIVEK_SHARMA [SMTP:[EMAIL PROTECTED]]
> > Sent:       Wednesday, June 20, 2001 6:32 AM
> > To: LazyDBA.com Discussion
> > Subject:    Tuning in  VXFS & Quick I/O 
> > 
> > AIM - Performance Tuning on O.S. & Veritas 
> > 
> > SETUP - 
> > DB Server :-
> > E3500 m/c = 2 CPUs , 2 GB RAM 
> > Solaris 8 on ORA 8.1.7.1
> > Database on VXFS Filesystem with Quick I/O Installed .
> > RAID 0 with 64 K Stripe Size using VXVM & 8 K VXFS Block Size
> > Software RAID Configured as H/W RAID NOT Supported by the A5200
> > Storage Box.
> > db_block_size=8 K
 
Benchmarking run done on a banking application.
5000 transactions distributed among 200 concurrent user processes are
executed.
Transaction mix = OLTP & batch runs in the ratio 70:30 percent
respectively.


/etc/vfstab :-
/dev/vx/dsk/rootdg/vol01        /dev/vx/rdsk/rootdg/vol01       /in1/db1
vxfs    3       yes     mincache=direct,convosync=direct
/dev/vx/dsk/rootdg/vol02        /dev/vx/rdsk/rootdg/vol02       /in1/db2
vxfs    3       yes     mincache=direct,convosync=direct
/dev/vx/dsk/rootdg/vol03        /dev/vx/rdsk/rootdg/vol03       /in1/db3
vxfs    3       yes     mincache=direct,convosync=direct

/etc/vx/tunefstab :-
/dev/vx/dsk/rootdg/vol01 qio_cache_enable=1
/dev/vx/dsk/rootdg/vol02 qio_cache_enable=1
/dev/vx/dsk/rootdg/vol03 qio_cache_enable=1

Qs. What parameters can additionally be set on the O.S. & Veritas to
give a performance benefit?

Qs. Should the tempfiles also be converted to Quick I/O (QIO) and
thereafter de-sparsed?

NOTE - After converting the existing database's files to Quick I/O
(running mkqio.sh), NO performance benefit was observed between the
NON-QIO & QIO-converted database in the benchmark runs.

Upon setting the parameters mincache=direct,convosync=direct in
/etc/vfstab & qio_cache_enable=1 in the /etc/vx/tunefstab file for the
VXFS filesystems containing the database, some business transaction
types, primarily batch in nature, improved distinctly, though the OLTP
transaction types continued to perform at the same rate.


  

--
Please see the official ORACLE-L FAQ: http://www.orafaq.com
--
Author: VIVEK_SHARMA
  INET: [EMAIL PROTECTED]

Fat City Network Services    -- (858) 538-5051  FAX: (858) 538-5051
San Diego, California        -- Public Internet access / Mailing Lists
