For NetApp, the key thing is the number and speed (100Mbit or 1Gbit?) of the network I/O cards connecting to the NFS server. Mount points are irrelevant, as they could all be going over the same I/O channel.
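If you are not sure what the DB server actually has in it, a quick check on Solaris looks something like this (just a sketch - I am assuming an hme 10/100 interface here; device names and parameters vary by card and driver, and gigabit drivers such as ce/ge report things differently):

    # which interfaces are configured, and on which cards
    ifconfig -a

    # for an hme interface: link_speed 1 = 100Mbit, 0 = 10Mbit
    ndd -get /dev/hme link_speed
    ndd -get /dev/hme link_mode      # 1 = full duplex, 0 = half duplex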
There is a NetApp performance paper that recommends a number of things to tweak for performance, including using multiple IO slaves/DBWRs rather than asynch_io. Check with your vendor. However, it comes with a disclaimer - do your own tests, as it may not apply. I ended up using asynch_io, as it gave a 10-15% improvement over multiple slaves (a rough init.ora sketch of the two options is appended at the end of this message).

In the scheme of things, this is a minor issue. Things that you normally spend a lot of time on, like striping and distributing IO, are no longer meaningful - after all, you bought a storage server to take care of such things for you. What will kill you are the things you never have to worry about in a DAS configuration. Like:

- CPU utilisation will increase significantly, as cycles are spent shuffling blocks over the network.
- test your NFS mount options, especially rsize and wsize (see the mount example appended at the end of this message).
- adjust your OS kernel parameters, including the NFS parameters.
- make sure your OS patches, and especially your NFS patches, are up to date.
- tweak the network interface (ndd command - see the sketch appended at the end of this message).

Some other things that cannot be changed in a hurry:

- mount as UDP rather than TCP if you have a dedicated segment for NFS traffic. Trying to share the company-wide network for this is a particularly bad idea - chances are you will back it out in a hurry.
- a later version of the OS is much better than an earlier one, e.g. Solaris 2.8/2.9 over 2.6. Apparently significant improvements have been made in the TCP/NFS components of the OS.
- if you have a later version of the OS, look at configuring Ethernet jumbo frames.
- if the hardware/software is capable and you have more than one network card, look at IP trunking.
- if you have older hardware in the DB server, you will run into limits like Sun's SBus maximum IO of ~40MB/s. If your requirements are below this, great.

Sounds like sysadmin-type issues? Yes, it does. If your sysadmin (or boss) is good enough to take care of all of these for you, great. In practice, I find that seldom happens.

----- Original Message -----
To: Multiple recipients of list ORACLE-L <[EMAIL PROTECTED]>
Sent: Wednesday, June 04, 2003 1:01 AM

> You only want DBWR_IO_SLAVES or multiple DBWRn if you have datafiles spread over multiple I/O points, correct? We are using a 'Network Appliance' hard disk array that I'm not all that familiar with. It looks like we have 3 I/O points and 5 mount points.
>
> My boss told me that striping data files and redo log files across the I/O points won't help because there are only 1-2 I/O cards (I forget the exact term; I hope it isn't hard for anyone to figure out what I'm referring to) on the server itself.
>
> This does not sound accurate, since I've read several books and all say to stripe the files.
>
> BTW, thanks for the info on the large pool. I can free up about 300MB of memory we are wasting on that and the java pool for other areas.
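As promised above, a rough init.ora sketch of the asynch_io versus IO-slaves choice. This is only an illustration - the parameter names are the standard Oracle 8i/9i ones, but the counts are placeholders; benchmark your own workload before settling on either:

    # Option A - native async I/O (what I ended up with)
    disk_asynch_io      = true
    dbwr_io_slaves      = 0
    db_writer_processes = 1

    # Option B - the IO slaves / multiple DBWRs approach from the NetApp paper
    # (dbwr_io_slaves and multiple db_writer_processes are mutually exclusive -
    #  pick one or the other, and turn async I/O off for the comparison)
    # disk_asynch_io      = false
    # dbwr_io_slaves      = 4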
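For the rsize/wsize and UDP-versus-TCP items, the Solaris mount might look something like this. The filer name, volume and mount point are made up, and the 32K transfer sizes are just a starting point - the whole point of the exercise is to test these values yourself:

    # dedicated network segment for NFS traffic: try UDP with large transfer sizes
    mount -F nfs -o rw,bg,hard,nointr,vers=3,proto=udp,rsize=32768,wsize=32768 \
          filer1:/vol/oradata /u02/oradata

    # no dedicated segment (or UDP causes grief): stay with TCP
    mount -F nfs -o rw,bg,hard,nointr,vers=3,proto=tcp,rsize=32768,wsize=32768 \
          filer1:/vol/oradata /u02/oradata

The same options go into /etc/vfstab so they survive a reboot.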
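And for the "tweak the network interface (ndd command)" item, something along these lines. Again only a sketch - these are standard Solaris tcp/hme parameters, but the values are examples, gigabit drivers (ce/ge) use different parameter names, and ndd settings do not survive a reboot unless you put them in a startup script:

    # pin an hme NIC to 100Mbit full duplex rather than trusting auto-negotiation
    ndd -set /dev/hme adv_autoneg_cap 0
    ndd -set /dev/hme adv_100fdx_cap 1
    ndd -set /dev/hme adv_100hdx_cap 0
    ndd -set /dev/hme adv_10fdx_cap  0
    ndd -set /dev/hme adv_10hdx_cap  0

    # larger TCP send/receive windows for NFS-over-TCP traffic
    ndd -set /dev/tcp tcp_xmit_hiwat 65536
    ndd -set /dev/tcp tcp_recv_hiwat 65536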
