Typically, your stripe size impacts both read and write performance.

In Solaris, the trick is to match it with your maxcontig parameter. If you set maxcontig to 128 pages, which is 128 * 8 = 1024K (1M), then your optimal stripe size is 128 * 8 / (number of spindles in the LUN). Assuming the number of spindles is 6, you get an uneven number (1024 / 6 is about 170.7K). In such cases either the current I/O or the next sequential I/O is going to be a little inefficient depending on what you select (as a rule of thumb, however, just take the closest stripe size). However, if your number of spindles is 8, then you get a perfect 128 and hence it makes sense to select 128K. (maxcontig is a parameter in Solaris which defines the maximum contiguous space allocated to a file's blocks, which really helps in the case of sequential I/O operations.)
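
A quick sketch of that arithmetic in Python (the 8K page size, the spindle counts, and the helper name are just the assumptions from the example above, not a Solaris tool):

PAGE_SIZE_KB = 8  # assumed page size from the example above

def stripe_size_kb(maxcontig_pages, spindles):
    """Ideal (possibly fractional) stripe size in KB for a given maxcontig."""
    return maxcontig_pages * PAGE_SIZE_KB / spindles

# maxcontig = 128 pages means 128 * 8 = 1024K (1M) of contiguous allocation
print(stripe_size_kb(128, 6))  # ~170.7K: uneven, so round to the closest available stripe size
print(stripe_size_kb(128, 8))  # 128.0K: a perfect fit, so select 128K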

But as you can see, this was maxcontig-dependent in my case. What if your maxcontig is way off track? This can happen if your I/O pattern is increasingly random. In such cases maxcontig is better at lower numbers to reduce space wastage, and in effect reducing your stripe size reduces your response time.

This means it is now workload-dependent... random I/Os or sequential I/Os (at least where I/Os can be clubbed together).

As you can see, stripe size in Solaris ultimately depends on your workload. My guess is that on any other platform, too, the stripe size depends on your workload and how it accesses the data. A lower stripe size helps smaller I/Os perform better but lacks total throughput efficiency, while a larger stripe size increases throughput efficiency at the cost of the response time of your small I/O requests.
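
To make that tradeoff concrete, here is a toy Python model (the seek overhead, transfer rate, and spindle count are numbers I made up for illustration; this is a sketch of the reasoning, not a measurement):

import math

OVERHEAD_MS = 5.0      # assumed average seek + rotational latency per disk request
XFER_KB_PER_MS = 50.0  # assumed per-spindle sequential transfer rate (~50 MB/s)

def response_ms(io_kb, stripe_kb):
    """Latency of one request: its chunks land on different spindles and
    transfer in parallel (assumes fewer chunks than spindles)."""
    return OVERHEAD_MS + min(io_kb, stripe_kb) / XFER_KB_PER_MS

def disk_ms(io_kb, stripe_kb):
    """Total disk-milliseconds the request consumes across all spindles;
    the lower this is, the more aggregate throughput is left for others."""
    chunks = math.ceil(io_kb / stripe_kb)
    return chunks * OVERHEAD_MS + io_kb / XFER_KB_PER_MS

for stripe in (8, 64):
    print(stripe, response_ms(32, stripe), disk_ms(32, stripe))
# 8K stripe:  ~5.2 ms response, but ~20.6 disk-ms consumed (4 arms busy)
# 64K stripe: ~5.6 ms response, and only ~5.6 disk-ms consumed (1 arm busy)

The small stripe shaves a fraction of a millisecond off the single request, but ties up four disk arms to do it, which is exactly the throughput cost described above.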

Don't forget that many file systems will buffer your I/Os and can club them together if they look sequential from the file system's point of view. Hence in such cases the effective I/O size is what matters for stripe sizing.

If your effective I/O sizes are big, go for a larger stripe size.
If your effective I/O sizes are small and response time is critical, go for a smaller stripe size.
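
A minimal sketch of those two rules, assuming you have already measured your effective I/O size (the helper name and the candidate stripe sizes are hypothetical):

def suggest_stripe_kb(effective_io_kb, latency_critical,
                      choices=(8, 16, 32, 64, 128, 256)):
    """Rule of thumb: smallest stripe for small, latency-critical I/O;
    otherwise the candidate closest to the effective I/O size."""
    if latency_critical and effective_io_kb <= 16:
        return choices[0]
    return min(choices, key=lambda s: abs(s - effective_io_kb))

print(suggest_stripe_kb(8, latency_critical=True))      # 8
print(suggest_stripe_kb(1024, latency_critical=False))  # 256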

Regards,
Jignesh

evgeny gridasov wrote:

Hi Everybody!

I've got a spare machine which is 2xXEON 3.2GHz, 4Gb RAM
14x140Gb SCSI 10k (LSI MegaRaid 320U). It is going into production in 3-5 months.
I do have free time to run tests on this machine, and I could test different
stripe sizes if somebody prepares a test script and data for that.

I could also test different RAID modes 0,1,5 and 10 for this script.

I guess the community needs these results.

On 16 Sep 2005 04:51:43 -0700
"bm\\mbn" <[EMAIL PROTECTED]> wrote:

Hi Everyone

The machine is IBM x345 with ServeRAID 6i 128mb cache and 6 SCSI 15k
disks.

2 disks are in RAID1 and hold the OS, SWAP & pg_xlog
4 disks are in RAID10 and hold the Cluster itself.

the DB will have two major tables, one with 10 million rows and one with
100 million rows.
All the activity against these tables will be SELECTs.

Currently the stripe size is 8K. I read in many places this is a poor
setting.

Am I right?


--
______________________________

Jignesh K. Shah
MTS Software Engineer, MDE - Horizontal Technologies, Sun Microsystems, Inc.
Phone: (781) 442 3052
Email: [EMAIL PROTECTED]
______________________________


