RE: SAME and separating disk and index tablespaces

2003-10-09 Thread Hans de Git
Dave,

during a 'db file sequential read', an index is _not_ accessed sequentially.
An index is not a sequential structure, so reading from an index in order 
will cause multiple seeks on the index itself. And we're talking single user 
here.

regards,
Hans
Reply-To: [EMAIL PROTECTED]
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Date: Wed, 08 Oct 2003 16:29:25 -0800
Great responses ! Thanks very much ..

-Original Message-
Dave Hau
Sent: Wednesday, October 08, 2003 3:19 PM
To: Multiple recipients of list ORACLE-L
Hi Gaja,

I agree that throughput can always be improved by adding more drives to
the striped array.  However, this does not improve access time.  If you
have your tables and indexes on the same striped array, necessarily the
two I/O's have to be done sequentially, incurring two times access time
at a minimum.  However, if you separate the two into different arrays,
then you can access them in parallel, starting to get data from each
disk array in 1* access time.  This makes sense esp. in scenarios where
response time is more important than throughput, and also in use cases
where your access pattern is random rather than sequential.
So I feel that there's a tradeoff between access time and throughput.
If you have ten drives, and you stripe all of them into a single array
and put both your data and indexes onto this array, you get maximum
throughput but you're sacrificing access time for throughput.  However,
if you build two arrays each consisting of five drives, and put your
data and indexes onto each array, you get half of the previous
throughput, but you get better access time because now your data and
index access can be truly in parallel.
Regards,
Dave
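
To make the tradeoff above concrete, here is a rough back-of-envelope
model of Dave's claim (a sketch only: the 5 ms access time and 50 MB/s
per-spindle transfer rate are assumed figures, not measurements from
this thread, and whether the index and table I/Os can really overlap
for a single session is disputed further down the thread):

ACCESS_TIME_MS = 5.0    # assumed seek + rotational latency per random I/O
PER_DISK_MBPS = 50.0    # assumed sustained transfer rate of one spindle
DISKS = 10

# Latency one session sees for an index I/O followed by a table I/O:
serialized_ms = 2 * ACCESS_TIME_MS   # same array: one I/O after the other
overlapped_ms = 1 * ACCESS_TIME_MS   # separate arrays, if the I/Os overlap

# Aggregate sequential bandwidth of the two layouts:
one_ten_disk_array = DISKS * PER_DISK_MBPS            # 500 MB/s
each_five_disk_array = (DISKS // 2) * PER_DISK_MBPS   # 250 MB/s per array

print(f"latency, serialized I/O        : {serialized_ms:.0f} ms")
print(f"latency, overlapped I/O        : {overlapped_ms:.0f} ms")
print(f"throughput, 1 x 10-disk array  : {one_ten_disk_array:.0f} MB/s")
print(f"throughput, each 5-disk array  : {each_five_disk_array:.0f} MB/s")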




[EMAIL PROTECTED] wrote:
 Hi Hans/Vikas,

 I tend to agree that the old draconian rule that thou
 shalt always separate indexes from tables may not
 apply any more. We used to apply that principle in the
 past when the number of available spindles was not
 adequate. Seems like with 256G drives in the market,
 we are being pushed back in time, in some way!!!

 The way I look at the problem is purely from an IOPS
 perspective. For example, if each physical disk is
 capable of 256 IOPS (ignore the cache configured here)
 and you have 10 disks in your volume, then the total
 I/O capacity on this volume is 2560 IOPS. Separation
 of objects across multiple volumes may become an
 issue, only when the demand for I/O outstrips the
 supply (in this case 2560 IOPS).

 Even then, you can always add more drives to the
 existing volume and restripe, i.e., adding 5 more
 drives to 10 drives increases the I/O capacity by 50%.
 At the end of the day, the I/O sub-system does not
 care, whether it is servicing a data segment, index
 segment or undo segment.

 But, in certain environments, that I have dealt with,
 there has been a need to separate heavily and
 concurrently accessed objects (does not matter whether
 these objects are all indexes or tables or both). This
 may be true only for certain objects and certain
 queries. So, please don't apply this in a blanket
 fashion.

 Empirical data is always the best justification
 mechanism for a configuration exercise such as this.
 Plus, you may have partitioning and other requirements
 such as parallelism that impact the placement and
 availability of your data. This in turn will control
 the number of logical volumes that need to be created.

 I think the idea and philosophy behind SAME is noble -
 Use all available drives, so that you do not have
 localized hot-spots. But the implementation of SAME
 and how many volumes you need in your environment, is a
 function of your custom needs based on your system and
 application demands. When you over-simplify something,
 you lose the flexibility. The art factor here (which
 requires some planning) is in achieving a balance
 between simplicity, flexibility, performance,
 manageability and availability.


 Hope that helps,


 Gaja
 --- Hans de Git [EMAIL PROTECTED] wrote:

Vikas,

Spend an hour on reading this usenet thread:




http://groups.google.nl/groups?hl=nllr=ie=UTF-8oe=UTF-8threadm=brjz8
.15%24707.245%40news.oracle.comrnum=1prev=/groups%3Fhl%3Dnl%26lr%3D%26
ie%3DUTF-8%26oe%3DUTF-8%26q%3Drogers%2Bseparate%2Bdata%2Bindex

It will open your eyes about separating data/index.

Still not sure about the redolog stream...Because of
the sequential nature
of redologfiles. I've read  tests that 'prove' it
doesn't matter much
whether you separate your redolog from 'ordinary'
datafiles or not. It does
simplify things when you pure SAME.

Regards,
Hans


Reply-To: [EMAIL PROTECTED]
To: Multiple recipients of list ORACLE-L
[EMAIL PROTECTED]
Date: Wed, 08 Oct 2003 09:54:30 -0800

Thanks Gaja !  Does it also make sense from a
performance perspective
(I/O issues due to concurrent access of index and
data ) to separate
them or is that point moot once you apply the SAME
methodology ?

-Original Message-
Gaja Krishna Vaidyanatha
Sent: Wednesday, 

Re: SAME and separating disk and index tablespaces

2003-10-09 Thread David Hau
Hans,

Your statement is true except in the case of a fast full index scan.
But that's not my point.  What I'm trying to say is:

1.  In scenarios where response time is important, for example when you 
want to obtain the first n rows of a query result as quickly as 
possible, then access time may be as important as throughput.

2.  Adding disks to a striped array only improves throughput, not access 
time.

3.  Access time can be improved by parallel I/O execution on separate 
disk arrays.

I'm trying to point out the difference between:

1.  striping 10 disks into a single array
2.  striping 10 disks into two arrays of 5 disks each
In the first case, you get max throughput but because you only have one 
array, you cannot improve access time by parallelizing disk access.

In the second case, you get half the throughput of the first case, but 
if you can parallelize disk access to both disks, your access time or 
response time to get the 1st row of the query result may be shorter.

Regards,
Dave


[EMAIL PROTECTED] wrote:
Dave,

during a 'db file sequential read', an index is _not_ accessed 
sequentially.
An index is not a sequential structure, so reading from an index in 
order will cause multiple seeks on the index itself. And we're talking 
single user here

regards,
Hans
Reply-To: [EMAIL PROTECTED]
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Date: Wed, 08 Oct 2003 16:29:25 -0800
Great responses ! Thanks very much ..

-Original Message-
Dave Hau
Sent: Wednesday, October 08, 2003 3:19 PM
To: Multiple recipients of list ORACLE-L
Hi Gaja,

I agree that throughput can always be improved by adding more drives to
the striped array.  However, this does not improve access time.  If you
have your tables and indexes on the same striped array, necessarily the
two I/O's have to be done sequentially, incurring two times access time
at a minimum.  However, if you separate the two into different arrays,
then you can access them in parallel, starting to get data from each
disk array in 1* access time.  This makes sense esp. in scenarios where
response time is more important than throughput, and also in use cases
where your access pattern is random rather than sequential.
So I feel that there's a tradeoff between access time and throughput.
If you have ten drives, and you stripe all of them into a single array
and put both your data and indexes onto this array, you get maximum
throughput but you're sacrificing access time for throughput.  However,
if you build two arrays each consisting of five drives, and put your
data and indexes onto each array, you get half of the previous
throughput, but you get better access time because now your data and
index access can be truly in parallel.
Regards,
Dave




[EMAIL PROTECTED] wrote:
  Hi Hans/Vikas,
 
  I tend to agree that the old draconian rule that thou
  shalt always separate indexes from tables may not
  apply any more. We used to apply that principle in the
  past when the number of available spindles was not
  adequate. Seems like with 256G drives in the market,
  we are being pushed back in time, in some way!!!
 
  The way I look at the problem is purely from an IOPS
  perspective. For example, if each physical disk is
  capable of 256 IOPS (ignore the cache configured here)
  and you have 10 disks in your volume, then the total
  I/O capacity on this volume is 2560 IOPS. Separation
  of objects across multiple volumes may become an
  issue, only when the demand for I/O outstrips the
  supply (in this case 2560 IOPS).
 
  Even then, you can always add more drives to the
  existing volume and restripe, i.e., adding 5 more
  drives to 10 drives increases the I/O capacity by 50%.
  At the end of the day, the I/O sub-system does not
  care, whether it is servicing a data segment, index
  segment or undo segment.
 
  But, in certain environments, that I have dealt with,
  there has been a need to separate heavily and
  concurrently accessed objects (does not matter whether
  these objects are all indexes or tables or both). This
  may be true only for certain objects and certain
  queries. So, please don't apply this in a blanket
  fashion.
 
  Empirical data is always the best justification
  mechanism for a configuration exercise such as this.
  Plus, you may have partitioning and other requirements
  such as parallelism that impact the placement and
  availability of your data. This in turn will control
  the number of logical volumes that need to be created.
 
  I think the idea and philosophy behind SAME is noble -
  Use all available drives, so that you do not have
  localized hot-spots. But the implementation of SAME
  and how many volumes you need in your environment, is a
  function of your custom needs based on your system and
  application demands. When you over-simplify something,
  you lose the flexibility. The art factor here (which
  requires some planning) is in achieving a balance
  between simplicity, flexibility, performance,
  

Re: SAME and separating disk and index tablespaces

2003-10-09 Thread David Hau
Isn't this true when the query is a parallel query, i.e. when you're 
doing a parallel index range scan using a partitioned index?  In such 
case, reading the index is the producer operation, and using the rowid 
to retrieve the row from the table is the consumer operation, and the 
data flow should be pipelined (inter-operation parallelism in Oracle 
speak) allowing concurrent access to both the index and the table.

- Dave



[EMAIL PROTECTED] wrote:
  If you have your tables and indexes on the same striped array, 
necessarily the
  two I/O's have to be done sequentially, incurring two times access time
  at a minimum.

This implies that putting the index and table on separate logical drives 
will
allow concurrent access to both the index and the table.

 From the perspective of a single transaction, this is not true.

When an index is read and the resulting rowids are used to retrieve rows 
from a table: these
operations do not occur concurrently.  The index blocks are read, then 
the table blocks.

Separating the table and index I/O to different drives will not double 
the throughput, nor will it halve the access time.

Given N drives, it would seem reasonable to expect the throughput for 
that single transaction
to be faster if those N drives were in a single array, rather than 2 
separate arrays assigned
to two different logical volumes.  This would be the case if the total 
data in the transaction were
larger than the stripe size used if 2 arrays/volumes were used rather 
than 1.

Jared
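
A small sketch of the stripe-width argument above, in Python; the 1 MB
stripe unit and the 8 MB read for one transaction are assumptions
chosen only to show when the single wide array helps that transaction:

import math

STRIPE_UNIT_KB = 1024          # assumed stripe depth per disk (1 MB)

def spindles_engaged(request_kb, disks_in_array):
    """How many spindles one large sequential request can keep busy."""
    return min(math.ceil(request_kb / STRIPE_UNIT_KB), disks_in_array)

request_kb = 8 * 1024          # assume one transaction reads 8 MB in total

print("10-disk array:", spindles_engaged(request_kb, 10), "spindles")  # 8
print(" 5-disk array:", spindles_engaged(request_kb, 5), "spindles")   # 5

# The single wide array only helps this one transaction once its total
# I/O exceeds the full stripe width of the smaller array, which is the
# condition described above.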



Dave Hau [EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
10/08/2003 04:19 PM
Please respond to ORACLE-L

To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
cc:
Subject: Re: SAME and separating disk and index tablespaces



Hi Gaja,

I agree that throughput can always be improved by adding more drives to
the striped array.  However, this does not improve access time.  If you
have your tables and indexes on the same striped array, necessarily the
two I/O's have to be done sequentially, incurring two times access time
at a minimum.  However, if you separate the two into different arrays,
then you can access them in parallel, starting to get data from each
disk array in 1* access time.  This makes sense esp. in scenarios where
response time is more important than throughput, and also in use cases
where your access pattern is random rather than sequential.
So I feel that there's a tradeoff between access time and throughput.
If you have ten drives, and you stripe all of them into a single array
and put both your data and indexes onto this array, you get maximum
throughput but you're sacrificing access time for throughput.  However,
if you build two arrays each consisting of five drives, and put your
data and indexes onto each array, you get half of the previous
throughput, but you get better access time because now your data and
index access can be truly in parallel.
Regards,
Dave




[EMAIL PROTECTED] wrote:
  Hi Hans/Vikas,
 
  I tend to agree that the old draconian rule that thou
  shalt always separate indexes from tables may not
  apply any more. We used to apply that principle in the
  past when the number of available spindles was not
  adequate. Seems like with 256G drives in the market,
  we are being pushed back in time, in some way!!!
 
  The way I look at the problem is purely from an IOPS
  perspective. For example, if each physical disk is
  capable of 256 IOPS (ignore the cache configured here)
  and you have 10 disks in your volume, then the total
  I/O capacity on this volume is 2560 IOPS. Separation
  of objects across multiple volumes may become an
  issue, only when the demand for I/O outstrips the
  supply (in this case 2560 IOPS).
 
  Even then, you can always add more drives to the
  existing volume and restripe, i.e., adding 5 more
  drives to 10 drives increases the I/O capacity by 50%.
  At the end of the day, the I/O sub-system does not
  care, whether it is servicing a data segment, index
  segment or undo segment.
 
  But, in certain environments, that I have dealt with,
  there has been a need to separate heavily and
  concurrently accessed objects (does not matter whether
  these objects are all indexes or tables or both). This
  may be true only for certain objects and certain
  queries. So, please don't apply this in a blanket
  fashion.
 
  Empirical data is always the best justification
  mechanism for a configuration exercise such as this.
  Plus, you may have partitioning and other requirements
  such as parallelism that impact the placement and
  availability of your data. This in turn will control
  the number of logical volumes that need to be created.
 
  I think the idea and philosophy behind SAME is noble -
  Use all available drives, so that you do not have
  localized hot-spots. But the implementation of SAME
  and how many volumes you need in your environment, is a
  function of your custom needs based on your system

Re: SAME and separating disk and index tablespaces

2003-10-09 Thread Nuno Souto
- Original Message - 

 the striped array.  However, this does not improve access time.  If you 
 have your tables and indexes on the same striped array, necessarily the 
 two I/O's have to be done sequentially, incurring two times access time 
 at a minimum.  However, if you separate the two into different arrays, 
 then you can access them in parallel, starting to get data from each 
 disk array in 1* access time. 

YOU can do that.  But Oracle doesn't.  Strictly: first indexes, then data
pointed to by those indexes.  So, no point from the performance perspective in
separating.  From other perspectives, heaps of reasons.  But NOT from the 
performance perspective.


Cheers
Nuno Souto
[EMAIL PROTECTED]


RE: SAME and separating disk and index tablespaces

2003-10-09 Thread Loughmiller, Greg
Maybe we have been lucky. But we use the SAME methodology. We have removed a considerable amount of human effort in regards to layout of datafiles and disk layout. And based on the stats that I have seen from the Storage team/SA's, we have *maybe* seen a 2-5% performance hit in access time. That info needs to be confirmed, but the disk technology has been improved so dramatically that older rules of thumb aren't necessary for *specific* environments. There is still logical separation of tables/indexes as mentioned below, but we typically present a single file system for the datafiles.

Does this work in all cases for us - nope! But it covers a majority of the environments and we address the *unique* environments accordingly. But considering the number of databases and the volume of disk space, it is more effective for us.


just a comment:-)
greg




-Original Message-
From: vikas kawatra [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, October 08, 2003 8:29 PM
To: Multiple recipients of list ORACLE-L
Subject: RE: SAME and separating disk and index tablespaces



Great responses ! Thanks very much ..


-Original Message-
Dave Hau
Sent: Wednesday, October 08, 2003 3:19 PM
To: Multiple recipients of list ORACLE-L


Hi Gaja,


I agree that throughput can always be improved by adding more drives to 
the striped array. However, this does not improve access time. If you 
have your tables and indexes on the same striped array, necessarily the 
two I/O's have to be done sequentially, incurring two times access time 
at a minimum. However, if you separate the two into different arrays, 
then you can access them in parallel, starting to get data from each 
disk array in 1* access time. This makes sense esp. in scenarios where 
response time is more important than throughput, and also in use cases 
where your access pattern is random rather than sequential.


So I feel that there's a tradeoff between access time and throughput. 
If you have ten drives, and you stripe all of them into a single array 
and put both your data and indexes onto this array, you get maximum 
throughput but you're sacrificing access time for throughput. However, 
if you build two arrays each consisting of five drives, and put your 
data and indexes onto each array, you get half of the previous 
throughput, but you get better access time because now your data and 
index access can be truly in parallel.


Regards,
Dave






[EMAIL PROTECTED] wrote:
 Hi Hans/Vikas,
 
 I tend to agree that the old draconian rule that thou
 shalt always separate indexes from tables may not
 apply any more. We used to apply that principle in the
 past when the number of available spindles was not
 adequate. Seems like with 256G drives in the market,
 we are being pushed back in time, in some way!!!
 
 The way I look at the problem is purely from an IOPS
 perspective. For example, if each physical disk is
 capable of 256 IOPS (ignore the cache configured here)
 and you have 10 disks in your volume, then the total
 I/O capacity on this volume is 2560 IOPS. Separation
 of objects across multiple volumes may become an
 issue, only when the demand for I/O outstrips the
 supply (in this case 2560 IOPS).
 
 Even then, you can always add more drives to the
 existing volume and restripe, i.e., adding 5 more
 drives to 10 drives increases the I/O capacity by 50%.
 At the end of the day, the I/O sub-system does not
 care, whether it is servicing a data segment, index
 segment or undo segment. 
 
 But, in certain environments, that I have dealt with,
 there has been a need to separate heavily and
 concurrently accessed objects (does not matter whether
 these objects are all indexes or tables or both). This
 may be true only for certain objects and certain
 queries. So, please don't apply this in a blanket
 fashion. 
 
 Empirical data is always the best justification
 mechanism for a configuration exercise such as this.
 Plus, you may have partitioning and other requirements
 such as parallelism that impact the placement and
 availability of your data. This in turn will control
 the number of logical volumes that need to be created.
 
 I think the idea and philosophy behind SAME is noble -
 Use all available drives, so that you do not have
 localized hot-spots. But the implementation of SAME
 and how many volumes you need in your environment, is a
 function of your custom needs based on your system and
 application demands. When you over-simplify something,
 you lose the flexibility. The art factor here (which
 requires some planning) is in achieving a balance
 between simplicity, flexibility, performance,
 manageability and availability.
 
 
 Hope that helps,
 
 
 Gaja
 --- Hans de Git [EMAIL PROTECTED] wrote:
 
Vikas,

Spend an hour on reading this usenet thread:


 

http://groups.google.nl/groups?hl=nllr=ie=UTF-8oe=UTF-8threadm=brjz8
.15%24707.245%40news.oracle.comrnum=1prev=/groups%3Fhl%3Dnl%26lr%3D%26
ie%3DUTF-8

RE: SAME and separating disk and index tablespaces

2003-10-09 Thread Thater, William
Gaja Krishna Vaidyanatha  scribbled on the wall in glitter crayon:

 Hi Hans/Vikas,
 
 I tend to agree that the old draconian rule that thou
 shalt always separate indexes from tables may not
 apply any more. We used to apply that principle in the
 past when the number of available spindles was not
 adequate. Seems like with 256G drives in the market,
 we are being pushed back in time, in some way!!!

however as was mentioned before, i do separate them for ease of operational
maintenance.  things like backups, rebuilds, and all the other things to
recover from user input.;-)  so call me old fashioned, just don't call me
late for the food.;-)

PS hi Gaja.;-)
--
Bill Shrek Thater ORACLE DBA  
I'm going to work my ticket if I can... -- Gilwell song
[EMAIL PROTECTED]

The grand aim of all science is to cover the greatest number of empirical
facts by logical deduction from the smallest number of hypotheses or axioms.
- Albert Einstein



RE: SAME and separating disk and index tablespaces

2003-10-09 Thread Cary Millsap
Good stuff. Plus, watch this:



If saving dozens of hours of labor cost actually does cost a full 5%
performance penalty on access time, and if reads from disk account for
10% of total response time for a given user action, then saving dozens
of hours of labor cost will actually penalize total response time of
that user action by only half of one percent.
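
The same dilution arithmetic as a two-line calculation (the 5% and 10%
figures are the hypothetical inputs used above):

disk_penalty = 0.05      # 5% slower access time from the simpler layout
disk_share = 0.10        # disk reads are 10% of total response time

print(f"{disk_penalty * disk_share:.1%} of total response time")  # 0.5%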



Probably not a bad tradeoff in many situations. The key is to know what
your situation is. The way to figure that out? Broken record says:

extended SQL trace data (10046 level 8 or 12).





Cary Millsap
Hotsos Enterprises, Ltd.
http://www.hotsos.com

Upcoming events:
- Performance Diagnosis 101: 10/28 Phoenix, 11/19 Sydney
- Hotsos Symposium 2004: March 7-10 Dallas
- Visit www.hotsos.com for schedule details...



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Loughmiller, Greg
Sent: Thursday, October 09, 2003 8:30 AM
To: Multiple recipients of list ORACLE-L
Subject: RE: SAME and separating disk and index tablespaces





Maybe we have been lucky. But we use the SAME methodology. We have
removed a considerable amount of human effort in regards to layout of
datafiles and disk layout. And based on the stats that I have seen from
the Storage team/SA's, we have *maybe* seen a 2-5% performance hit in
access time. That info needs to be confirmed, but the disk technology
has been improved so dramatically that older rules of thumb aren't
necessary for *specific* environments. There is still logical
separation of tables/indexes as mentioned below, but we typically
present a single file system for the datafiles.

Does this work in all cases for us - nope! But it covers a majority of
the environments and we address the *unique* environments accordingly.
But considering the number of databases and the volume of disk space,
it is more effective for us.

just a comment:-)
greg

-Original Message-
From: vikas kawatra [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, October 08, 2003 8:29 PM
To: Multiple recipients of list ORACLE-L
Subject: RE: SAME and separating disk and index tablespaces

Great responses ! Thanks very much ..

-Original Message-
Dave Hau
Sent: Wednesday, October 08, 2003 3:19 PM
To: Multiple recipients of list ORACLE-L

Hi Gaja,

I agree that throughput can always be improved by adding more drives to
the striped array. However, this does not improve access time. If you
have your tables and indexes on the same striped array, necessarily the
two I/O's have to be done sequentially, incurring two times access time
at a minimum. However, if you separate the two into different arrays,
then you can access them in parallel, starting to get data from each
disk array in 1* access time. This makes sense esp. in scenarios where
response time is more important than throughput, and also in use cases
where your access pattern is random rather than sequential.

So I feel that there's a tradeoff between access time and throughput.
If you have ten drives, and you stripe all of them into a single array
and put both your data and indexes onto this array, you get maximum
throughput but you're sacrificing access time for throughput. However,
if you build two arrays each consisting of five drives, and put your
data and indexes onto each array, you get half of the previous
throughput, but you get better access time because now your data and
index access can be truly in parallel.

Regards,
Dave


[EMAIL PROTECTED] wrote:
 Hi Hans/Vikas,

 I tend to agree that the old draconian rule that thou
 shalt always separate indexes from tables may not
 apply any more. We used to apply that principle in the
 past when the number of available spindles was not
 adequate. Seems like with 256G drives in the market,
 we are being pushed back in time, in some way!!!

 The way I look at the problem is purely from an IOPS
 perspective. For example, if each physical disk is
 capable of 256 IOPS (ignore the cache configured here)
 and you have 10 disks in your volume, then the total
 I/O capacity on this volume is 2560 IOPS. Separation
 of objects across multiple volumes may become an
 issue, only when the demand for I/O outstrips the
 supply (in this case 2560 IOPS).

 Even then, you can always add more drives to the
 existing volume and restripe, i.e., adding 5 more
 drives to 10 drives increases the I/O capacity by 50%.
 At the end of the day, the I/O sub-system does not
 care, whether it is servicing a data segment, index
 segment or undo segment.

 But, in certain environments, that I have dealt with,
 there has been a need to separate heavily and
 concurrently accessed objects (does not matter whether
 these objects are all indexes or tables or both). This
 may be true only for certain objects and certain
 queries. So, please don't apply this in a blanket
 fashion.

 Empirical data is always

Re: SAME and separating disk and index tablespaces

2003-10-09 Thread Gaja Krishna Vaidyanatha
David and list,

Some points to keep in mind in our discussion:

* Throughput can be measured both in IOPS and MB/sec;
it depends on the application, whether it is purely
transactional or is just hauling loads of data.

Access time normally is a function of the number of
I/O operations the disk volume can support, the type
of I/O demand (the mixture of random versus sequential
I/O requests on both reads and writes) and the
physical location of the data that is being accessed
(outer tracks versus the inner tracks). Thus IOPS is
very much relevant to our discussion, as statistically
speaking, given the capacity of n IOPS, if x is the
actual number of IOPS being serviced by the volume,
then the lower the value of x, the lower the probability
that access times will be out of whack.

* There is disk technology available today that
supports multiple actuators, which has a definite
positive impact on access times. Again, IOPS has a
play in this too.

* Almost all mirroring technologies that I have
encountered provide for the following 2 important
features:
  - Doubling of IOPS as compared to a non-mirrored
environment (due to double the drives)
  - Intelligent servicing of I/O requests across the
mirrored members (some use the round-robin algorithm,
others use the least-busy algorithm). This is the
single most salient point that should be highlighted
in our discussion. I should have brought this up in my
original posting. Better late than never!
  - Redundancy. This feature is an obvious one, which
again is relevant to our discussion, because I don't
think any of us are going to just implement a purely
striped volume with NO mirroring.

So assuming a mirrored volume (say a 4-way stripe) that
consumes a total of 8 drives, it can be argued (or
even proved) that placing the objects in the volume
based on IOPS will suffice, regardless of whether the
object type is - data, index, undo or temp.

This is because, in a mirrored volume, say on an index
scan, the I/O operation for the index block can be
serviced from one member of the mirrored volume and
subsequently the I/O operation for the data block can
be serviced from the other member of the mirrored
volume. So the cost of seeking different parts of the
same disk, can be avoided. We don't eliminate any
seeks, it just that the seeks that we do perform are
much cheaper, as the amount of head movement is
minimized.
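
A toy model of that round-robin servicing, in Python; the member names
and the strict alternation are assumptions for illustration (real
arrays may use a least-busy policy instead, as noted above):

from itertools import cycle

def service_round_robin(members, reads):
    """Assign each read to a mirror member, round-robin."""
    rr = cycle(members)
    return [(read, next(rr)) for read in reads]

reads = ["index block", "data block", "index block", "data block"]

for read, member in service_round_robin(["mirror A", "mirror B"], reads):
    print(f"{read:<12} -> {member}")

# The total number of seeks is unchanged, but each member services every
# other request, so per-spindle head movement is roughly halved; that is
# the sense in which the seeks become cheaper rather than fewer.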

I think this provides pretty much a similar scenario
as proposed by you of having 2 different volumes with
5 drives each, instead of having 1 volume with 10
drives. Again, I should have originally highlighted
this in our discussion of adding more drives to a
volume. Hope that clarifies things a bit.


Cheers,


Gaja
--- David Hau [EMAIL PROTECTED] wrote:
 Hans,

 Your statement is true except in the case of a fast full index scan.
 But that's not my point.  What I'm trying to say is:

 1.  In scenarios where response time is important, for example when you
 want to obtain the first n rows of a query result as quickly as
 possible, then access time may be as important as throughput.

 2.  Adding disks to a striped array only improves throughput, not
 access time.

 3.  Access time can be improved by parallel I/O execution on separate
 disk arrays.

 I'm trying to point out the difference between:

 1.  striping 10 disks into a single array
 2.  striping 10 disks into two arrays of 5 disks each

 In the first case, you get max throughput but because you only have one
 array, you cannot improve access time by parallelizing disk access.

 In the second case, you get half the throughput of the first case, but
 if you can parallelize disk access to both disks, your access time or
 response time to get the 1st row of the query result may be shorter.

 Regards,
 Dave


 [EMAIL PROTECTED] wrote:
  Dave,

  during a 'db file sequential read', an index is _not_ accessed
  sequentially.
  An index is not a sequential structure, so reading from an index in
  order will cause multiple seeks on the index itself. And we're talking
  single user here

  regards,
  Hans

  Reply-To: [EMAIL PROTECTED]
  To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
  Date: Wed, 08 Oct 2003 16:29:25 -0800

  Great responses ! Thanks very much ..

  -Original Message-
  Dave Hau
  Sent: Wednesday, October 08, 2003 3:19 PM
  To: Multiple recipients of list ORACLE-L

  Hi Gaja,

  I agree that throughput can always be improved by adding more drives to
  the striped array.  However, this does not improve access time.  If you
  have your tables and indexes on the same striped array, necessarily the
  two I/O's have to be done sequentially, incurring two times access time
  at a minimum.  However, if you separate the two into different arrays,
  then you can access them in parallel, starting to get data from each
  disk array in 1* access time.  This makes sense esp. in scenarios where
  response time is more 

Re: SAME and separating disk and index tablespaces

2003-10-09 Thread Paul Drake
Gaja,

No cups in this analogy?

I'm thoroughly disappointed.

I would have liked the idea of a twelve pack that can only provide one can at a time, vs. 2 six packs that could each provide a beer.

Paul

Gaja Krishna Vaidyanatha [EMAIL PROTECTED] wrote:

David and list, I think this provides pretty much a similar scenario as proposed by you of having 2 different volumes with 5 drives each, instead of having 1 volume with 10 drives. Again, I should have originally highlighted this in our discussion of adding more drives to a volume. Hope that clarifies things a bit. Cheers, Gaja

Re: SAME and separating disk and index tablespaces

2003-10-08 Thread Gaja Krishna Vaidyanatha
Vikas,

The answer is an enthusiastic yes. This is purely from
an administrative and manageability standpoint. For
example, if you have INDEX and DATA segments separated
in 2 different tablespaces, the backup of these
tablespaces can be done INDEPENDENTLY. This is
relevant, as if you were to rebuild your indexes using
the NOLOGGING option between 2 backup jobs. If that
were the case, then all you will need to do after the
rebuild is complete, is to backup only the INDX
tablespace.

This is a best practice (if not a requirement) in most
production shops, unless you think you can re-re-build
your indexes in the event of media failure and you
lose your INDX tablespace.


Hope that helps,


Gaja
--- vikas kawatra [EMAIL PROTECTED] wrote:
 Guys,
 
 Does it make sense to separate data and index
 segments into separate
 tablespaces if you create a single logical volume
 and all files are
 striped using the SAME methodology ?
 
 Thanks
 
 vikas
 
 


=
Gaja Krishna Vaidyanatha
Principal Technical Product Manager, 
Application Performance Management, Veritas Corporation
E-mail : [EMAIL PROTECTED]  Phone: (650)-527-3180
Website: http://www.veritas.com



RE: SAME and separating disk and index tablespaces

2003-10-08 Thread vikas kawatra
Thanks Gaja !  Does it also make sense from a performance perspective
(I/O issues due to concurrent access of index and data ) to separate
them or is that point moot once you apply the SAME methodology ? 

-Original Message-
Gaja Krishna Vaidyanatha
Sent: Wednesday, October 08, 2003 9:24 AM
To: Multiple recipients of list ORACLE-L

Vikas,

The answer is an enthusiastic yes. This is purely from
an administrative and manageability standpoint. For
example, if you have INDEX and DATA segments separated
in 2 different tablespaces, the backup of these
tablespaces can be done INDEPENDENTLY. This is
relevant, as if you were to rebuild your indexes using
the NOLOGGING option between 2 backup jobs. If that
were the case, then all you will need to do after the
rebuild is complete, is to backup only the INDX
tablespace.

This is a best practice (if not a requirement) in most
production shops, unless you think you can re-re-build
your indexes in the event of media failure and you
lose your INDX tablespace.


Hope that helps,


Gaja
--- vikas kawatra [EMAIL PROTECTED] wrote:
 Guys,
 
 Does it make sense to separate data and index
 segments into separate
 tablespaces if you create a single logical volume
 and all files are
 striped using the SAME methodology ?
 
 Thanks
 
 vikas
 
 
 


=
Gaja Krishna Vaidyanatha
Principal Technical Product Manager, 
Application Performance Management, Veritas Corporation
E-mail : [EMAIL PROTECTED]  Phone: (650)-527-3180
Website: http://www.veritas.com



RE: SAME and separating disk and index tablespaces

2003-10-08 Thread Hans de Git
Vikas,

Spend an hour on reading this usenet thread:

http://groups.google.nl/groups?hl=nllr=ie=UTF-8oe=UTF-8threadm=brjz8.15%24707.245%40news.oracle.comrnum=1prev=/groups%3Fhl%3Dnl%26lr%3D%26ie%3DUTF-8%26oe%3DUTF-8%26q%3Drogers%2Bseparate%2Bdata%2Bindex

It will open your eyes about separating data/index.

Still not sure about the redolog stream, because of the sequential nature 
of redolog files. I've read tests that 'prove' it doesn't matter much 
whether you separate your redologs from 'ordinary' datafiles or not. It does 
simplify things when you go pure SAME.

Regards,
Hans
Reply-To: [EMAIL PROTECTED]
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Date: Wed, 08 Oct 2003 09:54:30 -0800
Thanks Gaja !  Does it also make sense from a performance perspective
(I/O issues due to concurrent access of index and data ) to separate
them or is that point moot once you apply the SAME methodology ?
-Original Message-
Gaja Krishna Vaidyanatha
Sent: Wednesday, October 08, 2003 9:24 AM
To: Multiple recipients of list ORACLE-L
Vikas,

The answer is an enthusiastic yes. This is purely from
an administrative and manageability standpoint. For
example, if you have INDEX and DATA segments separated
in 2 different tablespaces, the backup of these
tablespaces can be done INDEPENDENTLY. This is
relevant, as if you were to rebuild your indexes using
the NOLOGGING option between 2 backup jobs. If that
were the case, then all you will need to do after the
rebuild is complete, is to backup only the INDX
tablespace.
This is a best practice (if not a requirement) in most
production shops, unless you think you can re-re-build
your indexes in the event of media failure and you
lose your INDX tablespace.
Hope that helps,

Gaja
--- vikas kawatra [EMAIL PROTECTED] wrote:
 Guys,

 Does it make sense to separate data and index
 segments into separate
 tablespaces if you create a single logical volume
 and all files are
 striped using the SAME methodology ?

 Thanks

 vikas



=
Gaja Krishna Vaidyanatha
Principal Technical Product Manager,
Application Performance Management, Veritas Corporation
E-mail : [EMAIL PROTECTED]  Phone: (650)-527-3180
Website: http://www.veritas.com


RE: SAME and separating disk and index tablespaces

2003-10-08 Thread Gaja Krishna Vaidyanatha
Hi Hans/Vikas,

I tend to agree that the old draconian rule that thou
shalt always separate indexes from tables may not
apply any more. We used to apply that principle in the
past when the number of available spindles was not
adequate. Seems like with 256G drives in the market,
we are being pushed back in time, in some way!!!

The way I look at the problem is purely from an IOPS
perspective. For example, if each physical disk is
capable of 256 IOPS (ignore the cache configured here)
and you have 10 disks in your volume, then the total
I/O capacity on this volume is 2560 IOPS. Separation
of objects across multiple volumes may become an
issue, only when the demand for I/O outstrips the
supply (in this case 2560 IOPS).

Even then, you can always add more drives to the
existing volume and restripe, i.e., adding 5 more
drives to 10 drives increases the I/O capacity by 50%.
At the end of the day, the I/O sub-system does not
care, whether it is servicing a data segment, index
segment or undo segment. 
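
The supply-versus-demand arithmetic from the two paragraphs above, as a
quick sketch (256 IOPS per spindle is the example figure used here; the
1,800 IOPS demand is an assumed workload, and cache is ignored):

IOPS_PER_DISK = 256
disks_in_volume = 10

supply = IOPS_PER_DISK * disks_in_volume     # 2560 IOPS
demand = 1800                                # assumed workload demand

print(f"volume capacity            : {supply} IOPS")
print(f"separation worth a look    : {demand > supply}")

# Restriping 5 more drives into the volume raises capacity by 50%:
print(f"capacity after restripe    : {IOPS_PER_DISK * (disks_in_volume + 5)} IOPS")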

But, in certain environments, that I have dealt with,
there has been a need to separate heavily and
concurrently accessed objects (does not matter whether
these objects are all indexes or tables or both). This
may be true only for certain objects and certain
queries. So, please don't apply this in a blanket
fashion. 

Empirical data is always the best justification
mechanism for a configuration exercise such as this.
Plus, you may have partitioning and other requirements
such as parallelism that impact the placement and
availability of your data. This in turn will control
the number of logical volumes that need to be created.

I think the idea and philosophy behind SAME is noble -
Use all available drives, so that you do not have
localized hot-spots. But the implementation of SAME
and how many volumes you need in your environment, is a
function of your custom needs based on your system and
application demands. When you over-simplify something,
you lose the flexibility. The art factor here (which
requires some planning) is in achieving a balance
between simplicity, flexibility, performance,
manageability and availability.


Hope that helps,


Gaja
--- Hans de Git [EMAIL PROTECTED] wrote:
 Vikas,
 
 Spend an hour on reading this usenet thread:
 

http://groups.google.nl/groups?hl=nllr=ie=UTF-8oe=UTF-8threadm=brjz8.15%24707.245%40news.oracle.comrnum=1prev=/groups%3Fhl%3Dnl%26lr%3D%26ie%3DUTF-8%26oe%3DUTF-8%26q%3Drogers%2Bseparate%2Bdata%2Bindex
 
 It will open your eyes about separating data/index.
 
 Still not sure about the redolog stream...Because of
 the sequential nature 
 of redologfiles. I've read  tests that 'prove' it
 doesn't matter much 
 whether you separate your redolog from 'ordinary'
 datafiles or not. It does 
 simplify things when you pure SAME.
 
 Regards,
 Hans
 
 
 Reply-To: [EMAIL PROTECTED]
 To: Multiple recipients of list ORACLE-L
 [EMAIL PROTECTED]
 Date: Wed, 08 Oct 2003 09:54:30 -0800
 
 Thanks Gaja !  Does it also make sense from a
 performance perspective
 (I/O issues due to concurrent access of index and
 data ) to separate
 them or is that point moot once you apply the SAME
 methodology ?
 
 -Original Message-
 Gaja Krishna Vaidyanatha
 Sent: Wednesday, October 08, 2003 9:24 AM
 To: Multiple recipients of list ORACLE-L
 
 Vikas,
 
 The answer is an enthusiastic yes. This is purely
 from
 an administrative and manageability standpoint. For
 example, if you have INDEX and DATA segments
 separated
 in 2 different tablespaces, the backup of these
 tablespaces can be done INDEPENDENTLY. This is
 relevant, as if you were to rebuild your indexes
 using
 the NOLOGGING option between 2 backup jobs. If that
 were the case, then all you will need to do after
 the
 rebuild is complete, is to backup only the INDX
 tablespace.
 
 This is a best practice (if not a requirement) in
 most
 production shops, unless you think you can
 re-re-build
 your indexes in the event of media failure and you
 lose your INDX tablespace.
 
 
 Hope that helps,
 
 
 Gaja
 --- vikas kawatra [EMAIL PROTECTED] wrote:
   Guys,
  
   Does it make sense to separate data and index
   segments into separate
   tablespaces if you create a single logical volume
   and all files are
   striped using the SAME methodology ?
  
   Thanks
  
   vikas
  
  
  
 
 
 =
 Gaja Krishna Vaidyanatha
 Principal Technical Product Manager,
 Application Performance Management, Veritas
 Corporation
 E-mail : [EMAIL PROTECTED]  Phone: (650)-527-3180
 Website: http://www.veritas.com
 

RE: SAME and separating disk and index tablespaces

2003-10-08 Thread Cary Millsap
I completely agree with Howard about one thing: indexes and data don't
necessarily compete against each other for I/O capacity.

So the following is *not* true: You should separate indexes and data
into different tablespaces because they compete so strenuously for I/O
capacity.

However, the following *is* true: You should separate indexes and data
into different tablespaces because there are several legitimate reasons
to do so.

http://www.hotsos.com/dnloads/0.Millsap1995.09.24-OFA.pdf for more
information.


Cary Millsap
Hotsos Enterprises, Ltd.
http://www.hotsos.com

Upcoming events:
- Performance Diagnosis 101: 10/28 Phoenix, 11/19 Sydney
- Hotsos Symposium 2004: March 7-10 Dallas
- Visit www.hotsos.com for schedule details...


-Original Message-
Hans de Git
Sent: Wednesday, October 08, 2003 2:50 PM
To: Multiple recipients of list ORACLE-L

Vikas,

Spend an hour on reading this usenet thread:

http://groups.google.nl/groups?hl=nllr=ie=UTF-8oe=UTF-8threadm=brjz8
.15%24707.245%40news.oracle.comrnum=1prev=/groups%3Fhl%3Dnl%26lr%3D%26
ie%3DUTF-8%26oe%3DUTF-8%26q%3Drogers%2Bseparate%2Bdata%2Bindex

It will open your eyes about separating data/index.

Still not sure about the redolog stream...Because of the sequential
nature 
of redologfiles. I've read  tests that 'prove' it doesn't matter much 
whether you separate your redolog from 'ordinary' datafiles or not. It
does 
simplify things when you pure SAME.

Regards,
Hans


Reply-To: [EMAIL PROTECTED]
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Date: Wed, 08 Oct 2003 09:54:30 -0800

Thanks Gaja !  Does it also make sense from a performance perspective
(I/O issues due to concurrent access of index and data ) to separate
them or is that point moot once you apply the SAME methodology ?

-Original Message-
Gaja Krishna Vaidyanatha
Sent: Wednesday, October 08, 2003 9:24 AM
To: Multiple recipients of list ORACLE-L

Vikas,

The answer is an enthusiastic yes. This is purely from
an administrative and manageability standpoint. For
example, if you have INDEX and DATA segments separated
in 2 different tablespaces, the backup of these
tablespaces can be done INDEPENDENTLY. This is
relevant, as if you were to rebuild your indexes using
the NOLOGGING option between 2 backup jobs. If that
were the case, then all you will need to do after the
rebuild is complete, is to backup only the INDX
tablespace.

This is a best practice (if not a requirement) in most
production shops, unless you think you can re-re-build
your indexes in the event of media failure and you
lose your INDX tablespace.


Hope that helps,


Gaja
--- vikas kawatra [EMAIL PROTECTED] wrote:
  Guys,
 
  Does it make sense to separate data and index
  segments into separate
  tablespaces if you create a single logical volume
  and all files are
  striped using the SAME methodology ?
 
  Thanks
 
  vikas
 
 
 


=
Gaja Krishna Vaidyanatha
Principal Technical Product Manager,
Application Performance Management, Veritas Corporation
E-mail : [EMAIL PROTECTED]  Phone: (650)-527-3180
Website: http://www.veritas.com




Re: SAME and separating disk and index tablespaces

2003-10-08 Thread Dave Hau
Hi Gaja,

I agree that throughput can always be improved by adding more drives to 
the striped array.  However, this does not improve access time.  If you 
have your tables and indexes on the same striped array, necessarily the 
two I/O's have to be done sequentially, incurring two times access time 
at a minimum.  However, if you separate the two into different arrays, 
then you can access them in parallel, starting to get data from each 
disk array in 1* access time.  This makes sense esp. in scenarios where 
response time is more important than throughput, and also in use cases 
where your access pattern is random rather than sequential.

So I feel that there's a tradeoff between access time and throughput. 
If you have ten drives, and you stripe all of them into a single array 
and put both your data and indexes onto this array, you get maximum 
throughput but you're sacrificing access time for throughput.  However, 
if you build two arrays each consisting of five drives, and put your 
data and indexes onto each array, you get half of the previous 
throughput, but you get better access time because now your data and 
index access can be truly in parallel.

Regards,
Dave




[EMAIL PROTECTED] wrote:
Hi Hans/Vikas,

I tend to agree that the old draconian rule that thou
shalt always separate indexes from tables may not
apply any more. We used to apply that principle in the
past when the number of available spindles was not
adequate. Seems like with 256G drives in the market,
we are being pushed back in time, in some way!!!
The way I look at the problem is purely from an IOPS
perspective. For example, if each physical disk is
capable of 256 IOPS (ignore the cache configured here)
and you have 10 disks in your volume, then the total
I/O capacity on this volume is 2560 IOPS. Separation
of objects across multiple volumes may become an
issue, only when the demand for I/O outstrips the
supply (in this case 2560 IOPS).
Even then, you can always add more drives to the
existing volume and restripe, i.e., adding 5 more
drives to 10 drives increases the I/O capacity by 50%.
At the end of the day, the I/O sub-system does not
care, whether it is servicing a data segment, index
segment or undo segment. 

But, in certain environments, that I have dealt with,
there has been a need to separate heavily and
concurrently accessed objects (does not matter whether
these objects are all indexes or tables or both). This
may be true only for certain objects and certain
queries. So, please don't apply this in a blanket
fashion. 

Empirical data is always the best justification
mechanism for a configuration exercise such as this.
Plus, you may have partitioning and other requirements
such as parallelism that impact the placement and
availability of your data. This in turn will control
the number of logical volumes that need to be created.
I think the idea and philosophy behind SAME is noble -
Use all available drives, so that you do not have
localized hot-spots. But the implementation of SAME
and how many volumes you need in your environment, is a
function of your custom needs based on your system and
application demands. When you over-simplify something,
you lose the flexibility. The art factor here (which
requires some planning) is in achieving a balance
between simplicity, flexibility, performance,
manageability and availability.
Hope that helps,

Gaja
--- Hans de Git [EMAIL PROTECTED] wrote:
Vikas,

Spend an hour on reading this usenet thread:


http://groups.google.nl/groups?hl=nllr=ie=UTF-8oe=UTF-8threadm=brjz8.15%24707.245%40news.oracle.comrnum=1prev=/groups%3Fhl%3Dnl%26lr%3D%26ie%3DUTF-8%26oe%3DUTF-8%26q%3Drogers%2Bseparate%2Bdata%2Bindex

It will open your eyes about separating data/index.

Still not sure about the redo log stream, because of
the sequential nature
of redo log files. I've read tests that 'prove' it
doesn't matter much
whether you separate your redo logs from 'ordinary'
datafiles or not. It does
simplify things when you go pure SAME.

Regards,
Hans
Reply-To: [EMAIL PROTECTED]
To: Multiple recipients of list ORACLE-L
[EMAIL PROTECTED]
Date: Wed, 08 Oct 2003 09:54:30 -0800
Thanks Gaja !  Does it also make sense from a
performance perspective
(I/O issues due to concurrent access of index and
data ) to separate
them or is that point moot once you apply the SAME
methodology ?
-Original Message-
Gaja Krishna Vaidyanatha
Sent: Wednesday, October 08, 2003 9:24 AM
To: Multiple recipients of list ORACLE-L
Vikas,

The answer is an enthusiastic yes. This is purely
from
an administrative and manageability standpoint. For
example, if you have INDEX and DATA segments
separated
in 2 different tablespaces, the backup of these
tablespaces can be done INDEPENDENTLY. This is
relevant if, for example, you rebuild your indexes
using the NOLOGGING option between 2 backup jobs.
In that case, all you need to do after the rebuild
is complete is back up only the INDX tablespace.
This is a best practice (if not 
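
A sketch of the bookkeeping behind Gaja's point above (the segment and
tablespace names are hypothetical, purely to illustrate why the separation
helps):

# After NOLOGGING (unrecoverable) operations, only the tablespaces that were
# touched need a fresh backup. With indexes in their own tablespace, an
# index rebuild leaves the DATA tablespace's backups untouched.
segments = {
    "EMP":      "DATA",
    "EMP_PK":   "INDX",
    "EMP_IDX1": "INDX",
}
rebuilt_nologging = ["EMP_PK", "EMP_IDX1"]

needs_fresh_backup = {segments[s] for s in rebuilt_nologging}
print(needs_fresh_backup)   # {'INDX'}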

RE: SAME and separating disk and index tablespaces

2003-10-08 Thread vikas kawatra
Great responses ! Thanks very much ..

-Original Message-
Dave Hau
Sent: Wednesday, October 08, 2003 3:19 PM
To: Multiple recipients of list ORACLE-L


Re: SAME and separating disk and index tablespaces

2003-10-08 Thread Jared . Still

 If you have your tables and indexes on the same striped array, necessarily the 
 two I/O's have to be done sequentially, incurring two times access time 
 at a minimum.

This implies that putting the index and table on separate logical drives will
allow concurrent access to both the index and the table.

From the perspective of a single transaction, this is not true.

When an index is read and the resulting rowids are used to retrieve rows from a table, these
operations do not occur concurrently: the index blocks are read first, then the table blocks.

Separating the table and index I/O onto different drives will not double the throughput or
halve the access time.

Given N drives, it seems reasonable to expect the throughput for that single transaction
to be higher if those N drives were in a single array, rather than in 2 separate arrays assigned
to two different logical volumes. This would be the case if the total data in the transaction were
larger than the stripe size that would be used with 2 arrays/volumes rather than 1.
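
A small sketch of that point (the seek time is an assumed figure, purely for
illustration):

SEEK_MS = 8   # assumed random access time per I/O (ms)

def lookup_latency(index_reads, table_reads):
    # For a single indexed lookup the index blocks must be read before the
    # rowids are known, so the two phases are serialized: index, then table.
    return (index_reads + table_reads) * SEEK_MS

# One 10-drive array or two 5-drive arrays: the single transaction still
# pays for every seek in sequence.
print(lookup_latency(index_reads=3, table_reads=1))   # 32 ms either way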


Jared