Stripping??? I hope you meant striping, otherwise this discussion could be
taking a VERY unusual turn!
Paul
-Original Message-
Sent: 06 March 2002 01:38
To: Multiple recipients of list ORACLE-L
Kevin, let me introduce you to the world of stripping. Course, if
you are on old hardware
We use 4Gb datafiles here as the norm without any problems at all and those
datafiles are all backed up with Legato. No problems whatsoever.
Lee
-Original Message-
Sent: 05 March 2002 03:33
To: Multiple recipients of list ORACLE-L
That being said, is there anything wrong with having
I use 10GB datafiles for a 1TB DB and also back up using Legato. Thinking about using
RMAN :)
[EMAIL PROTECTED] 03/05/02 03:18AM
We use 4Gb datafiles here as the norm without any problems at all and those
datafiles are all backed up with Legato. No problems whatsoever.
Lee
-Original Message-
Hi Kathy,
The only thing I can think of for your original question is a bad guess on
the file size. They guessed 500 MB, ran out of file space, added a 50 MB file
to the tablespace, ran out again, added 50 MB again, and never ran out again.
Reading other dba's minds is so much fun :-)
John
Not using the RBS tablespace as the tablespace of discussion because it
has special requirements and can create a lot of discussion.
I can foresee a reason for using multiple datafiles in a tablespace.
Let's say that you have a large table that contains information based on
dates. You load the
Do you know of a web site where I could research that particular thought?
I have not seen anything like that in my research. Thanks.
-Original Message-
[EMAIL PROTECTED]
Sent: Monday, March 04, 2002 8:53 PM
To: Multiple recipients of list ORACLE-L
In a UNIX system it is better to have
From: kirti.deshpande@verizon.com
To: Multiple recipients of list ORACLE-L
Subject: RE: # of datafiles per tablespace
FWIW, there is a strong case for keeping consistent datafile sizes, similar
to the argument for extent sizes. This makes for easier file moves from
hot to not-so-hot disks, or for copying the database to a new system. And
segment extent size should be kept in mind, i.e., you don't want to be
Using SQL-BackTrack for backups, you are able to execute backups of multiple
datafiles in parallel. Therefore, it will be faster to back up four 1GB files
rather than one 4GB file if you have the necessary hardware in place.
I had the opportunity to work with a very good sys
admin. We used raw on an EMC Sym and managed it all
with Veritas. We both decided to keep our datafiles
no bigger than 1GB regardless of TS size and at least
4 datafiles per TS. We used 36GB drives in our Sym,
each divided into 9GB LUNs. Our
Ah, but we use partitioning. However, the design you described is
slightly flawed, methinks. I had to do something similar at the last
job, and what we did is have a separate tablespace for each month, which
in turn produces a separate datafile, of course. Not that there was
anything wrong in
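The per-month layout described above can be sketched in Oracle DDL. All tablespace, datafile, table, and column names here are hypothetical, just to show one tablespace (and hence one datafile) per month, with each partition placed in its own tablespace:

```sql
-- Hypothetical per-month tablespaces, one datafile each
CREATE TABLESPACE sales_2002_01
  DATAFILE '/u02/oradata/PROD/sales_2002_01.dbf' SIZE 2000M;

CREATE TABLESPACE sales_2002_02
  DATAFILE '/u03/oradata/PROD/sales_2002_02.dbf' SIZE 2000M;

-- With partitioning, each month's partition lands in its own tablespace
CREATE TABLE sales (
  sale_date  DATE,
  amount     NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION p_2002_01 VALUES LESS THAN (TO_DATE('2002-02-01','YYYY-MM-DD'))
    TABLESPACE sales_2002_01,
  PARTITION p_2002_02 VALUES LESS THAN (TO_DATE('2002-03-01','YYYY-MM-DD'))
    TABLESPACE sales_2002_02
);
```

Dropping a month then becomes a cheap `ALTER TABLE ... DROP PARTITION` plus dropping the tablespace, rather than a large delete.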
Hi Kim,
Ahmed Alomari (Oracle8i and Unix Performance Tuning, pp. 142-143) discusses this
in more detail. However, I don't think this is as relevant with async
I/O and multiple DB writers.
But this is a serious issue in older versions (Oracle 7 and below) because
the number of async I/O threads
I am sure it's been said in the notes I have not read yet, but my
biggest reason for having multiple files is to have multiple drives.
Each file on a different drive means that access to the files can be
spread out. Therefore you can have multiple processes accessing the files
at the
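A sketch of that layout in Oracle DDL, with one datafile per mount point so each file can sit on a separate drive (the tablespace name, paths, and sizes are all hypothetical):

```sql
-- Hypothetical: four datafiles, one per mount point / drive,
-- so concurrent access can be spread across spindles
CREATE TABLESPACE app_data
  DATAFILE '/u02/oradata/PROD/app_data01.dbf' SIZE 1000M,
           '/u03/oradata/PROD/app_data02.dbf' SIZE 1000M,
           '/u04/oradata/PROD/app_data03.dbf' SIZE 1000M,
           '/u05/oradata/PROD/app_data04.dbf' SIZE 1000M;
```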
To: Multiple recipients of list ORACLE-L
Subject: Re: # of datafiles per tablespace
no reason. I can see creating multiple files under those conditions
only because you want to keep files to a specific size.
Now, I did once find that the rollback datafiles were a bottleneck on a
system I had. So we
Well, I have a slightly different way of approaching file sizing. Here we
have Hitachi storage arrays on a FIDI setup. We stripe several drives,
RAID, and get quite good performance.
I do NOT limit datafiles to any particular size (in production). Why?
Because I want to eliminate, as much as
Kevin, let me introduce you to the world of stripping. Course, if
you are on old hardware that really isn't like it is today.
-Original Message-
Sent: Tuesday, March 05, 2002 7:09 AM
To: Multiple recipients of list ORACLE-L
I am sure its been said in the notes I have not read yet
Sweet. Sounds like a solid setup to me.
-Original Message-
Robert
Sent: Tuesday, March 05, 2002 10:43 AM
To: Multiple recipients of list ORACLE-L
Well, I have a slightly different way of approaching file sizing. Here we
have Hitachi storage arrays on a FIDI setup. We stripe several
On Tue, 5 Mar 2002, Kimberly Smith wrote:
Kevin, let me introduce you to the world of stripping. Course, if
you are on old hardware that really isn't like it is today.
I was unaware of the concurrent data access benefits of striping. I
did know about certain things being spread out, but
Fair point,
but isn't async I/O limited to raw devices only? If you're not using raw, and many
companies don't, you can still face contention issues.
Also, enabling more DBWR processes brings the overhead of more background
processes; I feel that multiple files per tablespace is a workable
compromise.
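For reference, the knobs being weighed above are init.ora parameters. This is a hypothetical fragment using 8i-era parameter names, not a recommendation for specific values:

```
# Hypothetical init.ora fragment (Oracle 8i-era parameter names)
db_writer_processes = 4    # multiple DBWn background processes
disk_asynch_io = true      # use OS async I/O where the platform supports it
# On platforms without usable async I/O, I/O slaves are the alternative:
# dbwr_io_slaves = 4
```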
--
OK, I know we had the debate already, but let's have another go at it.
Say you've got a tablespace, let's call it RBS, and it's for rollbacks.
Now, for what reason would you create a 500M file and four 50M files
for this puppy as opposed to just one file? I just cannot see the reasoning
for this at all.
no reason. I can see creating multiple files under those conditions
only because you want to keep files to a specific size.
Now, I did once find that the rollback datafiles were a bottleneck on a
system I had. So we built TWO rollback tablespaces, with datafiles on
different mount points etc and
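The two-rollback-tablespace fix described above might look like this in DDL, with the rollback segments split between tablespaces whose datafiles sit on different mount points (all names, paths, and storage sizes hypothetical):

```sql
-- Hypothetical: two rollback tablespaces on different mount points
CREATE TABLESPACE rbs1
  DATAFILE '/u02/oradata/PROD/rbs01.dbf' SIZE 500M;
CREATE TABLESPACE rbs2
  DATAFILE '/u05/oradata/PROD/rbs02.dbf' SIZE 500M;

-- Alternate the rollback segments between the two tablespaces
CREATE ROLLBACK SEGMENT r01 TABLESPACE rbs1
  STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 20);
CREATE ROLLBACK SEGMENT r02 TABLESPACE rbs2
  STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 20);
```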
That being said, is there anything wrong with having one 4GB data
file for a tablespace? I personally cannot think of anything. There
were the days when 2GB was the limit, but that sure isn't the case
anymore.
The only thing I can think of is for backups. However, I am always
going to back up on at
Other than I/O load balancing.. I can't see any other reason.
But again, why those tiny 50MB files?
Are these on the same disk? I hope not..
If there are no I/O bottleneck issues, I would build just one 700MB file.
And then monitor how it works out..
- Kirti
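One way to do the monitoring suggested above is to watch per-file I/O counters in V$FILESTAT joined to DBA_DATA_FILES. A sketch, to be run as a DBA user:

```sql
-- Physical reads and writes per datafile, busiest writers first
SELECT f.file_name, s.phyrds, s.phywrts
  FROM v$filestat s, dba_data_files f
 WHERE s.file# = f.file_id
 ORDER BY s.phywrts DESC;
```

If one file dominates the write counts, that is the point to consider splitting it or moving it to a quieter disk.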
-Original Message-
In a UNIX system it is better to have more small datafiles than a few
or one large datafile: the reason is that UNIX acquires an exclusive file
write lock, and therefore if you use multiple files you will avoid a
situation where multiple simultaneous writes to data files become
serialized and