And who cares, if you're not waiting on the events associated with it?
Big or small, few or many - it doesn't matter if you're not waiting on them.
Long live the wait interface, which can focus our attention on the bottleneck
(as they say at the breweries).
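The wait-interface point can be made concrete with a query along these lines. This is a minimal sketch against v$system_event; the idle-event filter is illustrative, not exhaustive, and TIME_WAITED is in centiseconds on most versions:

```sql
-- Top timed waits, system-wide, excluding a few common idle events.
SELECT event, total_waits, time_waited, average_wait
  FROM v$system_event
 WHERE event NOT LIKE 'SQL*Net%'
   AND event NOT IN ('rdbms ipc message', 'pmon timer', 'smon timer')
 ORDER BY time_waited DESC;
```

If checkpoint work driven by the number of datafiles were really hurting, you would expect it to surface near the top of this list, rather than having to infer it from file counts alone.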
Christopher Spence wrote:
I beg to differ with this person's assessment.
The number of datafiles hardly affects performance in any noticeable manner. This
can be seen at checkpoint, where a high count of datafiles checkpoints
with efficiency similar to a low count.
Granted, having 1000 128k datafiles wouldn't be efficient.
Christopher,
It is not *only* the number of datafiles.
I agree 100% with what you're saying, but I think saying "it is really bad to
drop tables randomly from a production environment" would be a very similar
assessment. If you're doing a checkpoint every 15 seconds with 1,000,000
datafiles over 3,500 tablespaces with 10k redo logs, well, perhaps the
problem lies elsewhere.
All,
A while back, on the other list, there was a discussion regarding the
number of data files that an Oracle database might contain.
Someone posted a warning that too many data files might actually hurt
performance in that it might inflate the amount of work to be done at SCN
update time.
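As a starting point for examining this, the sketch below (assuming access to v$datafile and v$sysstat; statistic names vary slightly across versions) pulls the two inputs you would want to correlate: how many file headers each checkpoint has to touch, and how often checkpoints occur:

```sql
-- Number of datafile headers updated at each checkpoint.
SELECT COUNT(*) AS datafile_count FROM v$datafile;

-- Checkpoint frequency counters since instance startup.
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('DBWR checkpoints',
                'background checkpoints started',
                'background checkpoints completed');
```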
Hi !
How would I test this? What wait value might be too high, indicating that
this is taking too long?
How many is too many?
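One way to test it, in the spirit of the wait-interface reply above: watch the events most directly tied to checkpoint and file-header activity, and see whether their share of total wait time grows as files are added. A sketch; the event names below are typical but version-dependent:

```sql
-- Waits plausibly tied to checkpoint / file-header maintenance.
SELECT event, total_waits, time_waited, average_wait
  FROM v$system_event
 WHERE event IN ('control file parallel write',
                 'control file sequential read',
                 'db file parallel write',
                 'log file switch (checkpoint incomplete)')
 ORDER BY time_waited DESC;
```

If these stay negligible relative to the rest of the workload, the number of datafiles is probably not "too many" for that instance.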