Friends,
I'd like to start a debate, which perhaps has already taken place, but
if so I don't recall it: Should we stop analyzing tables and indexes?
Let me clarify:
I've always told people that using the 'monitoring' option (alter table
X monitoring in 8i, plus alter index I monitoring in
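The fragment cuts off, but as a sketch of the monitoring/stale-stats approach being described (object and schema names here are placeholders, not from the thread):

```sql
-- Flag a table so Oracle tracks its DML counts in *_tab_modifications
ALTER TABLE emp MONITORING;

-- Then re-gather stats only for objects the tracking marks as stale
BEGIN
  dbms_stats.gather_schema_stats(
    ownname => 'SCOTT',
    options => 'GATHER STALE',
    cascade => TRUE);
END;
/
```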
Jared,
Point taken. I should do some testing instead of publishing an opinion. I
still do not like the construction, but that's a matter of
taste.
I have done some testing as well, because I think you were somehow
comparing apples and oranges: function a uses an implicit cursor, whereas
function b
Analyzing over and over again might make your system unstable, because
the optimizer each time might choose a different approach. But. If you
never update/delete your data after the initial load, including the initial
analyze, performance will be consistent, and no surprises will hurt you.
Instead of
Hi Mogens,
Ok, fun topic! Here is my take:
1 - Frequency of re-analyze
- It astonishes me how many shops prohibit any un-approved production
changes, yet re-analyze schema stats weekly, acting surprised when things
change!
- I agree, most shops do not have to do this, and I agree with Dave;
There are times when running a test harness
through a single pl/sql is going to give you
a spurious result, because the extra pinning
(of data blocks and library cache material)
may confuse the issue.
Technically, if the implicit code and the explicit
code were written to do exactly the same
I have recently installed Standard Edition 9.2 on AIX 5.2 and noticed that the
undotbs01.dbf file just keeps on growing. It is now over 1 GB.
What could be the reason for this? Can I limit its size, and would this
cause a problem too?
John
--
Please see the official ORACLE-L FAQ:
Mogens,
We've been in the same situation here where analyzing was turned off to stop
problems occurring (partly because of Oracle 7 and the fact that histograms
were created at a second stage so if the analyze failed part way through the
histograms were lost).
Although the data does not change
- Original Message -
I'd like to start a debate, which perhaps has already taken place, but
if so I don't recall it: Should we stop analyzing tables and indexes?
As a regular thing, yes. Unless there is a clear case for doing
it often: highly variable tables. And even then, I want
It's just like index rebuilding.
Too many people do it too aggressively, too often
and waste their time and the machine resources
doing it for very little benefit. But if you have the
time and resources, then it doesn't often do too
much damage.
However, there are cases where you really do need
Mogens, if you are looking for a poster boy ...
We analyze 9 production databases ... *every day*.
Raj
Rajendra dot Jamadagni at nospamespn dot com
All Views expressed in this email are strictly personal.
QOTD: Any
Subject: Backing out of 9.2.0.4 upgrade
We are about to upgrade from Oracle 9.2.0.3 to 9.2.0.4.
Our applications have been tested on a 9204 test database without any problems.
However, it would be nice to have a contingency plan in case we get problems after the upgrade. Obviously, a back up
On one of our OLTP databases (designed in the dark ages, made-for-rbo database
design), we have seen time and again that if we skip statistics collection for a day,
queries go south. So, reluctantly, we have to analyze (a 10% estimate keeps the
developer/CBO/query trio happy).
Raj
I have you beat: one schema in one of our databases (9.2.0.2) is
analyzed every 4 hours. Not mine, and I *will* be talking to the DBA
about his reasoning...
however Jonathan's point may well be the reason. This is an
ever-growing database, frequent insert and updates, and sequences are
used
I know you can hint a fast full scan. I have run into cases lately where depending on
circumstances Oracle will use an index, but use a sub-optimal type of index scan with
dramatic differences in performance.
This is on 9.2. Any hints for forcing an 'index range scan'. Anything stronger than
I know about the limit clause. I just want to keep someone else from bringing down an
instance.
I think I'll get a taser and fry the next person who does it. :)
From: zhu chao [EMAIL PROTECTED]
Date: 2003/12/29 Mon PM 10:34:24 EST
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
put in a between clause in where clause on appropriate columns for a range scan.
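A sketch of that suggestion (table, index, and column names are invented): the INDEX hint keeps the CBO away from a fast full scan of that index, and a bounded predicate gives it a range to walk:

```sql
SELECT /*+ INDEX(t emp_hiredate_ix) */
       empno, hiredate
FROM   emp t
WHERE  hiredate BETWEEN DATE '2003-01-01' AND DATE '2003-12-31';
```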
Raj
Mogens Nørgaard scribbled on the wall in glitter crayon:
I'd like to know what practical and philosophical ideas you guys have
on this topic.
I think a lot of this depends on the optimizer. I know the cost-based one
has improved dramatically since it was introduced. And I thought that the
however Jonathan's point may well be the reason. This is an
ever-growing database, frequent insert and updates, and sequences are
used throughout.
Analyze with estimate, at least.
Or update HIVAL and DISTCNT and ROWCNT statistics using dbms_stats
regularly...
Tanel.
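For anyone wanting to try Tanel's suggestion, a rough sketch using the dbms_stats set-calls (owner/table/column names are placeholders and the numbers are illustrative only):

```sql
BEGIN
  dbms_stats.set_table_stats(
    ownname => 'SCOTT', tabname => 'EMP',
    numrows => 100000, numblks => 2000);
  dbms_stats.set_column_stats(
    ownname => 'SCOTT', tabname => 'EMP', colname => 'EMPNO',
    distcnt => 100000, density => 0.00001);
END;
/
```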
You can have range scan with equality search (=) as well, if your index is
non-unique and there is no unique constraint on the column.
Tanel.
- Original Message -
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Sent: Tuesday, December 30, 2003 3:24 PM
put in a between clause
we are using dbms_stats, gather auto, for all indexed columns and
estimate 15%
Now if my other DBA would just show up for work, I can ask him about
this. Sometimes being the early bird has disadvantages.
I do know that when the analyze is not done, we have performance
problems. Or at least the
Dear all,
I have set up MTS in my environment (Oracle 8.1.6.0.0), but I can't
connect from remote computers. The error is ORA-12545: target or
host does not exist.
My init.ora regarding to MTS goes like this.
mts_dispatchers = (PROTOCOL=TCP)(dispatchers=10)(sessions=20)
mts_max_dispatchers
I strongly suspect I'm missing something here, but I don't see a problem
with gathering stale stats many times a day, every hour say. If your tables
aren't subject to much DML activity then they won't be analysed anyway.
On Tue, 2003-12-30 at 12:59, Rachel Carmichael wrote:
I have you beat one
We are using Oracle on OS/390 and WLM.
If you are using AIX instead of MVS you will have a different flavour of WLM.
Basically each of our databases on the mainframe runs within a service (think of
services on Windows NT). Each service is associated with a WLM class. Originally, we
capped each
That works. I prefer thumbscrews, they worked
for the Inquisition and they lasted 500 years...
dr
Cheers
Nuno Souto
[EMAIL PROTECTED]
- Original Message -
I think I'll get a taser and fry the next person who does it. :)
Hehehe! You rat!
:D
Cheers
Nuno Souto
[EMAIL PROTECTED]
- Original Message -
Or update HIVAL and DISTCNT and ROWCNT statistics using dbms_stats
regularly...
Generally, 12545 means that something in the connection string for shared
environments is missing, the listener.ora, tnsnames.ora and sqlnet.ora
contain conflicting parameters, or the init.ora MTS parameters do not match
the listener.ora.
I would make sure that you can ping the server remotely
Hi Mauricio,
You cannot get the SQL statement for a select statement from the
normal audit trail. If you have 9i then you can use Fine Grained Audit
(FGA) to do this. There are some papers on normal audit on my site and
also some stuff on FGA if you are interested.
See my site at
I have
recently migrated our oracle 7.3.4 environment to oracle 9.2.0.4
I noticed
some batches eating up all my archive space. I have a 5 Gb filesystem solely
for archiving
available where I used to have 4Gb available for oracle 7 which was quite enough
for years.
A small test:
You forgot to squeeze the lemon juice on your monitor.
Henry
-Original Message-
Jared Still
Sent: Tuesday, December 30, 2003 1:35 AM
To: Multiple recipients of list ORACLE-L
Is it just me, or is the code missing?
On Mon, 2003-12-29 at 16:24, Pillai, Rajesh wrote:
Hi Jared,
Here
Makes sense, BUT...
"If the data changes A LOT you should of course re-analyze" is assuming you
know when that happens. You are assuming communication between users,
developers, and DBAs. Communication is my New Year's resolution.
I would at least suggest exporting stats before changing them.
I have a very strict SLA, and I posted a question on asktom about the best way to get
the 'estimate' of rows and return it to the user. I'm getting 'no data found'. Anyone
have ideas? I'm on 9.2, tables are analyzed, and I'm in a DBA account.
my question is at the bottom.
No, you inserted 10,000 rows into your table in 9i, but
only 6319 in 7.3.
Also, obj$ probably has more (filled) columns
in 9i compared to 7.3.
Redo structure has changed between these versions,
undo most likely as well. There are several other issues which might affect redo
size, such as
This is slightly OT ...
Talking about exporting stats ... I do that and about 30 seconds ago finished writing
a SQL that looks at a history of exported stats and displays a 7 day pattern of
1. rowcount changes
2. average row length change
3. allocated blocks changes
basic trend analysis
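The SQL itself isn't posted, but a query in that spirit might look like this, assuming a home-grown stats_history table (the name and columns are invented) and the analytic functions available in 9i:

```sql
SELECT snap_date, table_name, num_rows,
       num_rows - LAG(num_rows) OVER
         (PARTITION BY table_name ORDER BY snap_date) AS row_delta,
       avg_row_len, blocks
FROM   stats_history
WHERE  snap_date > SYSDATE - 7
ORDER  BY table_name, snap_date;
```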
Jeroen,
In your test, your obj$ table in 9i inserted 10,000 rows while your obj$
table in v7 only had 6319 rows. Test with another table or change your
rownum filter.
Mike
Jeroen van Sluisdam wrote:
I have recently migrated our oracle 7.3.4 environment to oracle 9.2.0.4
I noticed some
Subject: RE: Should we stop analyzing?
I'll see your 'analyzed every 4 hours' and raise you one. We have some tables that are analyzed every time they are used! They are 'work' tables that are sometimes empty, very full, or somewhere in between. Running something when the statistics say the
Subject: RE: SQL CASE Statement
I prefer the milk and candle method but that only works with flat panel displays. ;-)
Jerry Whittle
ASIFICS DBA
NCI Information Systems Inc.
[EMAIL PROTECTED]
618-622-4145
-Original Message-
From: Poras, Henry R. [SMTP:[EMAIL PROTECTED]
You
Subject: RE: Should we stop analyzing?
In 9i you could use optimizer_dynamic_sampling for
such "work" tables
Tanel.
- Original Message -
From:
Whittle Jerome Contr NCI
To: Multiple recipients of list ORACLE-L
Sent: Tuesday, December 30, 2003 6:09
PM
Subject:
I fold :)
--- Whittle Jerome Contr NCI [EMAIL PROTECTED] wrote:
I'll see your 'analyzed every 4 hours' and raise you one. We have
some tables that are analyzed every time they are used! They are
'work' tables that are sometimes empty, very full, or somewhere in
between. Running something when
That's (partly) what the 9i dynamic sampling
feature is for. And such tables are, of course,
going to be GTTs.
Regards
Jonathan Lewis
http://www.jlcomp.demon.co.uk
The educated person is not the person
who can answer the questions, but the
person who can question the answers -- T.
Hi All,
I have two databases of small size running on a Win2k Server. One is
production and the second one test. I would like to delete the test
database in an automated way (run a script say every weekend), then
recreate it so that I have a fresh database to work with or to import
prod data
Now there's a thread from my heart. I have been saying and practicing
(where I'm allowed to as an outside contractor) that for years. I am dead
against regularly scheduled analyze jobs - it must be Sunday because the
analyze is running - but it is sometimes hard to convince the resident
DBAs of
Sorry, I forgot to mention I am using Oracle 9.2.0.1.0
-Original Message-
Sent: Tuesday, December 30, 2003 11:47 AM
To: '[EMAIL PROTECTED]'
Hi All,
I have two databases of small size running on a Win2k Server. One is
production and the second one test. I would like to delete the test
Hi,
One way would be
1) From your test database get a list of all
datafiles,redologs,controlfile locations.
2) Spool and store it in a file.
3) Shutdown the database and listeners
4) Use the file to create a script to physically remove the database
related files using OS commands.
Hope this
v$sql_plan_statistics (and consequently v$sql_plan_statistics_all) only
have data to show if statistics_level is set to ALL. You can set that at
the session level.
Has anyone done measurements on a busy system to evaluate what the impact
is of setting that system-wide. The impression I have is
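A minimal sketch of the session-level route (the hash_value bind is a placeholder you would take from v$sql for the cursor of interest):

```sql
ALTER SESSION SET statistics_level = ALL;

-- run the statement of interest, then:
SELECT operation, options, object_name, last_output_rows
FROM   v$sql_plan_statistics_all
WHERE  hash_value = :hv;
```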
Julio,
If you need to keep the userid's and other environmentals, I would
create a script that truncates the tables and then import the data to
populate them with the new data. The export can be controlled with a par
file and the import can be part of the truncate script.
Ron
[EMAIL PROTECTED]
Hi list,
I am trying to export a database(8.1.5) on Win 2000 remotely, but the export
terminated with the following errors, I was searching on google but didn't
find any point.
If anybody has any idea, it will be appreciated.
Here is the error:
ORA-02248: invalid option for ALTER SESSION
To get dynamic sampling, one must specify that as a hint .. right? Can the CBO use
dynamic sampling automatically on GTTs?
(Hey, it's new year time and some wishful thinking is in order).
Happy New Year.
Raj
Hi,
Any help on this would be appreciated.
RDBMS Version: 8.1.7.3.2
Operating System and Version: Windows 2000
Cpu_wait_time shows 0 in v$rsrc_consumer_group
We have implemented Oracle Resource manager in one database environment.
The CPU resource allocation is done in 4 levels.
SQL
I am surprised no one raised the issue of invalidations in the shared pool
caused by stats gathering, and the parsing/reloading load that is incurred
_after_ the extra I/O and changed plans due to ANALYZEs.
I have this 250Gb Apps database that is analyzed once a month and we have
not suffered
there's also an optimizer_dynamic_sampling init parameter (in addition to
hint)
Tanel.
- Original Message -
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Sent: Tuesday, December 30, 2003 7:14 PM
to get dynamic sampling one must specify that as a hint .. right? can
There is a hint, and there is a parameter.
optimizer_dynamic_sampling = 2
is probably a good way of making sure
that all queries involving GTTs get a
dynamic sample of 32 blocks on the GTT
Regards
Jonathan Lewis
http://www.jlcomp.demon.co.uk
The educated person is not the person
who
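Putting Jonathan's and Tanel's points together, a sketch of both routes (gtt_work is a made-up global temporary table):

```sql
-- Parameter route, session level:
ALTER SESSION SET optimizer_dynamic_sampling = 2;

-- Hint route, per statement:
SELECT /*+ dynamic_sampling(t 2) */ COUNT(*)
FROM   gtt_work t
WHERE  status = 'NEW';
```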
Vaidya,
Wouldn't I have to worry about any registry info for the test instance
after physically deleting the OS db files? Would I be able to create
test using the same instance name?
Julio Cesar Quijada-Reina
Programmer Analyst
Computer Services at Alfred State College
-Original
Ron,
I like this idea, although I am not really concerned about userid's or
environmentals. Truncating test tables and importing the new data
looks like it would be faster than deleting the entire test database and
then recreating it. But how would I know what tables to truncate?
Thanks
Run oradim.exe from command line to see how to delete SID.
Deleting Oracle service via oradim _will_ remove corresponding
registry entries.
But why bother removing database, wouldn't dropping schema owner
with cascade option followed by full import do the trick?
Branimir
Wouldn't I have
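As a sketch of Branimir's suggestion (user name, password, and file names are placeholders):

```sql
-- In SQL*Plus:
DROP USER appowner CASCADE;
CREATE USER appowner IDENTIFIED BY secret
  DEFAULT TABLESPACE users TEMPORARY TABLESPACE temp;
GRANT connect, resource TO appowner;

-- Then from the OS prompt, reload from the production export:
-- imp system/manager file=prod_full.dmp full=y ignore=y log=imp.log
```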
Is there an easy way to track the rate of change in a particular table?
-Original Message-
Sent: Tuesday, December 30, 2003 7:24 AM
To: Multiple recipients of list ORACLE-L
Makes sense, BUT...
"If the data changes A LOT you should of course re-analyze" is assuming you
know when that
There are times when running a test harness
through a single pl/sql is going to give you
a spurious result, because the extra pinning
(of data blocks and library cache material)
may confuse the issue.
That isn't a factor, as I never use the results
from the first run for that very reason.
I found an error in yesterday's post:
Basically, in 9i there are four ways of finding out how many rows any
query will return:
1) select from the query and count
2) use v$sql_plan_statistics column output_rows for already executed
queries
output_rows shows cumulative output row statistics
I'm concerned about hitting the v$ views in production. We have 30,000 users. It's either
that or do counts. It's a requirement from the users.
Not sure what to do. Doesn't Tom Kyte do this on asktom?
From: Wolfgang Breitling [EMAIL PROTECTED]
Date: 2003/12/30 Tue PM 12:09:33 EST
To: Multiple
Don't be afraid to access v$ views, just beware of the bug that throws an ORA-600 when
selecting 'filter_predicates' and 'access_predicates' under 9.2.0.2. As a workaround,
don't select those two columns. If I were you, I'd make sure that users are *very*
clear that the number you are going to get
select count(*) on the PK each day and store the results for tracking.
monitor the extent usage for the table.
audit the table.
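Another low-cost option in 9i, tying back to the monitoring discussion earlier in the thread (owner and table names are placeholders):

```sql
ALTER TABLE emp MONITORING;

-- 9i batches the counters in memory; this call flushes them so the
-- DML deltas since the last stats gather become visible:
EXEC dbms_stats.flush_database_monitoring_info;

SELECT table_name, inserts, updates, deletes, timestamp
FROM   dba_tab_modifications
WHERE  table_owner = 'SCOTT';
```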
[EMAIL PROTECTED] 12/30/2003 12:49:33 PM
Is there an easy way to track the rate of change in a particular
table?
-Original Message-
Sent: Tuesday, December
Branimir,
Correct me if I am wrong, but with your approach of dropping the schema
owner, if I have 25 schemas on my test db, I would have to drop ALL
of them? I would think that dropping ALL schemas would equal removing the
entire database.
Julio Cesar Quijada-Reina
Programmer Analyst
Computer
Julio,
You can get a list of tables from the test database and compare it
with a list of tables from the production database. Just select
table_name from dba_tables where owner not in ('SYS', 'SYSTEM') and that will
give you all of the application tables in the database.
Ron
[EMAIL PROTECTED]
All,
We have an AIX 5.2 box serving a new 9.2 database. We also installed the OID
software on this box. OID requires its own database. So we have two
instances on this machine.
We received notice from Oracle that a security patch (#62) was required. We
were applying the patch and it failed
Ryan,
I asked Tom that very question a while ago, here:
http://asktom.oracle.com/pls/ask/f?p=4950:8:8900576360328284797::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:3489618933902,
The short answer is that he's using Intermedia for his searching, which has
the 'ctx_query.count_hits' functionality.
I wasn't thinking of the boundary conditions,
I was thinking of the totally different mechanisms
that appear because you are running pl/sql rather
than (say) a loop in Pro*C that sends a pure
SQL statement 1,000 times to the database.
Regards
Jonathan Lewis
http://www.jlcomp.demon.co.uk
The
Thanks Jonathan,Tanel
Some more clarification ... is dynamic sampling automatically used, or must one specify
the hint?
Raj
I would opt to separate them.
I ran into some problems with a Collaboration Suite install, where I wanted to use the
OID database to store my files data as well. It failed in spectacular fashion. An
Oracle support analyst said that was a bad idea. When I asked him if I could just
anyone have a better way to do this? im going to post what you said wolfgang on asktom
and see what he has to say.
From: Wolfgang Breitling [EMAIL PROTECTED]
Date: 2003/12/30 Tue PM 12:09:33 EST
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Subject: Re: help with estimate row
I could have sworn I read in multiple places that in a high-transaction system hitting
v$ views repeatedly kills performance? Causes excessive latching?
I'll have to test it to see if this is better than a count. Gonna be ugly either way.
From: Jamadagni, Rajendra [EMAIL PROTECTED]
Date:
Dear all, I have a script to generate constraints for a single table,
but I need a script to generate constraints for a schema owner. Can
anyone send me a copy?
Many thanks,
Interesting! Could this account for LOADS > 1 on pinned objects?
Damn. Almost got thru the rest of the year without learning anything new.
:)
Rich
Rich Jesse System/Database Administrator
[EMAIL PROTECTED] Quad/Tech Inc, Sussex, WI USA
-Original
run the same script for every table for the schema owner and spool everything to the
same file ... there you have it.
Raj
Go to tahiti.oracle.com and search for the optimizer_dynamic_sampling
parameter; you'll see descriptions of its different values there.
Tanel.
- Original Message -
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Sent: Tuesday, December 30, 2003 8:59 PM
Thanks
Tom,
I see no problem that would prevent you from having multiple instances
in the Oracle home. You would need multiple Oracle homes if you had
different versions of Oracle. With one Oracle home and multiple instances
you have to be sure to include all instances in the shutdown and start
up
Brian,
From reading into your message I get the impression that you wanted to
use the OID-supplied database for your other database and support
frowned upon it. Was there any problem with using the normal Oracle
database for the OID database?
Ron
[EMAIL PROTECTED] 12/30/2003 1:59:26 PM
I would
Hi!
Statistics level ALL means TYPICAL + row source execution stats +
timed_os_statistics.
If you want to switch to ALL for performance reasons, you can switch on only
row source stats by setting the parameter _rowsource_execution_statistics
to true (at session level).
But I doubt it'll help in
dba_constraints will inform you of the tables that have constraints and
what type of constraint they are.
Further digging into the dba_ tables will provide the information you
desire. Keep the scripts as part of the database documentation and
update when needed.
Third party software can provide
That's right - you would have to drop all schema owners. In my
opinion it is a simpler and easier task to automate dropping of
all owners followed by one full import, compared to the task of
automating database deletion followed by database creation,
then doing a full import as the very last step.
DOS
At 09:49 30-12-03 -0800, you wrote:
There are times when running a test harness
through a single pl/sql is going to give you
a spurious result because of extra pinning
(of data blocks and library cache material)
may confuse the issue.
That isn't a factor, as I never
use the results
from
As far as I can understand your question, you are copying your production
environment to test. So test should be a copy, and not an export/import
logical representation of prod. Otherwise your tables/indexes will be
reorganized every time you create the new test database. This means
Hi,
First of all, thank you to all who answered my last question.
Now I have another question related to my last one.
In my system, pga_aggregate_target is set to 3GB and I
think a session would have approximately 150MB work area
before temp space is needed (5% of 3GB).
But I did a test, it only used
Thanks Ron, I got this recreate-constraints script from our list but
lost it. It was a really good script, and it can re-generate all the
constraints under a schema owner.
--
Original Message
Date: Tue, 30 Dec 2003 12:14:26 -0800
dba_constraints will
Thanks Rajendra, good idea, but I have 1200 tables :( I got a good
script from our list a long time ago but lost it. That script can capture
constraints for the schema owner.
--
Original Message
Date: Tue, 30 Dec 2003 11:39:26 -0800
run the
I'll agree that you can run multiple instances out of 1 oracle home; however, I've
found it to be
simpler to use a dedicated oracle home per instance. Disk space is cheap (relatively
speaking).
Ron Thomas
Hypercom, Inc
[EMAIL PROTECTED]
Each new user of a new system uncovers a new class of
90 Meg was all it needed?
Roger Xu [EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
12/30/2003 12:59 PM
Please respond to ORACLE-L
To:Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
cc:
Subject:max 5% of pga_aggregate_target for a single serial session
Hi Listers,
I have a question about rman restore. Right now, I configured RETENTION
POLICY TO REDUNDANCY=1 and deleted the obsolete backupset on the disk
after a new rman full backup is done. The old backupset will be
backed up to tape by the system group. In case the newly created backupset on
disk is
If you have one to generate the constraints for a table, just modify
it slightly to include a whole schema.
Just checked, I don't have such a beastie.
Check orafaq.com, likely there is one there.
Jared
system manager [EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
12/30/2003 11:24 AM
Tom,
We have a small separate box housing OID(9204),
OEM,RMAN instances. Since we are using OID for service
Naming, I understand Oracle has no licence fee since
it is used for servicing other Oracle Databases across
the enterprise. We also use OID replication(another
node located at the DR site),
For 9i DBs, DBMS_METADATA will (re)create DDL for every (at least most)
object in the DB.
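For the constraints question, a sketch of how that looks in 9i (the schema name is a placeholder; note that foreign keys use the separate REF_CONSTRAINT object type):

```sql
SET LONG 100000

-- Primary key, unique and check constraints:
SELECT dbms_metadata.get_ddl('CONSTRAINT', constraint_name, owner)
FROM   dba_constraints
WHERE  owner = 'SCOTT'
AND    constraint_type IN ('P', 'U', 'C');

-- Foreign keys come via a different object type:
SELECT dbms_metadata.get_ddl('REF_CONSTRAINT', constraint_name, owner)
FROM   dba_constraints
WHERE  owner = 'SCOTT'
AND    constraint_type = 'R';
```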
No, it used 800 MB of tempspace in the end.
(Also see the tempsize column output from the query of the v$sql_workarea_active
view.)
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, December 30, 2003 3:35 PM
To: Multiple recipients of list
I think you can do CONFIGURE CONTROLFILE AUTOBACKUP ON,
which enables RMAN to automatically back up the controlfile to
a default location. Then you can restore the controlfile
before you restore other database files.
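A sketch of that sequence (the DBID is a placeholder you would record ahead of time, since a lost controlfile means RMAN cannot look it up for you):

```sql
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;

-- Later, restoring with the instance started NOMOUNT:
RMAN> SET DBID 1234567890;
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;
RMAN> ALTER DATABASE MOUNT;
```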
-Original Message-
Sent: Tuesday, December 30, 2003 3:34 PM
To: Multiple
Joan - Which Oracle version?
Dennis Williams
DBA
Lifetouch, Inc.
[EMAIL PROTECTED]
-Original Message-
Sent: Tuesday, December 30, 2003 3:34 PM
To: Multiple recipients of list ORACLE-L
Hi Listers,
I have a question about rman restore. Right now, I configured RETENTION
POLICY TO
Tanel,
I know the values, you are missing my question ... let me re-phrase it ...
1. To have CBO use dynamic sampling do you have to specify the hint?
or
2. CBO will do that automatically?
Just to let you know, the Oracle 9iR2 docs main page is my home page in the Mozilla Firebird
browser, and Metalink
Hi
The version is Oracle 8i, so
is there a way to retrieve the SQL statement?
Could the v$sql view or v$sqltext view be useful?
regards
Arup Nanda [EMAIL PROTECTED] wrote:
You haven't specified the Oracle version. If it's 9i, you could use Fine Grained Auditing (FGA) to get the exact SQLs.
Mogens,
Quite a controversy you started here.
As always. ;)
I must admit this is the first time I've heard this come up.
As Jonathan stated, it does seem somewhat like rebuilding indexes.
But then again, if re-collecting statistics causes your database performance
to suddenly become very
I had a similar situation once working in a data warehouse environment.
One example is a job that recreated a large dimension table each night:
The dimension table was truncated and reconstructed in phases - this was by
far the most efficient approach. It was necessary to analyze the table
Your global memory bound statistic from v$pgastat says that max work area
size is 100M. Maybe this 5% rule doesn't apply with large
pga_aggregate_targets. The documentation claims that this value can be
adjusted during db workload, so you might want to try to run your operation
several times in a
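To see the numbers being discussed, a quick look at v$pgastat (9.2; the name list is limited to the rows relevant here):

```sql
SELECT name, ROUND(value/1024/1024) AS mb
FROM   v$pgastat
WHERE  name IN ('aggregate PGA target parameter',
                'global memory bound',
                'total PGA allocated');
```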
Hi Ron and Brian:
We have been running the OID database in the same ORACLE_HOME as a
production database since Nov 2002. I am running both 8.1.7 on
AIX 4.3.3 and 9.2.0.4 on AIX 5.1. I feel using the same ORACLE_HOME or
separate ORACLE_HOMEs depends on how you plan to use OID. On the servers
Dennis,
9.2.0.4
Joan
Quoting DENNIS WILLIAMS [EMAIL PROTECTED]:
Joan - Which Oracle version?
Dennis Williams
DBA
Lifetouch, Inc.
[EMAIL PROTECTED]
-Original Message-
Sent: Tuesday, December 30, 2003 3:34 PM
To: Multiple recipients of list ORACLE-L
Hi Listers,
I
The CBO will do dynamic sampling automatically provided the conditions are
met. The conditions that need to be met depend on the dynamic_sampling
initialization parameter in effect for the session. The default is 1 which
practically disables dynamic sampling. 0 will totally disable it but IMHO