Wolfgang,
First off, sorry for mangling your name in the previous post.
I too will make notes inline.
On Tue, 2003-12-30 at 22:14, Wolfgang Breitling wrote:
Note inline
At 10:29 PM 12/30/2003, you wrote:
If my data changes, and I analyze it, the CBO should still find
a reasonable execution
Dear All,
I am faced with the following situation.
Oracle 8.1.7.4. 64 bit , Solaris 8
There is a loader java process that when is executed
against a test database(dwdsa)the response time is as expected to be. However
when it is executed against the production instance (dwods) it is
FYI, you will need to load several other modules.
Make sure you are using Perl 5.8.1+
Add the following line to Oracle/Trace.pm in the source dir:
use FileHandle;
This is as of Oracle::Trace 1.06. Haven't actually used it yet.
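If you'd rather script that edit than do it by hand, here is a small sketch; the tarball path is hypothetical, so point TRACE_PM at wherever you unpacked the source:

```shell
# Hypothetical path; adjust to your unpacked Oracle-Trace source tree.
TRACE_PM=${TRACE_PM:-Oracle-Trace-1.06/Oracle/Trace.pm}
# Insert "use FileHandle;" right after the package line, only if missing.
if [ -f "$TRACE_PM" ] && ! grep -q '^use FileHandle;' "$TRACE_PM"; then
  sed -i '/^package Oracle::Trace;/a use FileHandle;' "$TRACE_PM"
fi
```

(sed -i as used here is GNU sed; on other systems use sed -i '' or patch by hand.)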
Jared
On Tue, 2003-12-30 at 18:19, Michael Thomas wrote:
Hi,
I
I didn't even notice.
As for the rest of your rebuttal: I am not a religious fanatic. If it works
for you, great. Just be aware of the risk involved, and back up the
statistics before analyzing so that you can restore them in case
things go sour after the analyze.
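For anyone wondering how to take that backup, a minimal sketch using the DBMS_STATS package (available since 8i); the schema and holding-table names here are placeholders:

```sql
-- Placeholders: SCOTT is the schema, STATS_BACKUP the holding table.
-- 1. Create a table to hold the saved statistics.
EXEC DBMS_STATS.CREATE_STAT_TABLE('SCOTT', 'STATS_BACKUP');
-- 2. Export current statistics before re-analyzing.
EXEC DBMS_STATS.EXPORT_SCHEMA_STATS('SCOTT', 'STATS_BACKUP');
-- 3. If plans go sour after the new analyze, restore the old statistics.
EXEC DBMS_STATS.IMPORT_SCHEMA_STATS('SCOTT', 'STATS_BACKUP');
```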
I had one case for
Hi,
Thanks, very much. I found about the same things. I'm
curious if you got any farther than me?
I'm on Cygwin-W2K, and I've tried plenty of tricks.
e.g. ...
(1) an old version of Data-Dumper-2.101.tar.gz (not
even sure I needed it), and
(2) vi Oracle/Trace.pm
add line 12, use FileHandle;
John,
These are just a couple of ideas coming to me (I haven't checked the attachments,
answering to this through a web interface).
First of all, having a _whole_ process slowed down that much by parsing proves, if nothing else,
that you are doing too much of it. If it happened very few times you
Ouch,
stupid mistake (time to take a few days off).
Redoing
the exercise with the correct number of rows led me to an increase of about 9
percent,
which looks reasonable with your arguments taken into consideration as well.
Thanks,
Jeroen
-Original Message-
From: Tanel Poder
Thanks Jared
You should probably investigate why it continues to grow so large
What's the best way to go about identifying any large transactions?
John
-Original Message-
Sent: 31 December 2003 04:34
To: Multiple recipients of list ORACLE-L
The data file(s) for your undo tablespace
TOAD has a 'Generate Schema script' function and if I recall correctly
you can specify the objects that you want the script to include. You can
download an evaluation copy at
http://www.quest.com/solutions/download.asp
On Tue, 2003-12-30 at 19:24, system manager wrote:
Dear all, I have a
Sorry, the last email got truncated for some reason.
Here is the rest:
-
Now I'm stuck at 'make test', my best results are:
$ make test
/usr/bin/perl.exe -MExtUtils::Command::MM -e "test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/Trace....Name "main::a_ftr" used only once: possible typo at
Thanks, I found it from the materials.
Tanel.
- Original Message -
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Sent: Wednesday, December 31, 2003 8:19 AM
Chapter 9, page 33 - Cat-Hash-strophes
in the seminar notes. (The page number
may have changed a little).
If
--
Please see the official ORACLE-L FAQ: http://www.orafaq.net
--
Author: Tony Miller
INET: [EMAIL PROTECTED]
Fat City Network Services-- 858-538-5051 http://www.fatcity.com
San Diego, California-- Mailing list and web hosting services
I was actually talking from a database recovery point of view (you can do
point-in-time recovery prior to the current controlfile time if you use the
'using backup controlfile' option when recovering).
For restoring a de-registered backupset, I see two options (there might be
more, more convenient ones):
Thanks for the answer Stephen,
I can ping and tnsping the server; in fact, my connection using dedicated
server is working. But when I add the initialization parameters for MTS to
init.ora as I mentioned before, the connections always fail;
it looks like the listener can't redirect the connection
Paul, Ron, Ravi, Brian,
Thanks for the replies. I should have
known that keeping them separate was the smart thing to do. We've tried
twice now to apply patch 62 (a security notice) to the 9.2.0.4 software.
It's failed both times. The first with Java errors, and the second with
something
Hello,
I have been asked to set up a test environment for RAC. However, I don't know
much about hardware.
My questions may appear dumb; please take no offence. I'm a beginning DBA and I
really want to know.
1. Can I use 2 nodes of different makes (one IBM and one
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, December 30, 2003 5:29 PM
To: Multiple recipients of list ORACLE-L
Subject: Re: Should we stop analyzing?
If just 2, then from a user's
perspective, it would seem most appropriate to have
Wendry,
My experience with MTS is that, by default, the listener uses an MTS
connection. I wonder why you are not getting that.
My init.ora file looks like this:
mts_dispatchers =
(ADDRESS=(PROTOCOL=TCP)(HOST=ip_address))(DISPATCHERS=5)
local_listener =
Carel-Jan,
Thanks for your insight into the difference between export/import and
copying databases. Two factors had me initially thinking of doing
export/import: 1) The tables in production are not big and 2) tables are
not subject to heavy changes. As it was pointed out before and although
I am
Thanks Tanel. I will test it out.
Happy new year!
Joan
Tanel Poder wrote:
I was actually talking from database recovery point of view (that you can do
point in time recovery prior to current controlfile time if you use using
backup controlfile option when recovering.
For restoring a
I read the little blurb in the 9i new features on it. The example there doesn't seem
very useful. What have people used it for?
Any good articles with good examples on this?
Wolfgang,
I don't have 9i available at the moment so I can't test this. Just wondering if
a 10053 trace shows you if the statistics it is using are gathered from dynamic
sampling.
Henry
-Original Message-
Wolfgang Breitling
Sent: Tuesday, December 30, 2003 6:24 PM
To: Multiple
Ryan,
I use it extensively ... for some of the utilities I wrote for application support ...
here is one sample ...
This utility shows errors in the PL/SQL code; what's different is that it not only
shows the errors, but also the source lines and the exact location of each error.
To test, install
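The poster's actual utility isn't shown here, but the core idea can be sketched from the data dictionary (this is a guess at the approach, not the original code): pair each row in USER_ERRORS with the source line it points at in USER_SOURCE.

```sql
-- Sketch only: show each compilation error next to the offending source line.
SELECT e.name, e.type, e.line, e.position,
       e.text AS error_text,
       s.text AS source_line
FROM   user_errors e, user_source s
WHERE  s.name = e.name
AND    s.type = e.type
AND    s.line = e.line
ORDER  BY e.name, e.sequence;
```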
Title: Message
Yes, you can use nodes from different OEM vendors for RAC.
You will for sure need a private network or interconnect between
the nodes for maintaining the heart-beat.
HTH
Chandra
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL
Jared,
One problem is that the CBO sometimes CAN'T come up with the optimal
execution plan. This could happen because it doesn't have all of the
necessary data (i.e. histograms). There are also some types of data
distribution that it ignores (see Wolfgang's paper at
http://www.centrexcc.com/
Yes, it does say when dynamic sampling is evaluated or executed.
1st example is where dyn sampling is used:
-
SINGLE TABLE ACCESS PATH
*** 2003-12-31 17:25:07.521
** Performing dynamic sampling initial checks. **
** Dynamic sampling initial checks returning TRUE (level = 4).
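For reference, dynamic sampling can also be requested per query with a hint; a minimal sketch, where the table name and sampling level are purely illustrative:

```sql
-- Level 4 sampling for this query only; t is a placeholder alias.
SELECT /*+ dynamic_sampling(t 4) */ *
FROM   some_unanalyzed_table t
WHERE  some_col = :b1;
```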
I have a TAR open on this and I'm arguing with the Oracle tech support guy.
Here is what happened. We upgraded an instance to 9i, switched to automatic undo
management, and set our undo parameters to point to a newly created undo tablespace.
1. Took our old rollback tablespace (with rollback
The author says to use v1.07. CPAN, however,
only has v1.06.
I've asked the author how to obtain 1.07; haven't
heard back yet.
Jared
On Wed, 2003-12-31 at 02:14, Michael Thomas wrote:
Sorry, the last email got truncated for some reason.
Here is the rest:
-
Now I'm stuck at 'make
Sorry, you can't send binary attachments to this list.
Jared
On Tue, 2003-12-30 at 23:34, Hatzistavrou John wrote:
Dear All,
I am faced with the following situation.
Oracle 8.1.7.4, 64-bit, Solaris 8.
There is a Java loader process that, when executed against a test
I am reviewing segment statistics taken in a 15 min
snapshot (from stats$seg_stat) for a specific table,
which resides in 2 datafiles. The stats for datafiles
(stats$filestatxs) are as follows:
file#   blocks read   physical_reads
-----   -----------   --------------
11      8,171         960
12
What's the best way to go about identifying any large transactions?
Ask the developers and users.
As for the size of the UNDO TBS, check and modify your
retention times as suggested by Anjo, and control the
autoextending of the datafiles.
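Besides asking the developers, the undo usage of currently active transactions can be inspected directly; a sketch against the standard v$ views:

```sql
-- used_ublk = undo blocks currently held by each active transaction.
SELECT s.sid, s.username, t.used_ublk, t.used_urec
FROM   v$transaction t, v$session s
WHERE  s.taddr = t.addr
ORDER  BY t.used_ublk DESC;
```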
Jared
On Wed, 2003-12-31 at 01:39, John Dunn wrote:
Yes, it does.
extract from 10053 trace:
** Executed dynamic sampling query:
level : 2
sample pct. : 11.151079
actual sample size : 2601
filtered sample card. : 2601
orig. card. : 11321
block cnt. : 278
max. sample block cnt. : 32
sample block cnt. : 31
ndv C3
select BELNR,count(*)
from sapr3.bsis
group by BELNR
order by BELNR
This was the SQL running at that time.
-Original Message-
Sent: Tuesday, December 30, 2003 5:44 PM
To: Multiple recipients of list ORACLE-L
It is possible for a single session to require more
than one sort or hash
Thanks a lot, Chandra. Have a lovely New Year!
-Original Message-
From: Chandra Pabba [mailto:[EMAIL PROTECTED]]
Sent: 31 December 2003 15:04
To: Multiple recipients of list ORACLE-L
Subject: RE: Hardware for RAC?
Yes, you can use nodes from
different
I recently rewrote a poor-performing data load procedure (with single row
inserts, commit batches of 2000) to a pipelined table function, which
enabled insert /*+ append */ into the target table, which greatly enhanced
performance. The original routine contained an embedded select, a second
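This is not Adam's actual code, but the general shape of the rewrite he describes might look like the following; the type, function, and table names are all illustrative:

```sql
CREATE TYPE num_row AS OBJECT (id NUMBER, val NUMBER);
/
CREATE TYPE num_tab AS TABLE OF num_row;
/
CREATE OR REPLACE FUNCTION transform_rows(c SYS_REFCURSOR)
  RETURN num_tab PIPELINED
IS
  v_id  NUMBER;
  v_val NUMBER;
BEGIN
  LOOP
    FETCH c INTO v_id, v_val;
    EXIT WHEN c%NOTFOUND;
    -- arbitrary row-by-row transformation, streamed to the consumer
    PIPE ROW (num_row(v_id, v_val * 2));
  END LOOP;
  CLOSE c;
  RETURN;
END;
/
-- Direct-path load replaces the old single-row inserts:
INSERT /*+ append */ INTO target_table
SELECT * FROM TABLE(transform_rows(CURSOR(SELECT id, val FROM source_table)));
```

Because the function is pipelined, rows stream straight from the source cursor into the direct-path insert without being materialized in a collection first.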
Hi Tanel-
I've been doing 11.5 cloning for several years and think I've found most (but I'm sure
not all) of
the database and filesystem changes required.
adpatch is what concerns me the most since it requires a database connection to
function. It can't
connect to the DR database since it
Thanks. It's amazing how you did that. ;-)
The last two updates on CPAN were both on Wednesday,
e.g. 17DEC03, 24DEC03. And, today is Wednesday. So
anyone that depends on BCHR over 99% might expect that
the next release would be ... today! :-O
I stumbled into the Oracle-Trace package while
Chinedu/Chandra,
Although I agree that you can use different OEM vendors as long as the OS
is the same, be aware of the increased chance of cross-vendor problems.
You definitely don't want finger-pointing between vendors when problems occur
(they will!) in a complex
Hi!
I've been doing 11.5 cloning for several years and think I've found most
(but I'm sure not all) of
the database and filesystem changes required.
Yep, in older 11.5's you had to manually modify conf files and profile
options, but starting from 11.5.7 you have autoconfig with installation,
Hi again-
Why do you want to patch anything at the disaster recovery site at all?
When you apply any d-patches in primary, their changes will be transferred
to secondary via redologs.
When you apply any cg-patches in primary, you just use rsync to synchronize
all changes to secondary site file
WAIT #31: nam='SQL*Net message to client' ela= 1 p1=1413697536 p2=1 p3=0
WAIT #31: nam='SQL*Net message from client' ela= 692 p1=1413697536 p2=1 p3=0
WAIT #31: nam='SQL*Net message to client' ela= 1 p1=1413697536 p2=1 p3=0
FETCH
Fantastic results Adam.
You didn't perhaps do interim testing did you, so that you
know how much of the benefit was due to the pipelined functions?
You made quite a few changes, and a breakdown of
the benefits of each would be interesting to see.
Jared
[EMAIL PROTECTED]
Sent by:
Google'izing, you can find 1.07 here:
http://cpan.oss.eznetsols.org/authors/id/R/RF/RFOLEY/Oracle-Trace-1.07.tar.gz
My best results with 'make test' are now:
$ make test
/usr/bin/perl.exe -MExtUtils::Command::MM -e "test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/Trace....ok 1/8
Name "main::a_ftr"
At the time, I did: I used simple sql_tracing for much of the analysis,
and definitely analyzed in stages. Unfortunately, most of the trace data
was lost. I have a couple of the files, from which I started with 10,000
row inserts (with commit batches of 2000) vs. 10,000 directly appended
In the interests of documentation, and if I have time, I could engineer a
similar 'dumb' procedure, perform trace as each modification is made, and
post the results here. It's pretty easy to come up with an artificial
routine, though, to do this kind of analysis oneself. Use Tom Kyte's
That would be cool if you have time for it.
Re the sequence: is it assigned in a trigger, or directly in the SQL?
[EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
12/31/2003 11:19 AM
Please respond to ORACLE-L
To:Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
cc:
Cool, thanks. Hadn't tried googling for it yet.
Michael Thomas [EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
12/31/2003 10:49 AM
Please respond to ORACLE-L
To:Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
cc:
Subject:Re: Has anyone used this Perl
Great response. Questions inline.
- Original Message -
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Sent: Wednesday, December 31, 2003 2:14 PM
At the time, I did: I used simple sql_tracing for much of the analysis,
and definitely analyzed in stages. Unfortunately, most
Mogens
Friends,
I'd like to start a debate, which perhaps has already taken
place, but
if so I don't recall it: Should we stop analyzing tables and indexes?
Let me clarify:
I've always told people that using the 'monitoring' option
(alter table
X monitoring in 8i, plus alter
This is most likely a sequence that is incremented from multiple nodes. When a range
runs out, a node has to allocate a new range; the other nodes then have to
flush/invalidate their sequence number cache entry.
Anjo.
-Original Message-
From: [EMAIL PROTECTED]
After I did some testing, it is impossible to restore and recover a
deleted obsolete backupset, so I took off the delete obsolete command.
The retention policy is still set to redundancy 1. I did a couple of backups
and ran 'list backup of database' and 'report obsolete'.
Although report obsolete
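For context, the RMAN commands involved in that test look roughly like this (9i syntax; run from the RMAN prompt, not SQL*Plus):

```sql
CONFIGURE RETENTION POLICY TO REDUNDANCY 1;
LIST BACKUP OF DATABASE;
REPORT OBSOLETE;       -- only lists what the policy considers obsolete
-- DELETE OBSOLETE;    -- left out here, per the poster's conclusion above
```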
Directly in the SQL. We use Designer TAPI autosequence generation for
day-to-day operations, but triggers slow down inserts and of course can't
be enabled for direct path inserts.
Adam
[EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
12/31/2003 11:29 AM
Please respond to
[EMAIL PROTECTED]
My responses are below
Ryan [EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
12/31/2003 11:54 AM
Please respond to
[EMAIL PROTECTED]
To
Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
cc
Subject
Re: anyone use pipelined functions?
Great response. Questions inline.
-
My responses are below -- sigh, it's been a long day. lol
[EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
12/31/2003 01:59 PM
Please respond to
[EMAIL PROTECTED]
To
Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
cc
Subject
Re: anyone use pipelined functions?
My responses
It is now officially 2004 here so:
HAPPY NEW YEAR all.
Yechiel Adar
Mehish
Ashish,
Are you using NOCACHE/ORDER for your sequences? If
so, is the cache size for your sequences the default (20)?
Try increasing the cache size to see if the misses
reduce.
If these sequences are columns of an index, it would
affect the cache fusion latencies (and hence performance),
since the column
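If the misses do turn out to be the problem, the change is a one-liner; the sequence name and cache size below are illustrative:

```sql
-- A larger cache plus NOORDER reduces cross-instance sequence contention.
ALTER SEQUENCE my_seq CACHE 1000 NOORDER;
```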
One minor caveat about setting timed_os_statistics: on Solaris, if you set
timed_os_statistics to non-zero, microstate accounting at the OS level is
enabled for the server process. Common practice is to leave it off for
performance reasons, but I've never seen experimental data proving the negative
Hi Richard,
Did you test the effect of NOCACHE after caching?
What we noticed is that CACHE followed by NOCACHE does not
cause the blocks to be flushed out. It has been
that way for months now in a production database of
ours.
Thx,
Ravi.
--- Richard Foote [EMAIL PROTECTED] wrote:
Hi
It
Well, that's not really a surprise, is it? If you do CACHE first, and
cache all the table's blocks, then do NOCACHE, Oracle isn't going to
immediately and explicitly flush those blocks. I'd expect that as demand
on the buffer cache increased, the blocks would age out. Oracle almost always
follows