This event is making a comeback. Single-task linking was a way in V5 and V6
(and V7) to make applications run very fast. It is no longer supported by
Oracle. However, now with the Context option (or whatever it is called
today) it is back. single task message = SQL*Net message from client.
Your problem is probably the large number of parses that seem to be happening.
Also the stats have no meaning here if you don't tell us over what time period
they have been collected.
Anjo.
On Thursday 26 December 2002 12:39, Arun Chakrapanirao wrote:
Hi,
Has any enabled
Yes, row migration will
degrade performance.
- Original Message -
From: Larry Elkins
To: Multiple recipients of list ORACLE-L
Sent: Friday, December 27, 2002 5:38 AM
Subject: Row Migration
Listers,
8.1.7.4 64 Bit Solaris
Does row migration utilize DB
I'm sure many of you have scripts to recreate an Oracle schema including
objects (I am interested in tables, indexes, comments, views, sequences,
triggers, stored procs/functions, etc.)
exp userid=system/manager file=schema.dmp rows=n owner=scott
vi schema.dmp
Instead of vi schema.dmp
Shaleen
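Instead of opening the dump in vi, one hedged alternative is to filter out just the printable DDL from the (binary) export file. A minimal sketch, assuming the dump was produced with the exp command above; the statement keywords in the grep pattern are an assumption and can be extended:

```shell
# extract_ddl: pull printable DDL statements out of an export dump
# created with "exp ... rows=n". The keyword list is illustrative.
extract_ddl() {
  strings "$1" | grep -E '^(CREATE|ALTER|GRANT|COMMENT) '
}
# usage: extract_ddl schema.dmp > schema_ddl.sql
```

Note the extracted statements may still need light editing (exp wraps some DDL with storage clauses and quotes) before they can be replayed.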
Def. rights:
Roles can be enabled or disabled -- a unit must not be dependent
on the enabled/disabled roles. There is nothing wrong with such a
design. This design is well thought out, IMHO. At least it was
consistent at the moment of its invention.
Inv. rights:
Due to the context
It would appear we're looking into yet another hit ratio, namely the ASS
Hit Ratio. Used to be rather high in my younger days.
Mogens
Jonathan Lewis wrote:
Depending on your circumstances, ASS Management
can eliminate severe contention on the freelists / freelist
groups area. However,
Let us suppose there are two tables M and P.
Both contain the field emp_id. Other columns may be different.
All records of M also exist in P. Table M will have records in the
range 1-5 lakhs.
Table P will contain additional records such
that the total number of records in P
Not a good idea to store rowid in table M. If you ever move table P to a
different tablespace, or within the same tablespace, all its rowids
would change.
Richard Ji
-Original Message-
From: VIVEK_SHARMA [mailto:[EMAIL PROTECTED]]
Sent: Friday, December 27, 2002
Vivek,
Bad, bad, bad idea. You can play with rowids in your programs - as long as you
consider them to be transient values (get it/use it). Don't forget that they are
physical addresses (BTW, DBMS were invented in the first place to hide the physical
implementation from programs). Any
Someone asked in a back channel email if parallelism is used. The select
portion of the update statement uses parallelism (though the updates
themselves get serialized) through the use of an in-line join update (to
avoid the second sub-query commonly used to constrain the rows being
updated):
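The in-line join update described above can be sketched as follows. This is a hedged illustration, not Larry's actual statement: the table and column names (big_t, lookup_t, pk_id, val, new_val) are invented, and the in-line view must be key-preserved (a unique constraint or unique index on lookup_t.pk_id) for Oracle to accept the update:

```sql
-- Update through an in-line view: the join both restricts and supplies
-- the new values, so the usual second sub-query
-- ("SET col = (subquery) WHERE EXISTS (subquery)") is not needed.
UPDATE (SELECT /*+ PARALLEL(b) */
               b.val     AS old_val,
               l.new_val AS new_val
          FROM big_t b, lookup_t l
         WHERE b.pk_id = l.pk_id)  -- lookup_t.pk_id must be unique (key-preserved)
   SET old_val = new_val;
```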
Yes. And bvi for binary files.
-Original Message-
Sent: Thursday, December 26, 2002 2:04 AM
To: Multiple recipients of list ORACLE-L
exp userid=system/manager file=schema.dmp rows=n owner=scott
vi schema.dmp
really.
Jared
On Wednesday 25 December 2002 09:53, Andrey Bronfin wrote:
Well,
yes, I would agree with that ;-)
What
we are trying to determine here in this particular case is how much or what
percentage of the slowdown in the process is due to the migration of rows. We
aren't ready (until we do some testing) to make a blanket statement that
row migration
Row migration means extra IO's. If IO is taking up any significant part of
your response time, then you don't want extra IO, of course. And the IO will
be single-block IO (sequential reads) because a stub is left in the originating
block pointing to the new block where the row migrates to -
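One way to gauge how much of that extra single-block IO is happening is the 'table fetch continued row' statistic, which counts fetches that had to follow a migrated/chained row pointer. A sketch (system-wide figures; per-session numbers would come from v$sesstat joined to v$statname):

```sql
-- Compare ordinary rowid fetches against fetches that had to
-- chase a migrated/chained row into a second block.
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('table fetch by rowid',
                'table fetch continued row');
```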
Subject: RE: Cache on sysdate? -- From 9i performance planning manual
Thanks Raj. That's very cool. Now I can do:

SQL> delete from dual;

1 row deleted.

SQL> declare
  2    a date := sysdate;
  3  begin
  4    dbms_output.put_line(to_char(a,'YYYYMMDD HH24:MI:SS'));
  5  end;
  6  /
20021227 05:36:54

PL/SQL procedure successfully completed.
Here's a reason:
have you ever tried to find the three duplicate rows in a 12 million
row table without using the primary key constraint? I've had to disable
or drop the constraint in order to use the exceptions table. Once I do
that, even if I've built a separate index that enforces the primary
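For reference, the usual alternative to the exceptions-table dance is a GROUP BY ... HAVING on the key columns. A sketch (big_table and key_col are placeholder names; on a 12-million-row table this costs one full scan plus an aggregation, no constraint juggling required):

```sql
-- Find the duplicated key values and how often each occurs.
SELECT key_col, COUNT(*) AS dup_count
  FROM big_table
 GROUP BY key_col
HAVING COUNT(*) > 1;
```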
They don't do a great job of monitoring as all they record is the fact
that someone logged in. But then the other auditing Oracle does (or did
in earlier versions, I haven't investigated it in 9i) didn't capture
much information either.
Since we used to automate, via cron, some of the
O Oracle Guru's
Please tell us, why _trace_files_public is *STILL* an underscore
parameter??
Raj
__
Rajendra
Jamadagni
MIS, ESPN Inc.
Rajendra dot Jamadagni at ESPN dot
com
Any opinion expressed here is
personal and doesn't reflect
Subject: RE: Automatic backup on Oracle 9i
To me Automatic Backup means the backup jobs/scripts written by resident script kiddies (AKA Unix Admins).
g
Raj
If you know you have 3 duplicate records in the table then the PK must have
already been disabled, so you have to rebuild anyway. I do not see
where you had to disable it in order to use the exceptions table. It was
already disabled; therefore it is probably not an app problem but a disabled
constraint
What do you get when run this on the server hosting Oracle:
lsnrctl services
Waleed, thanks for your input. Here is what I have (below are my MTS
settings)
MYDB has 1 service handler(s)
DEDICATED SERVER established:259 refused:1
LOCAL SERVER
MYDB
Hey Rachel,
Consider using a non-unique index for your primary key constraint. If
you prebuild it and then add the constraint, Oracle will not drop the
index when you drop the PK constraint, and you can control the index
build that way (and build it in parallel to boot).
hth,
Jack
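Jack's suggestion, sketched out (table and index names are illustrative; note that "USING INDEX index_name" is 9i syntax -- on 8i you simply create the constraint and Oracle picks up an existing index on the key columns):

```sql
-- Prebuild the index non-unique, so it survives a later constraint drop;
-- parallel/nologging to speed up the build.
CREATE INDEX big_t_pk_i ON big_t (pk_id) PARALLEL 8 NOLOGGING;

-- Attach the PK constraint to the prebuilt index (9i syntax).
ALTER TABLE big_t ADD CONSTRAINT big_t_pk PRIMARY KEY (pk_id)
  USING INDEX big_t_pk_i;

-- Dropping the constraint leaves big_t_pk_i in place.
ALTER TABLE big_t DROP CONSTRAINT big_t_pk;
```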
Hi,
I want to version the database for development, IT, QA, and staging
environments.
Can someone suggest different methods and the best possible approach to
maintain the database?
Database is in the design stage; development has partially started. We
are using MKS for
If this is rman backup, perhaps try granting sysdba to sys, or connecting to
target as sysdba?
-Original Message-
From: Sony kristanto [mailto:[EMAIL PROTECTED]]
Sent: Thursday, December 26, 2002 7:24 PM
To: Multiple recipients of list ORACLE-L
Subject: Automatic backup on Oracle
You are right. I disabled the roles thru which the grants were made. But
my schema owns the objects. I have a userprivs script that shows my schema
owners privs. The schema owner does not have any privs on the objects it
owns! How can that be? And I tried granting to myself(the schema owner),
Subject: Rebuilding Indexes...
Thanks
for the responses from all the great minds on this list! :)
-Original Message-
From: Richard Huntley [mailto:[EMAIL PROTECTED]]
Sent: Thursday, December 26, 2002 3:24 PM
To: Multiple recipients of list ORACLE-L
Subject: Rebuilding Indexes...
Anyone
You can use the rowid but do not keep it.
As a dev DBA I would not allow storing the rowid in a
table because its value is meaningless once you
export/import, ...
--- VIVEK_SHARMA [EMAIL PROTECTED] wrote:
let us suppose there are two tables M and P.
Both contain the field emp_id. Other
David,
If the package is not too large, could you please show it (or the portions
of the package that are involved in the error) to us on the list so we can
see exactly what is going on?
We need to see where and how the object is being referenced.
thanks
Tom Mercadante
Oracle Certified
Try:
1) Force shared connections using (SRVR=SHARED) in the tnsnames.ora.
2) Change the service name for the MTS_service and restart the db and
listener. Make sure the service is registered with the listener. Add a new
entry pointing to the new service in tnsnames.ora and let your app use this
Versioning the database?
Take a backup of the database on a separate tape each day!
What components of the database do you want to version? Table definitions?
View definitions? Packages/Procedures/Triggers?
Code objects should be versioned, but data objects [Tables/Indexes/Sequences]
Thanks. Guess it's clean-up job time.
-Original Message-
Sent: Thursday, December 26, 2002 7:59 PM
To: Multiple recipients of list ORACLE-L
IIRC, these files are generated whenever someone logs in as sysdba or
internal. I don't know of any way to stop them.
--- Kevin Lange [EMAIL
Hi,
Oracle 8.1.6 on NT 4.0
Oracle.exe is running at about 85% CPU utilization. What can I check to
see why that is the case?
Thanks
Rick
--
Please see the official ORACLE-L FAQ: http://www.orafaq.net
--
Author:
INET: [EMAIL PROTECTED]
Fat City Network Services-- 858-538-5051
Developers can also use the approach that Oracle uses with
UROWID values, which are stored in secondary indexes on IOTs
(i.e. replacing ROWIDs used in normal indexes).
Store the ROWID as well as the PK/UK column values. Use the
following algorithm to retrieve in future:
1. Retrieve the PK/UK
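One plausible completion of that algorithm, as a hedged sketch (the table P and bind names are invented): keep both the ROWID and the key, try the fast physical fetch first, and fall back to the key lookup when the row has moved:

```sql
-- 1. Fast path: fetch by the remembered rowid, verifying the key,
--    since the old rowid may now address a different or reused slot.
SELECT * FROM p WHERE rowid = :saved_rowid AND emp_id = :saved_emp_id;

-- 2. Fallback: if step 1 returns no row, fetch by the key and
--    refresh the remembered rowid for next time.
SELECT rowid, p.* FROM p WHERE emp_id = :saved_emp_id;
```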
Tehe, don't worry, Bob, the
developers here work for me, so I can be as un-diplomatic as I wanna be.
I don't know how you would do it in
Micro$oft; perhaps some kind of component (.NET? DCOM?) could do this for them.
I can do it in Java and Perl. Can't
imagine that
That's what I do, Kevin. I have a cron job that cleans up all of the Oracle
log files: audit files, listener logs, alert logs, trace files, etc.
I run it twice a month, deleting anything that is 30 days or older. I rename
alert logs, listener logs, and rman's sbtio.log file so that they will
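A minimal sketch of such a cron cleanup, assuming GNU find; the directory path and the 30-day window are placeholders for whatever the site actually uses:

```shell
# Delete Oracle audit/trace debris older than 30 days.
# LOG_DIRS is an assumption; point it at your bdump/udump/audit dirs.
LOG_DIRS="${LOG_DIRS:-/u01/app/oracle/admin/MYDB/adump}"
for d in $LOG_DIRS; do
  if [ -d "$d" ]; then
    # -mtime +30 matches files last modified more than 30 days ago.
    find "$d" -type f \( -name '*.aud' -o -name '*.trc' \) -mtime +30 -print -delete
  fi
done
```

Run from cron twice a month, as above; add patterns for listener/alert logs once those are rotated rather than deleted.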
If you build a separate index to enforce the primary key, Oracle shouldn't
drop it when you disable or drop the primary key.
Regards,
Denny
Quoting Rachel Carmichael [EMAIL PROTECTED]:
Here's a reason:
have you ever tried to find the three duplicate rows in a 12 million
row table
And finally, although I hate asking the question: Why are you running MTS
in the first place? I'm not saying there aren't good reasons for it - I'm
just curious. Or to be "funny": I've solved many MTS-problems in my time
by turning it off. However, that might not be possible or sensible in all
9.2.0.1 Solaris, and yes, it does drop it
I created a unique index in the primary key columns
I created the primary key constraint without specifying an index
I checked that the index existed, it did
I dropped the primary key constraint
I checked that the index existed, it didn't
try it I
Yeah, it's a nuisance in most installations, but the idea is to be compliant
with some abbreviation_that_I'm_sure_Tim_can_remember security standard.
Give me a 7.1 doc site (if it exists) and I'll find the details. I failed
to find 7.1 doc on Google searches. Probably too much beer.
Mogens
I have 71620 for DG/UX ... tell me what to look for
...
Raj
QOTD: Any clod can
Yes, but that's a special case. You are not rebuilding
the index as part of some regular index maintenance.
Jared
On Friday 27 December 2002 04:43, Rachel Carmichael wrote:
Here's a reason:
have you ever tried to find the three duplicate rows in a 12 million
row table without using the
I don't have access to 9.2.0.1 right now. But can you try creating a non-
unique index instead of the unique index. If you create a unique index, it gets
dropped. That's the behavior on 8.1.x also. But if it's a non-unique index, it
shouldn't get dropped.
Regards,
Denny
Quoting Rachel
Rick,
You're not considering a PK built with 'enable novalidate'.
Jared
On Friday 27 December 2002 05:38, [EMAIL PROTECTED] wrote:
If you know you have 3 duplicate records in the table then the PK must have
already been disabled so you have to rebuild anyway. I do not see
where you had to
Yeah, it sure does, if the index is unique.
Try out this test:
drop table y;
create table y(y number);
create unique index ypkidx on y(y);
alter table y add constraint ypk primary key(y);
alter table y drop primary key;
select table_name, index_name
from user_indexes
where index_name =
Subject: RE: Those Pesky Little Audit Files (ora_9.aud)
Yupp, I do the same thing. I figure if there's a problem documented somewhere in those files and I haven't responded to it in 30-60 days, then it's too old to worry about anyway. Sometimes OWS wants an alert log which goes back to the
I generally follow the practice of keeping a variable v_version at package level or any script level (I have only packages), and this v_version is nothing but $Header$ in MKS. This way I can always run a query to find out object versions in the db. This is REALLY helpful, especially when code is
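That convention can be sketched as follows. A hedged example: the package name and the expanded $Header$ string are illustrative (MKS/RCS rewrites the text between the dollar signs at checkout), and exposing the constant through a function is one common way to make it queryable:

```sql
CREATE OR REPLACE PACKAGE emp_maint AS
  -- Version-control keyword: expanded to file, revision, and date at checkout.
  v_version CONSTANT VARCHAR2(200) :=
    '$Header: emp_maint.sql 1.7 2002/12/27 10:49:00 scott Exp $';
  FUNCTION version RETURN VARCHAR2;
END emp_maint;
/
CREATE OR REPLACE PACKAGE BODY emp_maint AS
  FUNCTION version RETURN VARCHAR2 IS
  BEGIN
    RETURN v_version;
  END;
END emp_maint;
/
-- Query the version deployed in this database:
SELECT emp_maint.version FROM dual;
```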
We've done a few tests here with chained vs. unchained rows, and the impact is
anywhere from 50-200% overhead. So if it took about 10 seconds to do
a query, it will now take 15 to 30 seconds. It seemed to depend most
on which rows we were returning... not hitting the chained rows as much
Rachel,
Try a pre-created non-unique index. This should remain after the
constraint is dropped, and can be used to enforce the primary key
constraint (not to mention be created in parallel nologging mode.)
hth,
Jack
9.2.0.1 Solaris, and yes, it does drop it
I created a unique index in
Larry,
Don't want to preach to the Guru, but have you checked the values for 'table
fetch continued row'?
Statistic                              Total   per Second   per Trans
------------------------------------- ------- ------------ ---------
table fetch by rowid
The other day you sent me the query to find the direct and indirect
relationships between tables. In the same way, is it possible to delete
all/few lower-level records based on the column value of the parent table?
If so, could you please send me the SQL queries for the same?
Please note
Probably because changing it from its default
value of FALSE introduces a potential security
hole - trace files may be dumped at any time,
and may contain information that is deemed to
be confidential.
Regards
Jonathan Lewis
http://www.jlcomp.demon.co.uk
Coming soon a new one-day tutorial:
My guess would be that since it is a security risk, it's probably
not a good a idea to make it a supported parameter.
Jared
On Friday 27 December 2002 05:18, Jamadagni, Rajendra wrote:
O Oracle Guru's
Please tell us, why _trace_files_public is *STILL* an underscore
parameter??
Raj
I have crossposted this question on the Oracle-Apps list; I would like
to get the opinion of this list, as it is more of a database issue as
opposed to apps.
The question is about LMT and extent management with regards to Oracle
11i.
When upgrading to 11i, it creates migrated LMTS as opposed to
I am in a peculiar situation where the development design is happening in
parallel.
It would include table definitions, table data (reference data), view
definitions, and the design itself (LDM).
It would be a situation where there are different schemas that need to be
maintained at different stages of the
Subject: RE: Those Pesky Little Audit Files (ora_9.aud)
Or you might have to do the cleanup sooner if you have 9202 on AIX 5.1 and you have external tables and you run into that (yet unknown) pmon memory leak (where it supposedly corrupts first 80 bytes of memory). When the instance finally
Used to use that method in a former company with our DB2 database. We had
one DB with schemas of DBPROD, DBTEST, DBSTST, and DBRTST. At various
testing stages we would move the objects to a different schema. The
application had a variable for who owned the structure. That way we could
be
I've got a client that needs distributed
option installed on several databases,
versions 7.3.4, 8.0.5, and 8.1.7...
Problem may be I'm not sure we'll have all
the CD's as vendors of applications did
most of the installs and we think we'll
find that they took the CD's with them.
After all, if
When you do your testing, don't forget to keep an
eye on the change in dependent logical I/O and latching.
Fetching a migrated row will require an extra buffer
visit to find the row data. This MAY turn into an
extra disk read but at the least it IS another
buffer visit, which means another hit
I think I'll resist the temptation to review
the entire trace file. However, since this
is a v9 deadlock dump, I think you should
find that you have a complete process state
dump after the initial deadlock graph.
Somewhere near the end of the dump you
should find the CURSOR section, which
should
Michael
By distributed, I assume you mean replication?
From what I can tell, basic replication is included with Standard Edition
and advanced replication is included with Enterprise Edition.
I think you run a script, something like catrep.sql in rdbms/admin, so
you should be able to get
Oracle 8.1.6 Win Nt
Has anyone experienced/heard of performance problems after migrating from
forms 5 to forms 6.0.8.15 and from reports 2.5 to 3.0?
Thanks
Rick
I've got hard-copy of the 7 docs if you can give me a clue where to
start searching for it...
--- Mogens_Nørgaard [EMAIL PROTECTED] wrote:
Yeah, it's a nuisance in most installations, but the idea is to be
compliant with some abbreviation_that_I'm_sure_Tim_can_remember
security standard.
Jared,
it was built with enable validate. Doesn't seem to matter if it's
validate or not.
Rachel
--- Jared Still [EMAIL PROTECTED] wrote:
Rick,
You're not considering a PK built with 'enable novalidate'.
Jared
On Friday 27 December 2002 05:38, [EMAIL PROTECTED] wrote:
If you know
I know I have 3 duplicates because that's how many I deleted when I got
rid of them.
If you use direct=true on sqlloader, the primary key constraint is NOT
disabled even if the index partition is made unusable.
and we know it's an app problem. That's a given. The app on occasion
re-runs part of
it'll have to wait until Monday, I'm not at work until then. I'll try
it with a non-unique then
Hey, if it works, it saves me tons of time, I learn something new and I
had fun developing the single SQL statement to rebuild the constraint
and index. Win-win
Rachel
--- Denny Koovakattu [EMAIL
Subject: RE: Row Migration
also, what version of Oracle and how many columns on the table?
-Original Message-
From: Nick Wagner [mailto:[EMAIL PROTECTED]]
Sent: Friday, December 27, 2002 9:39 AM
To: Multiple recipients of list ORACLE-L
Subject: RE: Row Migration
We've done a few
As Denny also suggested. I'm gonna try that on Monday, on my sandbox
database. If this does work in 9i as well (and it should, I hope), I
can just rebuild the unusable partition and not the entire index. The
index build will only have to happen once.
--- Jack Silvey [EMAIL PROTECTED] wrote:
fair enough. I retract the example :)
--- Jared Still [EMAIL PROTECTED] wrote:
Yes, but that's a special case. You are not rebuilding
the index as part of some regular index maintenance.
Jared
On Friday 27 December 2002 04:43, Rachel Carmichael wrote:
Here's a reason:
have you
Subject: RE: Those Pesky Little Audit Files (ora_9.aud)
that calls for a super-duper-pooper-scooper. :-)
-Original Message-
From: Jamadagni, Rajendra [mailto:[EMAIL PROTECTED]]
Sent: Friday, December 27, 2002 1:09 PM
To: Multiple recipients of list ORACLE-L
Subject: RE: Those
I believe this is different than replication, though many of
the ideas and transactions would be the same.
In this particular case, they are going to allow name and address
changes over the web. Those changes will cause updates to two
somewhat different customer files on two different
Thanks
for those comments, but that's a little down the road for what I'm looking at
right now -- trying to determine the overhead associated with updates and the
update causing a row to migrate. We don't intend to let the chaining actually
make it into the DM. But it's good to see someone
So I'm doomed? ;-)
Ok, so how am I going to know which block it went to, the first step towards
seeing if it was relatively nearby or maximum scatter? I'm guessing I would
have to dump a block and look at the placeholder or stub in the original
location and see where it points (I'm assuming it
John, the $10 is on the way ;-)
Right now I'm looking at the impact of rows migrating due to updates
expanding the rows. So I was considering 'table fetch continued row' as
opposed to analyze .. list chained rows (my first thought) before and after
the update.
To know how many rows migrated due to the
Yes, that's right Jared. By doing this we can make a schedule for when we
want to back up our data onto hard disk or tape periodically (weekly or
daily, even hourly). Thanks for your response, and I hope you can help me
solve it.
-Original Message-
From: Jared Still [SMTP:[EMAIL PROTECTED]]
we have 8i (8.1.7.1) running at our shop and one of our developers wants to
use WebDB (what I understand is now Portal). in checking OTN and other
places, I can't figure out what version of Portal (or WebDB) I should be
installing, nor where I can get it.
can anyone tell me what version I
Hmm.
A lot of folks on this list studiously avoid OEM. I know I do,
and I'm not going to be much help on this.
Have you tried MetaLink?
Jared
On Friday 27 December 2002 17:11, Sony kristanto wrote:
Yes, that's right Jared, by doing this we can make schedule when we want to
backup our
IIRC, a 160m table would be in an LMT with 4m extents.
The 3 extent sizes recommended in the paper are 128k,
4m and 128m.
1) Create LMT Tablespaces with an extent size of 160k ? ( This is
ignored by the import, tables will be one extent big)
Not so. If you create an LMT of the correct
Geez, I didn't know you could do that.
Sheepishly,
Jared
On Friday 27 December 2002 03:38, Larry Elkins wrote:
Someone asked in a back channel email if parallelism is used. The select
portion of the update statement uses parallelism (though the updates
themselves get serialized) through the
Is OAS already included in Oracle 9i?
-- Sony kristanto
Jared,
Thanks Jared for your opinion. Perhaps my explanation wasn't quite right, so
it looks complicated, but I will try to give a detailed explanation. By the
way, what is MetaLink?
Rgrds,
Sony
-Original Message-
From: Jared Still [SMTP:[EMAIL PROTECTED]]
Sent: Saturday, December
Michael - Okay, this is a form of replication, known as synchronous
replication. That means that the updates occur synchronously, or within a
2-phase commit. This is implemented through database links. The drawback is
that the transaction is as slow as the slowest database. If one database is
Title: RE: Row Migration
Gaaa!! Neither did I!!!
(I've been looking for a better way to do that query for years...)
-Original Message-
From: Jared Still [mailto:[EMAIL PROTECTED]]
Sent: Friday, December 27, 2002 6:49 PM
To: Multiple recipients of list ORACLE-L
Subject: Re:
don't feel too sheepish, I didn't know it either. Larry is the SQL guru
and I bow to his knowledge. and had already saved off this email as
this sort of update is something we do often and I ALWAYS have problems
figuring out the correct SQL :)
rachel
--- Jared Still [EMAIL PROTECTED] wrote:
Metalink Note #1022776.6 explains why.. :)
- Kirti
-Original Message-
From: Mogens Nørgaard [mailto:[EMAIL PROTECTED]]
Sent: Friday, December 27, 2002 10:49 AM
To: Multiple recipients of list ORACLE-L
Subject: Re: Those Pesky Little Audit Files (ora_9.aud)
Yeah, it's a
MetaLink is Oracle's support site.
metalink.oracle.com
No, I don't think your explanation is complicated, I just
don't use OEM.
I fired it up to take a look, but the backup portion requires
the OEM repository to be setup, so I didn't learn anything.
Yes, I *do* make backups, but use RMAN
Actually, I first learned that trick from a Connor posting on this list
(maybe around 2 or 3 years ago?) It has to conform to the same key preserved
rules that updateable views do since that's what it is, just an in-line view
as opposed to an actual physical view. So supposedly it's been