Larry
fb: indicates the flags. H means this row piece has a header; F means it is the first piece of the row; L means it is the last piece. Since both F and L are present here, there is no row chaining.
lb: the lock byte; it indicates the ITL array index of the transaction that is holding the lock on the row. It is
Jared
Say process A is interested in reading a block. It hashes the data block address of the block to find the hash bucket in the buffer cache. If that specific block is in the buffer cache, it must be attached to that hash bucket. Holding the hash bucket latch, process A
Hi
v$controlfile_record_section is based on the fixed table x$kccrs. It provides values for all the columns except MAXLOGMEMBERS. Here is a query to pull the information:
select decode(indx, 3,'MAXLOGFILES', 4,'MAXDATAFILES', 2,'MAXINSTANCES', 9,'MAXLOGHISTORY'), rsnum
from x$kccrs
All right, here is the complete query to get the information from the control file:
select decode(indx, 3,'MAXLOGFILES', 4,'MAXDATAFILES', 2,'MAXINSTANCES', 9,'MAXLOGHISTORY'), rsnum
from x$kccrs
where indx in (2,3,4,9)
union all
select 'MAXLOGMEMBERS', dimlm from x$kccdi
/
Thanks
Riyaj "Re-yas" Shamsudeen
Your error message indicates file #1, associated with the SYSTEM tablespace, but the trace file indicates file #6. I would do the following:
1. Check the OS error messages for any hardware errors.
2. Find which objects are corrupted. To get this information, use dba_extents and the given
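For reference, the dba_extents lookup usually looks like this (the file and block numbers below are placeholders; take the real ones from the ORA-01578 error message):

```sql
-- Placeholder file#/block#: substitute the values reported in the error
select owner, segment_name, segment_type
from   dba_extents
where  file_id = 6
and    12345 between block_id and block_id + blocks - 1;
```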
Tom
This is because the index was created using the nologging option after the hot backup was taken. If the table was involved in a nologging operation, it will have the same problem.
Thanks
Riyaj "Re-yas" Shamsudeen
Certified Oracle DBA
i2 technologies www.i2.com
Terrian, Tom
Bishwa
You may be running into the Pentium P4 bug. Refer to Doc IDs 115160.996 and 15730.996. If I remember correctly, you have to copy the whole CD to the hard drive, then download and modify a few files. You will then be able to do the installation from the hard drive.
Thanks
Steve
I thought the drop/truncate of objects would make a 'checkpoint object' call, and DBWR in turn would do extent-based checkpointing. I am not even going to try to argue with you :-) Could you please explain to us: what are the reasons for the 'checkpoint object' call and the 'reuse block range
Hi Jeremiah
When you truncate or drop an object, all the extents/buffers associated with that object need to be flushed to disk, meaning an object checkpoint has to take place. DBWR receives an object checkpoint call from the client process. The client process then waits for the IPC message
Cherie
You could turn it off by setting _fast_full_scan_enabled=false at the instance level. _fast_full_scan_enabled is simply an underscore parameter in 8.1.7. But I would turn on event 10053 and run the statement to find out why the CBO is not using it. Could it be possible that this behaviour is
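For reference, toggling the 10053 trace for a single session usually looks like this (the trace file lands in user_dump_dest):

```sql
alter session set events '10053 trace name context forever, level 1';
-- run the problem statement here, then switch the trace off
alter session set events '10053 trace name context off';
```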
--I am willing to donate up to $5 per month for up to 6 months paid either monthly or a lump sum depending on what he needs.
--Just like in Public Television or Radio, I challenge everyone on the list to join me with an equal pledge.
Count me in...
Jarad/Bruce
Please let us know how you
Not true, at least in 8i. Using 'alter table move' you can change the initial extent size:
SQL> alter table test move storage (initial 2M);
Table altered.
The only catch here is that the 'move' command doesn't apply to tables having LONG, LONG RAW, LOB etc. The initial extent can be changed for the
Ethan
You are committing 50 times per second, which is quite high. So LGWR has to write out the buffers at least 50 times a second (excluding group commits). I didn't quite follow the initial thread, but are you using file system based log files? For this type of high activity, I
v$resource_limit. Look at the max_utilization and initial_allocation columns.
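For example:

```sql
select resource_name, current_utilization, max_utilization,
       initial_allocation, limit_value
from   v$resource_limit
order  by resource_name;
```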
Thanks
Post, Ethan, 08/15/01 02:51 PM, to ORACLE-L:
Hermanto
Rollback segment datafiles are *needed* for recovery. Rollback segments are the key component for consistent media/instance recovery. If you lose any of the rollback segment files, and any extents from that datafile are needed, then the database cannot do the recovery consistently
Yes, it can. In fact, the AND_EQUAL hint can be specified to use multiple indexes and merge them for a single-table access path.
From tuning guide:
AND_EQUAL
This hint explicitly chooses an execution plan that uses an access path that merges the scans on several single column indexes.
Thanks
Hi
The recovery process will read the undo$ table to identify all the rollback segments that were online. Even if you comment out the rollback segments in the init file, Oracle still knows which rollback segments were online from the undo$ table. If I remember correctly, after reading the undo$
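A quick way to see the dictionary's view of this, run as SYS (a sketch from memory, so treat the column list as an assumption):

```sql
-- undo$ lists every rollback segment the dictionary knows about,
-- regardless of what the init file currently says
select us#, name, status$ from undo$;
```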
Hi all
I did some research on my test database and here is what I found
out: I had events 10500, 10221 and 10218 set in the database to trace the
instance recovery. 10218 was particularly useful since it was dumping the
uba of the applied undo.
a. Shutdown abort did not roll back any
Ian
These rows relate to library cache pins on the metadata about the objects. Metadata for the objects needs to be pinned during the execution of a statement to prevent invalidation of the dependent objects. Since each one of these sids has this object pinned, I think you have objects
Sujatha
Yes, it will work, assuming your production and test boxes are the same OS, version, etc. (technically, at least the same endianness). You should copy all the files, including control files, init files, data files and redo log files, to the test server. Then you could bring up
Hi Henry
You may be hitting bug 1399885, due to a timer overflow. Can you check the uptime? Also check the clock ticks per second. Due to the timer overflow problem, when the machine has been up and running for more than 248 days (ticks=100) or about 24 days (ticks=1000), the database will hang and
Hi
Your deadlock problem seems to be related to distributed transactions, since your processes are holding and waiting for DX type locks. DX enqueues are taken for distributed queries. Do you have queries accessing remote tables over database links?
As far as I know, even queries
Hi Anita
If it is an ITL problem, then the resource type would be TX instead of DX. If the ITL table is full, then the process requesting an ITL entry in that block will randomly select one of the transactions holding an ITL entry and wait for that transaction to complete or roll back. Since the
Brijesh
There are a few conditions where the optimizer will select the CBO even when optimizer_mode is set to RULE. Do you have any objects in the schema with parallelism > 1 (tables or indexes)? One of the common problems is that DBAs rebuild an index with a higher parallelism and forget to
Bala
I don't see any responses, so please ignore this if it has already been answered.
First of all, on Sun you have to use RAW devices for an Oracle Parallel Server database. (I am not sure whether you can use Veritas Quick I/O to configure OPS; my guess would be no.) Second, if it is a performance
Hi
I think you may be hitting another limit before you exceed distributed_transactions. If you look at your original trace, the enqueue is DX-003b. Here 3b is the hex representation of 59. Considering the initial allocation is 60, this may be the last element in the
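The hex arithmetic can be checked right in SQL*Plus:

```sql
-- 0x3b = 59; with an initial allocation of 60 (slots 0-59),
-- slot 3b is the very last one
select to_number('3b', 'xx') as dec_value from dual;
```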
Hi
One of our developers is planning to migrate from 32-bit OCI to 64-bit OCI. Have any of you encountered any issues with that? How much performance improvement can we expect?
Thanks
Hi Bruce
You need to find which blocks are hanging from that particular latch. You could get this information by joining the x$bh and v$session_wait fixed tables while the sessions are waiting for this latch.
select distinct ts#, file#, dbarfil, dbablk
from x$bh
where hladdr in
(select p1raw from v$session_wait
 where event = 'latch free');   -- assumption: completing the truncated subquery with the 'latch free' wait
Pankaj
Always try to keep the shmmax value around 80% to 90% of the available physical memory. So your setting for shmmax would be:
set shmsys:shminfo_shmmax=209715200
You said it failed? What failed: the software installation or the database creation? If your database creation failed, then look
Raymond
EBCDIC is a format a few mainframe computers use; Unix systems typically use ASCII. I am not sure about the type of the file, but you may have to convert it from EBCDIC to ASCII to make any meaningful use of it. You could use the dd command for this conversion (completing the truncated command here with dd's standard conv=ascii flag):
dd if=input of=output conv=ascii
One more con: if the surrogate keys are populated from a single sequence, then the primary key index of that table becomes a right-hand index, meaning all the new values go into the right-most leaf block of the index, due to the nature of the monotonically increasing (or decreasing) values. If
Hi Lisa
SMP - in SMP, many CPUs share the same memory. A typical example would be an E450 with 4 processors and 4GB of memory. Only the L2 caches in the CPUs need to be coordinated when a memory buffer is changed or about to be changed.
MPP - in MPP, each CPU has its own memory and its own disks, which they
Hi Elain
I had similar questions some time ago and went through a small experiment/analysis to understand this.
When you start a parallel query, the following happens:
1. The current SCN is captured as the consistent (snapshot) SCN.
2. Objects accessed by the query are flushed
That is interesting. It should logically start with sequence 66. Is it possible that some of the tablespaces were left in hot backup mode on Sunday/Monday morning? Please check the alert log. My guess is that one of the tablespaces was in backup mode, say from Sunday at least, and
Ian
Quick I/O does bypass the Unix buffer cache completely, thereby avoiding problems such as double buffering, double copying, and the vnode locks associated with ufs, xfs or Veritas non-Quick I/O files. Oracle can use the Quick I/O capability, and asynchronous I/O on these Quick I/O files is also
Nope, not at all. In fact, analyze_database analyzes the data dictionary. I don't see why it would hurt performance to analyze the SYS schema.
Here is the catch though: do not analyze the SYS tables using the 'analyze' command, since analyzing a few of the internal tables can cause deadlocks. But the
Look at v$session_wait; it should tell you what the users are waiting for. Usually SQL*Net waits are ignorable.
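For example, something along these lines (filtering out the usually-idle SQL*Net events):

```sql
select sid, event, p1, p2, p3, seconds_in_wait
from   v$session_wait
where  event not like 'SQL*Net%';
```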
Thanks
hamid alavi, 06/06/01 01:58 PM:
Yep, that's what I meant :-) In fact, if you select the parallel server option, the relink fails. There is a missing flag that is not documented anywhere other than in one Metalink post :-) It took two weeks just to install the software, and then they talk about the 'fire all your DBAs' crap.
Thanks
You can get more information about this subject on Steve Adams' website. Here are a few lines from his site:
Public synonyms should be entirely avoided in high concurrency environments. The memory wasted is proportional to the number of public synonyms in use, times the number of users. The
Dick
As far as I know, at least on Sun, Sequent, HP and Linux, you need to have the control files and redo log files on raw disk. Every instance needs access to the other threads' redo log files to do instance recovery for failed instances. Only if the data is on raw disk can multiple nodes
Ray
I think you are referring to RAC, Real Application Cluster, which is the same as OPS but with cache fusion for all scenarios. Please see my previous mail thread with the same subject.
Thanks
Ray Stell:
Analyze index validate structure takes a shared lock on the table, disallowing any changes to the table or its indexes. That's the only way to validate the structure.
In earlier versions (7.3?) 'analyze .. compute' used to take table-level locks. From 8 onwards, compute or estimate does not take any
Are you trying to find the maximum size of I/O possible on Sun/Solaris?
The kernel parameter maxphys determines the maximum size of a single I/O
call from the OS. By default it is around 128K, depending upon what
hardware you use (sun4d=124K, sun4u=128K, etc.). If you use Veritas Volume
Hi Linda
Welcome to Oracle! I am aware that DB2 has a wealth of documentation and manuals, but you wouldn't find this information anywhere in the Oracle documentation.
Anyway, here is the info you are looking for. Again, this is from memory, so use caution.
siz: indicates the size
You could also use the indexfile=filename option during import. This will not import any data, but will write the DDL to the index file. Create table statements will be REMmed out in the index file. Just remove those REMs from the index file, change the storage parameters for the tables and indexes, and run the
Hi Ed
I think for a full index scan, the Oracle server process reads the root block and branch blocks one block at a time, and reads the leaf blocks at db_file_multiblock_read_count blocks per read call. For a fast full scan, it reads all the blocks from the segment header to the high water mark at a
Maybe you could rewrite your query like this:
select /*+ ordered use_nl(b) index(a index_on_pk_category_id) index(b index_on_fk_category) */
       depth, title, description, url, editor_choice
from   category a, site b
where  b.fk_category = a.pk_category_id
and    a.status = 0
and    b.status = 0
dbms_lock.sleep will do this...
Thanks
[EMAIL PROTECTED], 05/11/01 10:20 AM, to ORACLE-L:
Hi
We had the ORA-00600 [729] problem on a regular basis too. I am not sure
about your database errors, but in our database these errors were due to
audit_trail being turned on. In the PGA, the 'Ptvt. Audit Info' memory chunks
are not being considered and are being falsely identified as space
The primary reason not to switch more often than every 30 minutes to 1
hour is to reduce the checkpointing frequency. If you switch every 3 to 4
minutes, then waits for checkpoint-related activity will be higher. Hence
you may want to increase the log file size and also set your
log_checkpoint_interval,
Hi
Unlimited tablespace privilege is granted along with the resource role.
This privilege is granted directly to the user rather than through the
resource role. Hence you won't see it under the RESOURCE role in
role_sys_privs; it appears in dba_sys_privs as a direct grant.
Thanks
You can also set _trace_files_public=true in the init.ora file. This will
create trace files with 644 file permissions.
Thanks
My opinions. Use at your risk
How big is your shared pool? Do you have many dynamic SQLs? You may
want to keep the SYS packages pinned in the shared pool, so that your
shared pool won't be fragmented by dynamic SQL.
Thanks
Hi Lisa
This is an underscore parameter, hence v$parameter will not show it.
You have to look at the x$ksppi and x$ksppcv fixed tables to see this
parameter. Here is the SQL used, if you are interested (the query below
the 'col' line is reconstructed, since the original was cut off):
--cut
set linesize 80 pagesize 50 newpage 0
col param_dflt form a5 head 'Dflt'
select a.ksppinm, b.ksppstvl, b.ksppstdf param_dflt
from x$ksppi a, x$ksppcv b
where a.indx = b.indx and a.ksppinm like '%fast_full%';
I agree that the log buffer is too big. But why do you want to decrease the
log file size from 32M to 5M? That will increase the checkpoint frequency
and create a performance hit due to the increased checkpoints. Also, I would
add log files to use all three disks rather than just two.
I would not decide whether a block is corrupted just by using the dbv
utility. dbv reports corruption even when analyze, exp and a full table scan
go through fine without any problem. dbv reported a data dictionary
corruption in our case. We ran analyze, exp and FTS with no problem, but still dbv was
Hi
For each physical device a BCV is associated. This is at the physical
level rather than the logical level. TimeFinder is the product which
synchronizes these devices and BCVs. Each of these devices and BCVs has a
bitmap track table to keep track of the disk tracks. This bitmap
indicates
I think your library path is not set correctly.
(ksh)
On Sun:
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
On HP-UX:
export SHLIB_PATH=$ORACLE_HOME/lib
On AIX:
export LIBPATH=$ORACLE_HOME/lib
and then try again.
Thanks
Hi
If you have constraints enabled (primary, unique and referential), then
the CTAS approach needs to re-enable them on the new table. This will take
longer. But alter table move does not need to disable or enable the
constraints, and hence would be preferable.
Thanks
Riyaj
Yes, this is harmless. The Lamport scheme is used for SCN generation.
It is particularly applicable to OPS databases, since the SCN has to be
generated uniquely across all the instances. Conceptually, it determines
how the most recent SCN is propagated between the instances.
Refer to the OPS concepts guide, if
Message for magesh:
SELECT TRANSLATE(')81$!.;%-', '1234567890!@#$%^*()-=_+;,.',
'ABCDEFGHIJKLMNOPQRSTUVWXYZ ' )
from dual;
Thanks
Hi
There is a 4GB virtual memory limit per process on 32-bit Unix systems.
Oracle attaches the shared memory segments at 0x80000000 by default and
grows upwards. Since the dlopen calls attach at 0xEF000000, the limit is
around 1.73 GB. But you could change the ksms.s module sgabeg variable
Hi Johan
Will this work?
Person A books the product he wants with 'select * from t1 for
update'. At this point, he has a lock on the row.
Person B comes in, looks at the product and does a 'select * from t1
for update nowait'.
If the row is locked for update, then person B will
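Sketched out, the two sessions would look roughly like this (t1 is the table from the example above; the key column and value are hypothetical):

```sql
-- Session A: books the product, holding the row lock until commit/rollback
select * from t1 where id = 100 for update;

-- Session B: fails immediately instead of queueing while A holds the lock
select * from t1 where id = 100 for update nowait;
-- ORA-00054: resource busy and acquire with NOWAIT specified
```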
Hi
How did you find that you have corrupt blocks? Did you get any ORA-01578
errors? Any alert log messages for the same?
I would first find out which objects exist in that particular corrupted
block, or alternatively you could analyze all the objects in the SYSTEM
tablespace like this.
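One way to generate those analyze commands, sketched against dba_segments (spool the output and run it):

```sql
-- Generates an 'analyze .. validate structure' per table in SYSTEM;
-- cascade also checks the table's indexes
select 'analyze table ' || owner || '.' || segment_name ||
       ' validate structure cascade;'
from   dba_segments
where  tablespace_name = 'SYSTEM'
and    segment_type = 'TABLE';
```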
Hi elain
I have not personally encountered performance degradation from setting this
to a high value, but I will attempt to help you.
The DBWR process waits for IPC messages from the background
processes or from the server processes, for requests to write the dirty
buffers. That's how
Hi
I think you removed the listener log file while the listener was up and
running. When you removed the listener log, the listener still had an open
file pointer on it. Since the file pointer is not closed, the listener is
still writing to the same inode.
If you stop and start the listener, you
Hi
You could get the following columns from the v$session view and then
use dbms_rowid.rowid_create to construct the rowid:
ROW_WAIT_OBJ#   NUMBER
ROW_WAIT_FILE#  NUMBER
ROW_WAIT_BLOCK# NUMBER
ROW_WAIT_ROW#   NUMBER
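Putting those columns together, a sketch (rowid_type 1 means an extended rowid; :sid is the session you are chasing):

```sql
select dbms_rowid.rowid_create(1, row_wait_obj#, row_wait_file#,
                               row_wait_block#, row_wait_row#) as waited_rowid
from   v$session
where  sid = :sid;
```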
Hi
I enabled priority paging on Solaris in my earlier job.
Priority paging is particularly effective for file-system-based
databases. Our system performance used to take a nosedive during hot
backups, and enabling it and playing around with the cachefree parameter
helped us resolve the