-Original Message-
Of course, we took the EXACT SAME BACKUP, restored it to another
filesystem and have NO corruption in the database. But it can't
possibly be hardware problems. It's just Oracle playing games with my
mind.
--
Sunspots.
--
Rachel,
Were you running the validate command on your backups?
It would be interesting to see if that wasn't cutting
the mustard either.
Brian Spears
-Original Message-
Sent: Wednesday, February 26, 2003 9:59 PM
To: Multiple recipients of list ORACLE-L
I wish it was.
It's not a hardware problem; the fact that the filesystem failed at 6 AM
this morning is merely a collective hallucination.
Yes, it went down hard. My database was not on it; I had insisted they
move all the files. They didn't move the Oracle binaries, though (there
is no hardware problem), so we are
Subject: RE: corrupted block
Sent by: root
Brian,
I can ask. When I try to do anything Oracle-related on the production boxes I
get my hand slapped and am told we pay the hosting company to do
that. When I ask, I sometimes get the info I need.
Rachel
--- Spears, Brian [EMAIL PROTECTED] wrote:
Rachel,
Were you running the validate
One of our sys admins once assigned two file systems to the same area of disk, which, as
you might expect, caused a multitude of problems. I don't believe the I/O system
complained at all when one file system would overwrite blocks written by another.
Ian MacGregor
Stanford Linear Accelerator
We had a session with an expert on Monday and he recommended export to
/dev/null to detect errors in the database.
Yechiel Adar
Mehish
- Original Message -
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Sent: Monday, February 24, 2003 10:41 PM
I had the same belief that RMAN
On Wed, 26 Feb 2003, Yechiel Adar wrote:
We had a session with an expert on Monday and he recommended export
to /dev/null to detect errors in the database.
Well the expert isn't going to find any corruptions in indexes that
way.
--
Jeremiah Wilton
http://www.speakeasy.net/~jwilton
I'm dealing with the same RMAN-not-checking-corruption problem -- on 9.2.0.1
and Solaris, and it's a data warehouse.
So far I've got 9 corrupted datafiles and over 40 corrupted objects;
fortunately most are indexes.
it's going to be a good day. NOT
--- Yechiel Adar [EMAIL PROTECTED] wrote:
We had
An export is a great method for catching corruptions in tables. However,
it does not read indexes, so it misses those corruptions. Analyze and dbv
will.
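As a rough sketch of both checks (the SID, password, and datafile path below are placeholders, not details from the thread, and the commands need an Oracle client installed):

```shell
# Hypothetical corruption sweep; all names and paths are made up.
# 1. Full export to /dev/null forces a read of every table block
#    (index blocks are never touched, so index corruption slips past):
exp system/manager full=y file=/dev/null log=/tmp/exp_check.log
# 2. dbverify reads the datafile directly, index blocks included:
dbv file=/u01/oradata/PROD/users01.dbf blocksize=8192
```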
Yechiel Adar wrote:
We had a session with an expert on Monday and he recommended export to
/dev/null to detect errors in the database.
-Original Message-
I'm dealing with the same RMAN-not-checking-corruption problem -- on 9.2.0.1
and Solaris, and it's a data warehouse.
I've seen it detect corruption, and not detect it. I think it detects some
kinds, but not all kinds. It seems to do better with finding it in
Welcome to my week :)
Rachel Carmichael wrote:
I'm dealing with the same RMAN-not-checking-corruption problem -- on 9.2.0.1
and Solaris, and it's a data warehouse.
So far I've got 9 corrupted datafiles and over 40 corrupted objects;
fortunately most are indexes.
it's going to be a good
Here's the fun part in this:
this is being handled by the hosting company who manages our production
data center.
Apparently RMAN detects corruption on the restore and writes error
messages to the alert log, not the RMAN log. Except the monitoring
software didn't look for the word "corrupt"
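A minimal sketch of the check the monitoring software was missing: scan the alert log for corruption messages. The path and sample log contents here are invented so the example is self-contained; point it at your real background_dump_dest/alert_<SID>.log.

```shell
# Stand-in alert log with a fabricated sample entry, so the sketch runs anywhere.
ALERT_LOG=/tmp/alert_PROD.log
cat > "$ALERT_LOG" <<'EOF'
Completed: ALTER DATABASE OPEN
Corrupt block relative dba: 0x024a (file 9, block 10)
Bad header found during buffer read
EOF
# Case-insensitive count, so it catches "Corrupt block" as well as "corrupted".
grep -ci 'corrupt' "$ALERT_LOG"
```

In a real monitoring script you would alert whenever the count is nonzero.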
---
Rachel -
Do you actually have the error text from the alert log?
Looks like I have something to add to my Perl script... :)
Brian
--
| Brian McGraw /* DBA */ Infinity Insurance |
| mailto:[EMAIL PROTECTED] |
Brian,
Can I get a copy of that Perl script once you've added that check?
Rachel
--- Brian McGraw [EMAIL PROTECTED] wrote:
Rachel -
Do you actually have the error text from the alert log?
Looks like I have something to add to my Perl script... :)
Brian
Here you go:
***
Corrupt block relative dba: 0x024a (file 9, block 10)
Bad header found during buffer read
Data in bad block -
type: 32 format: 0 rdba: 0x20202020
last change scn: 0x2020.20202020 seq: 0x20 flg: 0x20
consistency value in tail: 0x20202020
check value in block header:
-
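A side note on that dump: every value in it is built from the byte 0x20, which is the ASCII code for a space. In other words the block is full of spaces, not an Oracle block header. Easy to confirm:

```shell
# Four spaces dumped as hex: od shows "20 20 20 20", exactly the 0x20202020
# pattern filling the rdba, scn, and tail fields in the dump above.
printf '    ' | od -An -tx1
```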
From: Rachel Carmichael
To: Multiple recipients of list ORACLE-L
Sent: 2/26/2003 12:21 PM
Subject: RE: corrupted block
Here's the fun part in this:
this is being handled by the hosting company who manages our production
data center.
Apparently RMAN detects corruption on the restore
What do you mean by the RMAN log, Rachel? Are you talking about the
v$backup_corruption view?
From the Oracle RMAN Reference:
If the server session encounters a datafile block during a backup that has
already been identified as corrupt by the database, then the server session
copies the corrupt
That's the cutest corruption I've ever seen -
it looks like someone has been practising
their C programming with "How to write directly
to an Oracle data file without using Oracle".
Regards
Jonathan Lewis
http://www.jlcomp.demon.co.uk
Coming soon one-day tutorials:
Cost Based Optimisation
dd if=$HOME/.profile of=/u01/oradata/PROD/blahblah01.dbf bs=8192 seek=10
or
sqlplus / as sysdba
spool /u01/oradata/PROD/blahblah01.dbf
...
spool off
it doesn't take much!
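The same overwrite can be reproduced harmlessly against a scratch file (all paths below are temporary stand-ins, not a real datafile):

```shell
# Build a 16-"block" dummy datafile of zeros (8K blocks, as in the dump above).
DF=/tmp/fake_datafile.dbf
dd if=/dev/zero of="$DF" bs=8192 count=16 2>/dev/null
# Stamp arbitrary text over block 10, the same trick as the dd command above.
# conv=notrunc keeps the rest of the file intact.
printf 'not an oracle block' > /tmp/junk.txt
dd if=/tmp/junk.txt of="$DF" bs=8192 seek=10 conv=notrunc 2>/dev/null
# Read "block" 10 back: the header bytes are now plain text.
dd if="$DF" bs=8192 skip=10 count=1 2>/dev/null | head -c 19
```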
- Original Message -
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Sent: Wednesday, February 26, 2003
I wish it was someone trying to do that. This is what I get after I
restore a good RMAN backup to a bad disk.
I have hundreds of these messages (or similar ones) in that alert log
file. Sigh. My data center operations people are insisting that it
CAN'T be hardware problems.
Of course, we took
See Note:61685.1 (metalink)
Good luck
Waleed
-Original Message-
Sent: Monday, February 24, 2003 11:09 AM
To: Multiple recipients of list ORACLE-L
I recently inherited a 40GB 7.3.4 database (yes, it needs to be upgraded).
Last night I analyzed the tables and a corrupted block was found.
Suzy,
Just more questions:
Are you sure that this corruption has made it to the disk? It could be memory-related.
Can you export the table to /dev/null to double check the corruption?
What do you get when you look up that particular block in dba_extents?
- Kirti
-Original
Rama Velpuri's book had the answer to how to copy rows from a table when
a corrupted block exists. The downside is that the table is roughly 18GB
and has a LONG column.
So my next question, is there any way to determine by trace file when
the block corruption occurred? I'm still under the assumption
I think more recent versions of Oracle have options for skipping corrupt
blocks with exports.
One possible way:
If you have a valid primary key index on the table, and the index is in a
good tablespace, you might be able to cycle through all the primary keys,
select the row corresponding to that
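A rough sketch of that loop (table, column, and key names below are invented for illustration, and it assumes an Oracle client plus a pre-created salvage table):

```shell
# Hypothetical PL/SQL sketch run via sqlplus: fetch rows one primary key at a
# time, so a fetch that dies on the corrupt block loses only that one row.
sqlplus -s scott/tiger <<'EOF'
BEGIN
  -- With a valid PK index, this key list can be satisfied from the index
  -- alone, without touching the damaged table blocks.
  FOR k IN (SELECT id FROM big_table) LOOP
    BEGIN
      INSERT INTO big_table_salvage
        SELECT * FROM big_table WHERE id = k.id;  -- indexed single-row access
    EXCEPTION
      WHEN OTHERS THEN NULL;  -- row sits in the corrupt block; skip it
    END;
  END LOOP;
  COMMIT;
END;
/
EOF
```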
Have you tried copying it into a new table?
Assuming that you have tried and failed, try creating a new table something
like this:
create table new_table as (select * from old_table where substr(rowid,1,8) !=
'02457856');
I believe that that's the way the rowid was set up in Oracle 7.3.4 but my
Hi,
If you can afford to forget the data in the corrupted block, you can use
event 10231 to skip the corrupted block during table scans. Set the
event, do a CTAS with a new table name, and then
rename that as the original table after dropping the original table.
Here is the
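That sequence, sketched as SQL (the table names are invented, and the event is set at session level so it only affects this one session):

```shell
# Hypothetical sketch; needs an Oracle client and DBA privileges.
sqlplus -s / as sysdba <<'EOF'
-- Event 10231: skip software-corrupt blocks on full table scans.
ALTER SESSION SET EVENTS '10231 trace name context forever, level 10';
-- Copy what is still readable, then swap the tables.
CREATE TABLE my_table_copy AS SELECT * FROM my_table;
DROP TABLE my_table;
RENAME my_table_copy TO my_table;
EOF
```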
Suzy -
Here is an article that explains it well. Hopefully this will work with
7.3.4.
http://www.fortunecity.com/skyscraper/oracle/699/orahtml/oramag/16tech.html
Once you get past the immediate crisis, there are a couple of ways to detect
block corruption more quickly.
Dennis Williams
DBA,
Hi Suzi,
The first thing I would suggest is to determine if it is actually in use by the database (i.e. allocated to an object)... dbv has an "OS perspective" on the file and hence does not understand what objects contain what blocks. Metalink note Doc ID: 28814.1 has some good basic information on
Thanks Kirti. Interesting, dba_extents doesn't return rows for
block_id=57856. However, export to /dev/null does report the
corruption. Does this indicate disk or memory corruption?
Deshpande, Kirti wrote:
Suzy,
Just more questions:
Are your sure that this corruption has made it to the
Suzy,
The big question is whether or not the block actually contains data.
It appears that it does not, if I am reading the last few lines
correctly. This means you are in luck. Use a non-full table scan query
to extract the data, drop the tablespace and remove the datafile.
Recreate the
or a later version of note 28814.1,
which has a section on salvaging data from tables
--- Khedr, Waleed [EMAIL PROTECTED] wrote:
See Note:61685.1 (metalink)
Good luck
Waleed
-Original Message-
Sent: Monday, February 24, 2003 11:09 AM
To: Multiple recipients of list ORACLE-L
I
I would add to my previous post that the things that were supposed to allow
me to skip the corrupt block did not work. I guess the moral is: don't
believe everything you read on Metalink (or elsewhere). That's why I
eventually resorted to using the primary key index to grab one row at a
Subject: Re: corrupted block
Suzy,
I think it is memory-related. Maybe an uncaught memory leak or similar. Did you get
any ORA-600 errors?
The trace file reports 'Entire contents of block is zero - block never written'.
DBWR would, at some point, have crashed the database if it had attempted to write
to the corrupted block.
Stephen
RMAN ignored your corrupted block? Ya gotta tell us more man! We're
relying on it to catch everything. Did you have the MAXCORRUPT parameter
set?
Dennis Williams
DBA, 40%OCP, 100% DBA
Lifetouch, Inc.
[EMAIL PROTECTED]
-Original Message-
Sent: Monday, February 24, 2003 11:45
I'm not aware of the MAXCORRUPT parameter. There were two blocks involved.
We think it was caused by an incompatibility between an OS driver and some
piece of new storage hardware. The symptoms were that any query (including
an export) that scanned the table would be going along and then suddenly get
I had the same belief that RMAN catches the corruption earlier, but not NOW.
We had a database crash two months back, and while performing the
recovery (with RMAN) one of the restored datafiles was corrupted. *BIG SHOCK* to
everyone. We ran dbverify on the restored files, and the corruption showed
Nope, SQL-BackTrack. I need to dig into those docs to see if that is a
feature or a configuration issue.
DENNIS WILLIAMS wrote:
Stephen
RMAN ignored your corrupted block? Ya gotta tell us more man! We're
relying on it to catch everything. Did you have the MAXCORRUPT parameter
set?
This might work, we can afford to forget the data. We are in the
process of purging old data and this row already meets the criteria for
purge.
Thanks to everyone for your input! Haven't tried anything yet as I had
to drop this issue to work on a more urgent matter (if you can imagine
that).
Metalink Note 130605.1 is worth reading about setting MAXCORRUPT
for an RMAN whole-database backup (and checking the alert.log for corruption
messages).
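For reference, setting it looks roughly like this inside an RMAN run block (the datafile number and threshold are placeholders, not values from the thread):

```shell
# Hypothetical sketch; needs the RMAN client and a target database.
rman target / <<'EOF'
RUN {
  # Tolerate at most 1 corrupt block in datafile 9 before the backup aborts.
  SET MAXCORRUPT FOR DATAFILE 9 TO 1;
  BACKUP DATABASE;
}
EOF
```

Any blocks tolerated this way are recorded in v$backup_corruption, which is worth checking after the backup.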
Metalink Note 207413.1 describes RMAN incorrectly reporting block corruption.
Bugs 2068275, 1849726, and 1802432 may be interesting