One of the guys here did some research and found that datafiles over 32GB can cause data
dictionary corruption. Has anyone run into this? We are using an automated
transportable tablespace process with a lot of logic, spanning many instances and
servers.
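For what it's worth, the 32GB figure lines up with Oracle's per-datafile block limit: a smallfile datafile is capped at 2^22 (4,194,304) blocks, so the byte ceiling depends on the block size. A quick sketch of the arithmetic (assuming the standard 2^22-block cap; the exact corruption behavior near the limit is what's in question here):

```python
# Oracle smallfile datafiles are limited to 2**22 blocks.
# Max datafile size = block count * block size.
BLOCKS_MAX = 2 ** 22

for block_size_kb in (2, 4, 8, 16, 32):
    max_bytes = BLOCKS_MAX * block_size_kb * 1024
    print(f"{block_size_kb}K blocks -> max datafile {max_bytes // 2**30} GB")
```

With the common 8K block size this works out to exactly 32GB, which is why files at or beyond that size are the ones being questioned.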

We would prefer not to complicate the logic by introducing additional
tablespaces to transport (we can't use multiple datafiles in one tablespace because each
datafile needs to be atomic to be transported).

So, does anyone use datafiles larger than 32GB? What happened? I know most of you don't, but
we are in a unique situation.

-- 
Please see the official ORACLE-L FAQ: http://www.orafaq.net
-- 
Author: <[EMAIL PROTECTED]
  INET: [EMAIL PROTECTED]

Fat City Network Services    -- 858-538-5051 http://www.fatcity.com
San Diego, California        -- Mailing list and web hosting services
