The honor should go to Steve Adams; I read it on his website.
Tanel.
And we do need education, because we do want to be bricks in the wall.
You seem to know everything, so please, don't leave us alone.
On 2003.11.07 12:49, Tanel Poder wrote:
One of the guys here did some research and found that files over 32GB can cause data
dictionary corruption. Anyone have problems with this? We are using an automated
transportable tablespace process with a lot of logic, between many instances and
servers.
We would prefer not to complicate
I think there was something in Metaclick about files
in 32-bit OS's not being able to extend much over
32Gb, even with extensions. That's Unix flavours and
32-bit Windoze. Much larger than that and you are
definitely in exclusive 64-bit territory.
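For what it's worth, the ~32GB figure also falls out of Oracle's own per-datafile limit, independent of whether the OS is 32-bit: a (smallfile) datafile is capped at 2**22 - 1 blocks, so the ceiling scales with the database block size. A quick sketch (the 4,194,303-block cap is the documented limit; the rest is illustrative arithmetic):

```python
# A datafile is limited to 2**22 - 1 = 4,194,303 blocks, so the size
# ceiling depends on DB_BLOCK_SIZE, not only on the operating system.
MAX_BLOCKS = 2**22 - 1

def max_datafile_gb(block_size_bytes):
    """Maximum smallfile datafile size in GB for a given block size."""
    return MAX_BLOCKS * block_size_bytes / 2**30

for bs in (2048, 4096, 8192, 16384):
    print(f"{bs:>5}-byte blocks -> {max_datafile_gb(bs):6.1f} GB per datafile")
```

With the common 8KB block size this works out to just under 32GB, which matches the figure in the original post.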
Cheers
Nuno Souto
[EMAIL PROTECTED]
-
Ryan,
Oracle can certainly transport more than one datafile at a
time. I'm not sure what you mean by the datafiles needing to
be 'atomic' to be transported, but that is certainly a
limitation of the application logic, not of Oracle. You could
transport every single PERMANENT tablespace in a database.
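A minimal sketch of moving several tablespaces as one set (tablespace names here are invented for illustration; DBMS_TTS.TRANSPORT_SET_CHECK is the supplied self-containment check, and exact export syntax varies by version):

```sql
-- Tablespace names below are invented for illustration.
ALTER TABLESPACE users_a READ ONLY;
ALTER TABLESPACE users_b READ ONLY;

-- Verify the set is self-contained before exporting its metadata:
EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('users_a,users_b', TRUE);
SELECT * FROM transport_set_violations;

-- Then export the metadata for the whole set in one operation
-- (classic exp shown; syntax varies by version):
--   exp transport_tablespace=y tablespaces=users_a,users_b file=tts.dmp
```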
For reasons why, think about it from the backup/restore
perspective. Which database can be backed up or restored
faster: one with 100 2Gb datafiles, or one with 2 100Gb
datafiles? Datafile management is just like extent
management. As Roger Waters said, "All in all, they're all
just bricks in the wall."
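The backup/restore point is essentially a parallelism argument: with several channels running at a fixed per-channel throughput, elapsed time is set by the largest chunk of work any one channel gets. A toy sketch (channel count and throughput figures are invented):

```python
# With parallel backup channels, many small datafiles spread evenly,
# while a couple of huge files leave most channels idle.
def backup_hours(file_sizes_gb, channels, gb_per_hour=50):
    """Greedy longest-first assignment of whole files to channels."""
    loads = [0.0] * channels
    for size in sorted(file_sizes_gb, reverse=True):
        loads[loads.index(min(loads))] += size  # least-loaded channel
    return max(loads) / gb_per_hour

many_small = [2] * 100   # 100 x 2GB datafiles
few_large  = [100] * 2   # 2 x 100GB datafiles

print(backup_hours(many_small, channels=8))  # ~0.5h: work spreads evenly
print(backup_hours(few_large,  channels=8))  # 2.0h: only two channels busy
```

Same total data either way; the 2-file layout simply can't use more than two channels at once.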