how about this:

(avg_row_size + delimiters) * number_of_rows = total_bytes
total_bytes / 1900000000 = number_of_pieces (round up; 1.9 billion bytes keeps each piece safely under the 2 Gb limit)
number_of_rows / number_of_pieces = rows_per_piece
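for example (all made-up numbers): 200 bytes per row including delimiters * 50,000,000 rows = 10,000,000,000 bytes; 10,000,000,000 / 1,900,000,000 rounds up to 6 pieces; 50,000,000 / 6 = roughly 8,400,000 rows per piece, about 1.7 Gb each.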
select rows_per_piece rows at a time, spooling each batch to its own file (sqlplus sketch below).
then sqlldr all the pieces.
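here's a minimal sqlplus sketch of the idea -- the table BIG_TABLE, its key column PK, the data columns C1/C2, the control file name and the 1,000,000-row piece size are all made-up, plug in whatever the arithmetic above gives you:

set pagesize 0 linesize 500 feedback off termout off trimspool on

spool piece1.dat
select c1 || '|' || c2
  from (select c1, c2, rownum rn
          from (select c1, c2 from big_table order by pk))
 where rn between 1 and 1000000;
spool off

spool piece2.dat
select c1 || '|' || c2
  from (select c1, c2, rownum rn
          from (select c1, c2 from big_table order by pk))
 where rn between 1000001 and 2000000;
spool off

-- repeat for the remaining pieces, then load each one:
-- $ sqlldr userid=scott/tiger control=big_table.ctl data=piece1.dat
-- $ sqlldr userid=scott/tiger control=big_table.ctl data=piece2.dat

the order by on the key is there so every query sees the rows in the same order; without it the rownum ranges aren't guaranteed to carve the table into non-overlapping slices.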
joe
>>> [EMAIL PROTECTED] 07/30/01 02:20PM >>>

Hi List,
I need to transport a few tables from one instance to another, and of course
found the sqlldr method much faster than exp/imp. But the problem is with
large tables: when I spool such a table to a flat file, it stops spooling
after about 2 Gb. Any possible solutions to get around it? I am on AIX
4.3.3 / 8.1.5.
My ulimits on AIX are

time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         unlimited
stack(kbytes)        unlimited
memory(kbytes)       unlimited
coredump(blocks)     unlimited
nofiles(descriptors) 2000

Thanks
Satish
