Thanks Joe,
Yes, that is what I did as a workaround yesterday, and I had to be around for a long time on a weekend. I have most of these processes automated, and it works fine with 95% of the tables. I am doing this for about 600 tables, so this would mean going back to change the code again. I was wondering whether there is a straightforward option I am missing.
 
Satish


>>> "JOE TESTA" <[EMAIL PROTECTED]> 07/30/01 11:44AM >>>
how about this:
 
(avg_row_size + delimiters) * number_of_rows = total_bytes

total_bytes / 1900000000 = number_of_pieces   (1.9 GB keeps each piece safely under the 2 GB limit)

number_of_rows / number_of_pieces = rows_per_piece
 
select that many rows at a time, spooling each piece to its own file.
 
then sqlldr all the pieces.
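something along these lines might do it -- a rough sketch only: BIG_TAB, the columns, the scott/tiger login and the chunk numbers are all made-up placeholders, and without an ORDER BY the rownum windowing assumes nothing is changing the table between runs:

#!/bin/ksh
# Both values below are made-up examples -- plug in your own
# numbers from the arithmetic above.
ROWS_PER_PIECE=1000000   # number_of_rows / number_of_pieces
PIECES=5                 # total_bytes / 1900000000, rounded up

i=0
while [ $i -lt $PIECES ]
do
    LO=`expr $i \* $ROWS_PER_PIECE`
    HI=`expr $LO + $ROWS_PER_PIECE`
    # Classic rownum windowing: the inner view numbers the first
    # HI rows, the outer filter keeps rows LO+1 .. HI.
    # Ranging on a real key column is safer if one exists.
sqlplus -s scott/tiger <<EOF
set pages 0 feedback off trimspool on linesize 2000
spool piece$i.dat
select col1||'|'||col2||'|'||col3
  from (select t.*, rownum rn
          from big_tab t
         where rownum <= $HI)
 where rn > $LO;
spool off
exit
EOF
    i=`expr $i + 1`
done

then run sqlldr once per piece*.dat file (or list them all as INFILEs in one control file).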
 
joe
 


>>> [EMAIL PROTECTED] 07/30/01 02:20PM >>>
Hi List,
 
I need to transport a few tables from one instance to another, and of course found the sqlldr method much faster than exp/imp.
But the problem is with large tables: when I spool such a table to a flat file, it stops spooling after about 2 GB. Are there any possible solutions to get around this? I am on AIX 4.3.3 / Oracle 8.1.5.
 
My ulimits on AIX  are
time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         unlimited
stack(kbytes)        unlimited
memory(kbytes)       unlimited
coredump(blocks)     unlimited
nofiles(descriptors) 2000
 
Thanks
 
Satish 
