How about using a pipe and the split command if necessary?
 
If your unix machine can handle files greater than 2 GB but SQL*Plus can't, then (see the scripted sketch after these steps):
 
1) mknod pipe.lst p
2) cat pipe.lst >mybigfile &
3) sqlplus u/p
        spool pipe.lst
        @my_commands.lst       
        spool off
        exit
4) rm pipe.lst
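 
Rolled into one script, that first approach looks roughly like this (a sketch only; u/p, my_commands.lst and mybigfile are the same placeholders as in the steps above):

#!/bin/ksh
# create the named pipe and a background cat that drains it into the real file
mknod pipe.lst p
cat pipe.lst > mybigfile &

# spool through the pipe; the background cat writes the data out to mybigfile
sqlplus -s u/p <<EOF
spool pipe.lst
@my_commands.lst
spool off
exit
EOF

# let the background cat finish flushing, then clean up
wait
rm pipe.lst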
 
If the unix machine also has a 2 GB limit, then (see the note on the split pieces after these steps):
 
1) mknod pipe.lst p
2) split -b 1073741824 pipe.lst mybigfile_ &
3) sqlplus u/p
        spool pipe.lst
        @my_commands.lst       
        spool off
        exit
4) rm pipe.lst
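 
With a 1 GB piece size, split writes the pieces as mybigfile_aa, mybigfile_ab, and so on, each comfortably under the 2 GB limit. Loading them afterwards is just one sqlldr run per piece, e.g. (a sketch; mytab.ctl stands for whatever control file you already use for a single-file load):

for f in mybigfile_*
do
    sqlldr userid=u/p control=mytab.ctl data=$f log=$f.log
done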
 

These examples should give you enough to work with.
 
 
 
 -----Original Message-----
From: JOE TESTA [mailto:[EMAIL PROTECTED]]
Sent: Monday, July 30, 2001 12:52 PM
To: Multiple recipients of list ORACLE-L
Subject: Re: 2 Gb file limit problem

how about this:
 
(avg_row_size + delimiters) * number_of_rows = total bytes.
 
total bytes / 1900000000 (a little headroom under the 2 GB limit) = number of pieces.
 
number_of_rows / number_of_pieces = number of rows per piece
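 
For example, with made-up numbers: 50,000,000 rows at roughly 200 bytes each plus 10 bytes of delimiters is (200 + 10) * 50,000,000 = 10.5 GB; 10.5 GB / 1.9 GB rounds up to 6 pieces, so about 8,300,000 rows per piece.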
 
select that number of rows at a time, spooling each piece individually.
 
then sqlldr all the pieces.
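 
One way to carve out non-overlapping pieces is a MOD() on a numeric key, one spool per remainder (a sketch only; mytab, empno, ename and sal are made-up names, and 6 is the piece count from the arithmetic above):

-- piece 0 of 6; rerun with mod(empno, 6) = 1 .. 5 for the remaining pieces
set heading off feedback off pagesize 0 trimspool on
spool mytab_0.dat
select empno||'|'||ename||'|'||sal
from   mytab
where  mod(empno, 6) = 0;
spool off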
 
joe
 


>>> [EMAIL PROTECTED] 07/30/01 02:20PM >>>
Hi List,
 
I need to transport a few tables from one instance to another and of course found the sqlldr method much faster than exp/imp.
But the problem is with large tables. When I spool such tables to a flat file, it stops spooling after about 2 GB. Any possible solutions to get around it? I am on AIX 4.3.3/8.1.5
 
My ulimits on AIX are:
time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         unlimited
stack(kbytes)        unlimited
memory(kbytes)       unlimited
coredump(blocks)     unlimited
nofiles(descriptors) 2000
 
Thanks
 
Satish 
