Hi,

I'm writing an application that allows our client support reps to import
customer files of a predefined format into our database.  The CSR
uploads the tab-delimited text file to the server (via a form post
handled by CFFILE), and then I loop through the file contents and insert
the data into the database.  So far, it has worked great.

Well, we're getting bigger clients, and we just got a 77,000-record file
to import; my import script died at about 52,000 records.  While
debugging it, I found that I was bumping up against the maximum JVM heap
size (we have it set to 512MB on all of our servers, a number we arrived
at after a long and painful period of performance and reliability
testing of our application); if I bumped up the max heap size on my
development workstation, the import script ran fine.  Unfortunately,
that's not an option for our production servers, and I also expect that
our import files will keep getting larger.

So, my thinking is to split the really big import file into a number of
smaller files, probably 40,000-50,000 records per tab-delimited file.
However, I'm trying to figure out the most elegant way to split these
big files into smaller ones without killing performance.  Has anyone
done this?
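To make the question concrete, something like this is what I have in
mind (a rough, untested sketch; it assumes CF8+ so fileOpen /
fileReadLine / fileWriteLine are available, and the paths and the
40,000-line chunk size are just placeholders):

<cfscript>
    // Rough sketch: stream the source file one line at a time and
    // write it out to numbered chunk files, so nothing big ever has
    // to sit in the heap.
    sourcePath   = "/data/imports/customers.txt";
    chunkDir     = "/data/imports/chunks/";
    linesPerFile = 40000;

    inFile    = fileOpen(sourcePath, "read");
    chunkNum  = 1;
    lineCount = 0;
    outFile   = fileOpen(chunkDir & "chunk_" & chunkNum & ".txt", "write");

    while (NOT fileIsEOF(inFile)) {
        line = fileReadLine(inFile);
        fileWriteLine(outFile, line);
        lineCount = lineCount + 1;

        // Current chunk is full: close it and start the next one
        if (lineCount GTE linesPerFile AND NOT fileIsEOF(inFile)) {
            fileClose(outFile);
            chunkNum  = chunkNum + 1;
            lineCount = 0;
            outFile   = fileOpen(chunkDir & "chunk_" & chunkNum & ".txt", "write");
        }
    }

    fileClose(outFile);
    fileClose(inFile);
</cfscript>

My worry is whether reading and rewriting 77,000+ lines this way (on top
of the inserts themselves) will be too slow, or whether there's a
cleaner way to do it.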

Thanks,

Pete