We're on SQL Server 2000, app servers are MX 6.1 on Win2K3.

I saw the BULK INSERT command; the problem is that there isn't a 1:1
relationship between the columns in the database and the data in the
tab-delimited text file (there are some system-specific columns in our
database that don't exist in the text file), and there doesn't appear to be
a way to specify columns in the BULK INSERT command.
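
One way around that on SQL 2000 might be to BULK INSERT into a staging table
whose columns match the text file exactly (or into a view over the real
table), and then do an INSERT ... SELECT into the real table, supplying the
system-specific columns at that point. Rough, untested sketch: the datasource,
table, and column names (customer_staging, customer, cust_name, etc.) are just
placeholders, the file path has to be one the SQL Server box itself can read,
and the login behind the datasource needs bulk insert rights.

<!--- Load the raw file into a staging table laid out like the text file. --->
<cfquery name="loadStaging" datasource="myDSN">
    BULK INSERT customer_staging
    FROM 'D:\imports\customers.txt'
    WITH (
        FIELDTERMINATOR = '\t',
        ROWTERMINATOR   = '\n'
    )
</cfquery>

<!--- Copy into the real table, filling in the system-specific columns here. --->
<cfquery name="copyRows" datasource="myDSN">
    INSERT INTO customer (cust_name, cust_email, created_by, created_date)
    SELECT cust_name, cust_email, 'IMPORT', GETDATE()
    FROM customer_staging
</cfquery>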

The other issue is that my script keeps track of the records that can't be
imported (there's no way to guarantee the data is exported from the original
app exactly to spec, so we do get nulls in required fields and that sort of
thing), and I don't see a way to do that with BULK INSERT.
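
If the staging-table route works, the reject tracking could stay in SQL as
well: load everything raw (the staging columns would all allow nulls), then
pull the rows that fail validation into a reject table before copying the
clean ones over. Again just a sketch, with made-up names and
cust_name/cust_email standing in for whatever fields are actually required.

<!--- Flag rows that would violate the real table's requirements. --->
<cfquery name="flagRejects" datasource="myDSN">
    INSERT INTO customer_rejects (cust_name, cust_email, reject_reason)
    SELECT cust_name, cust_email, 'Required field was null'
    FROM customer_staging
    WHERE cust_name IS NULL OR cust_email IS NULL
</cfquery>

The INSERT ... SELECT in the previous sketch would then get the inverse WHERE
clause so only the clean rows make it into the real table.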

Pete

Greg Luce wrote:
> Are you using MSSQL? If so, check out BULK INSERT.
> http://doc.ddart.net/mssql/sql2000/html/tsqlref/ts_ba-bz_4fec.htm
>
> -----Original Message-----
> From: Pete Ruckelshaus - CFList [mailto:[EMAIL PROTECTED]
> Sent: Thursday, May 20, 2004 10:37 AM
> To: CF-Talk
> Subject: Text file problem
>
> Hi,
>
> I'm writing an application that allows our client support reps to import
> customer files of a predefined format into our database.  The CSR
> uploads (form via CFFILE) the tab-delimited text file to the server, and
> then I loop through the file contents and insert the data into the
> database.  So far, it has worked great.
>
> Well, we're getting bigger clients and we just got a 77,000 record file
> to import, and my import script died at about 52k records.  In debugging
> it, what I found was that I was bumping against the max JVM heap size
> (we have it set at 512MB for all of our servers, and we came to this
> number after a long and painful period of performance and reliability
> testing of our application); if I bumped up the max heap size on my
> development workstation, the import script worked fine.  Unfortunately,
> that's not an option for our production servers, and I also expect that
> our import files will keep getting larger.
>
> So, my thinking is to split the really big import file into a number of
> smaller files, probably 40-50k records per tab-delimited file.  However,
> I'm trying to figure out the most elegant way to split these big files
> into smaller ones without killing performance.  Has anyone done this?
>
> Thanks,
>
> Pete
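
On the splitting question in the quoted message: if the current script reads
the whole file into a variable first (CFFILE action="read" or similar), that
would explain the heap ceiling, and one option is to stream the file with
java.io.BufferedReader and write it back out in chunks of 40-50k lines. Rough,
untested sketch for CFMX 6.1; the paths, chunk size, and variable names are
just placeholders.

<cfscript>
    linesPerFile = 40000;
    sourcePath   = "D:\uploads\bigimport.txt";
    chunkDir     = "D:\uploads\chunks\";

    // Stream the source file one line at a time instead of reading it whole.
    reader = createObject("java", "java.io.BufferedReader").init(
                 createObject("java", "java.io.FileReader").init(sourcePath));

    fileNum   = 1;
    lineCount = 0;
    writer = createObject("java", "java.io.PrintWriter").init(
                 createObject("java", "java.io.FileWriter").init(
                     chunkDir & "chunk_" & fileNum & ".txt"));

    // readLine() returns Java null at EOF, which leaves "line" undefined in CF.
    line = reader.readLine();
    while (isDefined("line")) {
        writer.println(line);
        lineCount = lineCount + 1;

        // Roll over to a new chunk file every linesPerFile records.
        if (lineCount GTE linesPerFile) {
            writer.close();
            fileNum   = fileNum + 1;
            lineCount = 0;
            writer = createObject("java", "java.io.PrintWriter").init(
                         createObject("java", "java.io.FileWriter").init(
                             chunkDir & "chunk_" & fileNum & ".txt"));
        }
        line = reader.readLine();
    }
    writer.close();
    reader.close();
</cfscript>

The same streaming approach would also let the existing loop-and-insert script
process the original file directly, line by line, without ever holding more
than one record in memory.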