Take notes, bidders!  These are good suggestions.

The trick, however, is making this process run without killing the
server, and having it all take place within about 10 minutes.

---mark


--- Stephen Garrett <[EMAIL PROTECTED]> wrote:
> How many characters are we talking about here? (i.e., the total # of
> bytes of all of these columns, assuming ASCII chars)
> 
> o Create a temp table with the same structure as the end result table,
>   with a seq id column. (table 1)
> 
> o Bulk insert into the temp table, with an ascending number key as the
>   added column, which corresponds to the row number of the data file.
> 
> o Create another temp table (table 2)
>   - Insert into this table the row number and a concat of all columns,
>     so the organization looks like temp table 1.
> 
> o This essentially creates two tables, with a numeric key and a very
>   wide text column in each.
> 
> o Do your INSERT INTO table2 from table1 where "text columns from
>   table 1 not in table 2".
>   - Follow with appropriate row deletions.
> 
> o Delete all lines from the orig DB table.
> 
> o Insert into the orig DB table the "MID-Strings" from temp table 2.
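
A rough T-SQL sketch of those steps, purely for illustration. Every name in
it is made up (orig_table for the real table, #table1 and #table2 for the
temps), and only three of the 15 columns are written out to keep it short:

-- table 1: same structure as the end result table, plus a sequence id;
-- the IDENTITY column supplies the ascending row number of the data file
CREATE TABLE #table1 (
    seq_id INT IDENTITY(1,1),
    col1 VARCHAR(100),
    col2 VARCHAR(100),
    col3 VARCHAR(100)   -- ...through col15
)

-- load the data file into #table1 here (BULK INSERT, bcp, or DTS,
-- depending on the file format)

-- table 2: numeric key plus one very wide text column, filled with the
-- concat of all columns already in the database
CREATE TABLE #table2 (
    row_id INT IDENTITY(1,1),
    all_cols VARCHAR(4000)
)

INSERT INTO #table2 (all_cols)
SELECT col1 + '|' + col2 + '|' + col3   -- ...through col15
FROM orig_table

-- add the imported rows whose concatenated text is not already there
-- (a NULL in any column makes the whole concat NULL; ISNULL() each
-- column if that matters)
INSERT INTO #table2 (all_cols)
SELECT col1 + '|' + col2 + '|' + col3
FROM #table1
WHERE col1 + '|' + col2 + '|' + col3
      NOT IN (SELECT all_cols FROM #table2)

-- last two steps: empty the original table, then rebuild it by splitting
-- the wide strings in #table2 back into the 15 columns (SUBSTRING on
-- fixed offsets, i.e. the "MID-Strings" step)
DELETE FROM orig_table

The '|' delimiter is only there so that, say, 'ab'+'c' and 'a'+'bc' don't
collide; use something that cannot appear in the data.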
> 
> Or,
> =========
> >> And of course the other variation is to write a custom C or VB app
> >> that reads the temp and origin tables into memory, does all of the
> >> compares, sets flags as to what has to be updated, then updates
> >> all of the rows as needed.
> =========
> 
> Steve
> 
> 
> At 11:46 AM 3/15/2002 -0800, Mark Warrick wrote:
> >There is no primary key.
> >
> >A "primary key" is formed by comparing all 15
> columns
> >of data with all 15 columns of data being imported.
> >
> >Got any other suggestions?
> >
> >---mark
> >
> >--- Stephen Garrett <[EMAIL PROTECTED]> wrote:
> >> Off the top of my head:
> >> 
> >> o Create a temp table with the same structure as the end result table
> >> o Bulk insert into the temp table
> >> o Write an SP
> >>   - use a cursor, loop thru all records in the original table
> >>   - for each row, select from Temp the base matching row
> >>     (assumes there is a primary key)
> >>   - in the SP, compare the columns of each selection
> >>   - if at least one does not match, break out early and
> >>   - perform an update on the current row for all fields
> >>     (variation is to update as you go)
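
A bare-bones T-SQL sketch of that stored procedure. The names are invented
(orig_table, import_table for the bulk-loaded data, a shared key column
called id), and only three of the 15 columns are spelled out:

CREATE PROCEDURE sync_from_import AS
BEGIN
    DECLARE @id INT
    DECLARE @old1 VARCHAR(100), @old2 VARCHAR(100), @old3 VARCHAR(100)
    DECLARE @new1 VARCHAR(100), @new2 VARCHAR(100), @new3 VARCHAR(100)

    -- loop thru all records in the original table
    DECLARE row_cur CURSOR FOR
        SELECT id, col1, col2, col3 FROM orig_table
    OPEN row_cur
    FETCH NEXT FROM row_cur INTO @id, @old1, @old2, @old3

    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- pull the matching row from the imported data (assumes the shared key)
        SELECT @new1 = col1, @new2 = col2, @new3 = col3
        FROM import_table
        WHERE id = @id

        -- if at least one column differs, update the current row for all
        -- fields (NULLs never compare unequal here; ISNULL() them if needed)
        IF @new1 <> @old1 OR @new2 <> @old2 OR @new3 <> @old3
            UPDATE orig_table
            SET col1 = @new1, col2 = @new2, col3 = @new3
            WHERE id = @id

        FETCH NEXT FROM row_cur INTO @id, @old1, @old2, @old3
    END

    CLOSE row_cur
    DEALLOCATE row_cur
END

Cursoring over 85,000 rows means 85,000 single-row selects, so it is steady
but not fast; the set-based sketch further up avoids the loop entirely.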
> >> 
> >> 
> >> 
> >> Anyway, those are my thoughts...
> >> 
> >> Steve
> >> 
> >> 
> >> At 11:02 AM 3/15/2002 -0800, Mark Warrick wrote:
> >> >Hello All,
> >> >
> >> >I have a database of about 85,000 records which has 15 columns.  I
> >> >need to update this database with a datafile that contains the same
> >> >schema and just as many records.
> >> >
> >> >For each row that is going to be imported, I have to compare all 15
> >> >columns of data for each row against all 15 columns of each row in
> >> >the database to see if there's a match, and if not, then import the
> >> >new data.
> >> >
> >> >Every query I've written with ColdFusion to do this seems to kill
> >> >the server.  Even comparing one row of data seems to put extreme
> >> >load on the server.
> >> >
> >> >Anyone got a clue as to how I might accomplish this goal?  I may be
> >> >willing to pay somebody to do this.
> >> >
> >> >---mark
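
Once the datafile is sitting in a staging table, the compare Mark describes
can also be expressed as a single set-based statement. A minimal sketch of
just that statement, again with made-up names (import_table, orig_table,
and col1..col3 standing in for the 15 columns):

-- import every staged row whose columns do not all match an existing row
INSERT INTO orig_table (col1, col2, col3)   -- ...through col15
SELECT i.col1, i.col2, i.col3
FROM import_table i
WHERE NOT EXISTS (
    SELECT 1
    FROM orig_table o
    WHERE o.col1 = i.col1
      AND o.col2 = i.col2
      AND o.col3 = i.col3
      -- ...and so on for the remaining columns
)

Nullable columns need ISNULL() on both sides of each test, since
NULL = NULL is never true in SQL.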
