Hi Steve,

Yes, thank you.  I'm aware of database normalization.
However, the source file is something I have no control
over, and it has no primary key.

---mark

--- Stephen Garrett <[EMAIL PROTECTED]> wrote:
> I guess I am in too much of a hurry today, sorry.
> 
> Good DB organization practice would dictate that if
> the data set
> that you are dealing with has no primary key, then a
> primary key
> should be added as a column when creating the data
> table that will
> contain the data set (records of data). Hence the
> SEQ comment.
> 
> Now other portions of an application may make use of
> this SEQ key,
> but perhaps not. If they do, then it does not seem
> proper to me
> to always delete the previous row, with its unique
> SEQ, and recreate
> the exact same row of data with a different SEQ (ID
> number).
> 
> Anyway, those were my thoughts, and hope that helps.
> 
> Steve
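[Steve's surrogate-key point can be sketched like this: a minimal sketch
using Python's sqlite3, where the table and column names are hypothetical
stand-ins, not taken from the actual data set.]

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# The incoming file has no key, so add a SEQ column ourselves.
# INTEGER PRIMARY KEY makes SQLite auto-assign a unique integer.
conn.execute("""
    CREATE TABLE parts (
        seq   INTEGER PRIMARY KEY,  -- surrogate key, not in the source file
        name  TEXT,
        price REAL
    )
""")

# Insert rows without specifying seq; SQLite fills it in.
conn.executemany(
    "INSERT INTO parts (name, price) VALUES (?, ?)",
    [("widget", 1.25), ("gadget", 3.50)],
)

rows = conn.execute("SELECT seq, name FROM parts ORDER BY seq").fetchall()
print(rows)  # each row now carries a stable surrogate key
```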
> 
> At 03:28 PM 3/15/2002 -0500, Cottell, Matthew wrote:
> >Forgive me if I don't know the terminology,
> >I'm not a full time db developer.
> >But a SEQ Key sounds like a primary key.
> >And it was already stated that there is no primary
> key.
> >
> >And I'm not sure why a row wouldn't stay intact.
> >Could you elaborate?
> >
> >Matt
> >
> >
> >> -----Original Message-----
> >> From:      Stephen Garrett [SMTP:[EMAIL PROTECTED]]
> >> Sent:      Friday, March 15, 2002 3:14 PM
> >> To:        SQL
> >> Subject:   RE: need help with large database update
> >> 
> >> I think this is a good idea, but doesn't it
> depend upon whether
> >> you need to keep an original row intact, as it
> probably has
> >> a numeric SEQ Key assigned to it?
> >> 
> >> How could you do that with the distinct method,
> and keep the original
> >> row in place? (e.g. it had an assigned part number
> >> or some such)
> >> 
> >> Steve
> >> 
> >> At 03:05 PM 3/15/2002 -0500, Cottell, Matthew
> wrote:
> >> >Couldn't you insert all the records in one
> fell swoop?
> >> >Then perform a Select Distinct on all the rows?
> >> >Insert those records into a new table, and
> voila, you're done.
> >> >
> >> >As I understand what you're saying,
> >> >either the row is a complete match, or it's a
> completely unique record.
> >> >There are no instances of some of the data being
> the same and having to
> >> choose
> >> >which record to include.
> >> >Or am I missing something?
> >> >
> >> >Matt
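[Matt's load-then-dedupe idea can be sketched with sqlite3; the
two-column schema here is a hypothetical stand-in for the 15 columns.]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging (name TEXT, price REAL)")

# Load everything, duplicates and all, in one fell swoop.
conn.executemany(
    "INSERT INTO staging VALUES (?, ?)",
    [("widget", 1.25), ("widget", 1.25), ("gadget", 3.50)],
)

# Then SELECT DISTINCT into the real table; only exact
# full-row matches collapse into a single row.
conn.execute("CREATE TABLE parts AS SELECT DISTINCT * FROM staging")

count = conn.execute("SELECT COUNT(*) FROM parts").fetchone()[0]
print(count)  # the duplicate widget row is gone
```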
> >> >
> >> >
> >> >> -----Original Message-----
> >> >> From:   Mark Warrick [SMTP:[EMAIL PROTECTED]]
> >> >> Sent:   Friday, March 15, 2002 2:45 PM
> >> >> To:     SQL
> >> >> Subject:        Re: need help with large database
> update
> >> >> 
> >> >> I have to compare each row of new data with
> existing
> >> >> data because there are no primary keys.  So I
> can't
> >> >> just append new data into the table because
> then I
> >> >> might have duplicate data in the table.
> >> >> 
> >> >> Just to be clear, "New" data doesn't
> necessarily mean
> >> >> that the data doesn't already exist in the
> table.  It
> >> >> just means it's a new datafile.
> >> >> 
> >> >> ---mark
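[One common way to do that comparison in a single statement is an
anti-join on every column, so rows already in the table, and any SEQ
assigned to them, stay untouched. A minimal sqlite3 sketch, with a
hypothetical two-column schema standing in for the 15 columns:]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE parts    (name TEXT, price REAL);
    CREATE TABLE incoming (name TEXT, price REAL);
    INSERT INTO parts    VALUES ('widget', 1.25);
    INSERT INTO incoming VALUES ('widget', 1.25), ('gadget', 3.50);
""")

# Insert only incoming rows with no exact match on every column;
# existing rows are never deleted or recreated.
conn.execute("""
    INSERT INTO parts
    SELECT i.name, i.price
    FROM incoming i
    WHERE NOT EXISTS (
        SELECT 1 FROM parts p
        WHERE p.name = i.name AND p.price = i.price
    )
""")

rows = conn.execute("SELECT name FROM parts ORDER BY name").fetchall()
print(rows)  # only 'gadget' was actually new
```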
> >> >> 
> >> >> 
> >> >> --- Douglas Brown <[EMAIL PROTECTED]> wrote:
> >> >> > Question....Why are you comparing the data
> before
> >> >> > updating? If the data that you are updating
> with is
> >> >> > the same data, it would not matter, and also
> if there
> >> >> > is new data then that would be adjusted
> accordingly.
> >> >> > Maybe I'm confused.
> >> >> > 
> >> >> > 
> >> >> > "Success is a journey, not a destination!!"
> >> >> > 
> >> >> > 
> >> >> > 
> >> >> > Doug Brown
> >> >> > ----- Original Message ----- 
> >> >> > From: "Kelly Matthews" <[EMAIL PROTECTED]>
> >> >> > To: "SQL" <[EMAIL PROTECTED]>
> >> >> > Sent: Friday, March 15, 2002 11:05 AM
> >> >> > Subject: Re: need help with large database
> update
> >> >> > 
> >> >> > 
> >> >> > > why not do it via a stored procedure...
> much
> >> >> > quicker...
> >> >> > > 
> >> >> > > ---------- Original Message
> >> >> > ----------------------------------
> >> >> > > From: Mark Warrick <[EMAIL PROTECTED]>
> >> >> > > Reply-To: [EMAIL PROTECTED]
> >> >> > > Date:  Fri, 15 Mar 2002 11:02:51 -0800
> (PST)
> >> >> > > 
> >> >> > > >Hello All,
> >> >> > > >
> >> >> > > >I have a database of about 85,000 records
> which
> >> >> > has 15
> >> >> > > >columns.  I need to update this database
> with a
> >> >> > > >datafile that contains the same schema
> and just
> >> >> > as
> >> >> > > >many records.
> >> >> > > >
> >> >> > > >For each row that is going to be
> imported, I have
> >> >> > to
> >> >> > > >compare all 15 columns of data for each
> row
> >> >> > against
> >> >> > > >all 15 columns of each row in the
> database to see
> >> >> > if
> >> >> > > >there's a match, and if not, then import
> the new
> >> >> > data.
> >> >> > > >
> >> >> > > >Every query I've written with ColdFusion
> to do
> >> >> > this
> >> >> > > >seems to kill the server.  Even comparing
> one row
> >> >> > of
> >> >> > > >data seems to put extreme load on the
> server.
> >> >> > > >
> >> >> > > >Anyone got a clue as to how I might
> accomplish
> >> >> > this
> >> >> > > >goal?  I may be willing to pay somebody
> to do
> >> >> > this.
> >> >> > > >
> >> >> > > >---mark
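[For the full-row comparison described above, a set difference finds the
genuinely new rows in one statement instead of comparing row by row from
ColdFusion. A sqlite3 sketch, with a hypothetical small schema in place
of the 15 columns:]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE existing (name TEXT, price REAL);
    CREATE TABLE newfile  (name TEXT, price REAL);
    INSERT INTO existing VALUES ('widget', 1.25);
    INSERT INTO newfile  VALUES ('widget', 1.25), ('gadget', 3.50);
""")

# EXCEPT compares entire rows: keep only datafile rows
# that have no exact match already in the table.
new_rows = conn.execute(
    "SELECT * FROM newfile EXCEPT SELECT * FROM existing"
).fetchall()
print(new_rows)  # rows not already present

conn.executemany("INSERT INTO existing VALUES (?, ?)", new_rows)
total = conn.execute("SELECT COUNT(*) FROM existing").fetchone()[0]
print(total)  # no duplicates were introduced
```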
> >> >> > > >
> >> >> > > >
> >> >> > >
> >> >> >
> >> >> > > >
> >> >> > > 
> >> >> >
> >> >> 
> >> >> 
> >> >
> >> 
> >
>

______________________________________________________________________
Get the mailserver that powers this list at http://www.coolfusion.com
Archives: http://www.mail-archive.com/[email protected]/
Unsubscribe: http://www.houseoffusion.com/index.cfm?sidebar=lists
