Hi Mark, I was in way too much of a hurry there. I was responding
to Matthew's comments about the data. You are right, I was not
explicit in my comments, but I was assuming that the key(s)
were in _your_ table, not in the data you are getting. From looking
at the DOC file you posted, you do have other data in the same
row/table, correlated with the uploaded data, that you do not want
to lose. I can understand that.

I have a similar situation with people uploading spreadsheets
into a database. The spreadsheets may or may not have a customer
reference number, so I really cannot rely on their data to have
a key either. I don't have 85,000 records every month, but I have
a few dozen customers who upload spreadsheets with anywhere from
15 to 5,500 rows of data. Since my database does contain a unique
item number (PK) for each row they have uploaded, I don't want to
throw the old rows away and assign new item numbers when a
matching record has not actually changed.
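
For what it's worth, here is roughly how I handle it on the Oracle
side when a reference number is present: a 9i MERGE keyed on it, so
matched rows keep their item number. This is only a sketch with
made-up table and column names (UPLOAD_STAGE, ITEMS, etc.), not my
real schema:

    MERGE INTO items i
    USING upload_stage s
    ON (i.cust_ref = s.cust_ref)
    WHEN MATCHED THEN
        -- existing rows keep their original item_no
        UPDATE SET i.descr = s.descr, i.qty = s.qty
    WHEN NOT MATCHED THEN
        -- only genuinely new rows get a number from the sequence
        INSERT (item_no, cust_ref, descr, qty)
        VALUES (item_seq.NEXTVAL, s.cust_ref, s.descr, s.qty);

When a spreadsheet has no reference number at all, I am back to
comparing the data columns themselves, which is essentially your
problem.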

So we are both in a similar situation. Since I am using Oracle
for my DB, I cannot directly help you, but it sounds like there are
some eager people out there!

It would be interesting to know which direction you end up going,
when all is said and done, if you can share.

Regards,

Steve

At 02:23 PM 3/15/2002 -0800, Mark Warrick wrote:
>Hi Steve,
>
>Yes, thank you.  I'm totally aware of database
>normalization.  However, the source file is something 
>I have no control over and there is no primary key in
>it.
>
>---mark
>
>--- Stephen Garrett <[EMAIL PROTECTED]> wrote:
>> I guess I am in too much of a hurry today, sorry.
>> 
>> Good DB organization practice would dictate that if the data set
>> you are dealing with has no primary key, then a primary key should
>> be added as a column when creating the data table that will contain
>> the data set (records of data). Hence the SEQ comment.
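>> 
>> For example, something along these lines in Oracle (names invented
>> purely to illustrate; other databases have IDENTITY or
>> AUTO_INCREMENT columns that serve the same purpose):
>> 
>>     CREATE SEQUENCE import_seq;
>> 
>>     CREATE TABLE import_data (
>>         seq_id NUMBER PRIMARY KEY,   -- the surrogate key we add
>>         col1   VARCHAR2(100),
>>         col2   VARCHAR2(100)
>>     );
>> 
>>     -- each incoming row gets the next sequence value as it loads
>>     INSERT INTO import_data (seq_id, col1, col2)
>>     VALUES (import_seq.NEXTVAL, 'some', 'data');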
>> 
>> Now other portions of an application may make use of this SEQ key,
>> but perhaps not. If they do, then it does not seem proper to me to
>> always delete the previous row, with its unique SEQ, and recreate
>> the exact same row of data with a different SEQ (ID number).
>> 
>> Anyway, those were my thoughts; I hope that helps.
>> 
>> Steve
>> 
>> At 03:28 PM 3/15/2002 -0500, Cottell, Matthew wrote:
>> >Forgive me if I don't know the terminology;
>> >I'm not a full-time DB developer.
>> >But a SEQ key sounds like a primary key,
>> >and it was already stated that there is no primary key.
>> >
>> >And I'm not sure why a row wouldn't stay intact.
>> >Could you elaborate?
>> >
>> >Matt
>> >
>> >
>> >> -----Original Message-----
>> >> From:     Stephen Garrett [SMTP:[EMAIL PROTECTED]]
>> >> Sent:     Friday, March 15, 2002 3:14 PM
>> >> To:       SQL
>> >> Subject:  RE: need help with large database update
>> >> 
>> >> I think this is a good idea, but doesn't it depend upon whether
>> >> you need to keep an original row intact, as it probably has a
>> >> numeric SEQ key assigned to it?
>> >> 
>> >> How could you do that with the DISTINCT method and keep the
>> >> original row in place? (e.g., it had an assigned part number or
>> >> some such)
>> >> 
>> >> Steve
>> >> 
>> >> At 03:05 PM 3/15/2002 -0500, Cottell, Matthew wrote:
>> >> >Couldn't you insert all the records in one fell swoop, then
>> >> >perform a SELECT DISTINCT on all the rows, insert those records
>> >> >into a new table, and voila, you're done?
>> >> >
>> >> >As I understand what you're saying, either the row is a complete
>> >> >match, or it's a completely unique record. There are no instances
>> >> >of some of the data being the same and having to choose which
>> >> >record to include. Or am I missing something?
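>> >> >
>> >> >What I have in mind is something like this (names invented, and
>> >> >SELECT ... INTO is SQL Server style; other databases spell it
>> >> >differently):
>> >> >
>> >> >    -- after bulk-loading the 85,000 new rows into main_table,
>> >> >    -- copy just the distinct rows into a fresh table
>> >> >    SELECT DISTINCT col1, col2, col3   -- list all 15 columns
>> >> >    INTO main_table_clean
>> >> >    FROM main_table;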
>> >> >
>> >> >Matt
>> >> >
>> >> >
>> >> >> -----Original Message-----
>> >> >> From:  Mark Warrick [SMTP:[EMAIL PROTECTED]]
>> >> >> Sent:  Friday, March 15, 2002 2:45 PM
>> >> >> To:    SQL
>> >> >> Subject:       Re: need help with large database update
>> >> >> 
>> >> >> I have to compare each row of new data with existing data
>> >> >> because there are no primary keys.  So I can't just append new
>> >> >> data into the table, because then I might have duplicate data
>> >> >> in the table.
>> >> >> 
>> >> >> Just to be clear, "new" data doesn't necessarily mean that the
>> >> >> data doesn't already exist in the table.  It just means it's a
>> >> >> new datafile.
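>> >> >> 
>> >> >> In other words, the comparison I need is shaped something like
>> >> >> this (column names made up; the real WHERE clause has all 15):
>> >> >> 
>> >> >>     INSERT INTO main_table (col1, col2, col3)
>> >> >>     SELECT i.col1, i.col2, i.col3
>> >> >>     FROM import_table i
>> >> >>     WHERE NOT EXISTS (
>> >> >>         SELECT 1 FROM main_table m
>> >> >>         WHERE m.col1 = i.col1
>> >> >>           AND m.col2 = i.col2
>> >> >>           AND m.col3 = i.col3
>> >> >>     )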
>> >> >> 
>> >> >> ---mark
>> >> >> 
>> >> >> 
>> >> >> --- Douglas Brown <[EMAIL PROTECTED]> wrote:
>> >> >> > Question: Why are you comparing the data before updating?
>> >> >> > If the data that you are updating with is the same data, it
>> >> >> > would not matter, and if there is new data then that would
>> >> >> > be adjusted accordingly. Maybe I'm confused.
>> >> >> > 
>> >> >> > "Success is a journey, not a destination!!"
>> >> >> > 
>> >> >> > Doug Brown
>> >> >> > ----- Original Message ----- 
>> >> >> > From: "Kelly Matthews" <[EMAIL PROTECTED]>
>> >> >> > To: "SQL" <[EMAIL PROTECTED]>
>> >> >> > Sent: Friday, March 15, 2002 11:05 AM
>> >> >> > Subject: Re: need help with large database update
>> >> >> > 
>> >> >> > 
>> >> >> > > why not do it via a stored procedure... much quicker...
>> >> >> > > 
>> >> >> > > ---------- Original Message ----------------------------------
>> >> >> > > From: Mark Warrick <[EMAIL PROTECTED]>
>> >> >> > > Reply-To: [EMAIL PROTECTED]
>> >> >> > > Date: Fri, 15 Mar 2002 11:02:51 -0800 (PST)
>> >> >> > > 
>> >> >> > > >Hello All,
>> >> >> > > >
>> >> >> > > >I have a database of about 85,000 records which has 15
>> >> >> > > >columns.  I need to update this database with a datafile
>> >> >> > > >that contains the same schema and just as many records.
>> >> >> > > >
>> >> >> > > >For each row that is going to be imported, I have to
>> >> >> > > >compare all 15 columns of data for each row against all 15
>> >> >> > > >columns of each row in the database to see if there's a
>> >> >> > > >match, and if not, then import the new data.
>> >> >> > > >
>> >> >> > > >Every query I've written with ColdFusion to do this seems
>> >> >> > > >to kill the server.  Even comparing one row of data seems
>> >> >> > > >to put extreme load on the server.
>> >> >> > > >
>> >> >> > > >Anyone got a clue as to how I might accomplish this goal?
>> >> >> > > >I may be willing to pay somebody to do this.
>> >> >> > > >
>> >> >> > > >---mark