[EMAIL PROTECTED] wrote,


"jose isaias cabrera" <[EMAIL PROTECTED]> wrote:
Greetings!

I have this scenario...

6 users with local DBs
1 network DB to back up all those DBs and to share info.

Every local DB's unique record id is based on the network DB.  So,
before each user creates a new record, the tool asks the network DB
for the next available sequential record id and creates a new record
in the local DB using that unique id from the network DB.

I think I would want to avoid going to the remote DB
every time an insert was needed.  I'd work around that
in one of two ways (sketched below):

 (1) Each client can check out a range of unique IDs
     in advance and use new IDs from its assigned
     range.

 (2) Clients select unique IDs at random. If the space
     of random IDs is large enough, and you have
     a good source of randomness, the probability
     of collisions can be made vanishingly small.
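
A minimal sketch of (1), assuming a counter table named id_blocks
kept on the network DB (the table name and the block size of 100 are
assumptions, not anything the tool requires):

  -- One-time setup on the network DB:
  CREATE TABLE IF NOT EXISTS id_blocks(next_id INTEGER NOT NULL);
  INSERT INTO id_blocks(next_id)
    SELECT 1 WHERE NOT EXISTS (SELECT 1 FROM id_blocks);

  -- Each client checks out a block of 100 IDs in one transaction:
  BEGIN IMMEDIATE;
  SELECT next_id FROM id_blocks;            -- note this as block_start
  UPDATE id_blocks SET next_id = next_id + 100;
  COMMIT;
  -- The client may now assign block_start .. block_start+99 locally.

Approach (2) avoids the round trip entirely; SQLite can generate a
128-bit random key by itself, e.g. (assuming a local table
records(id TEXT PRIMARY KEY, payload TEXT)):

  -- Collisions among 128-bit random ids are astronomically
  -- unlikely given a good source of randomness.
  INSERT INTO records(id, payload)
    VALUES (hex(randomblob(16)), 'some data');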

Didn't think about this.  Hmmmm...


The question is, what is the fastest way to UPDATE the main DB?  Right
now, I am looping over each record and running an UPDATE of all the
values where id=<uniqueID>.  Is there a faster way?
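
For reference, the per-record approach described above amounts to
something like the following, repeated once per local record (the
table and column names are assumptions):

  UPDATE masterdb SET value1 = 'new value'  -- one network round trip
   WHERE id = 42;                           -- <uniqueID> of this record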


If the id is declared to be UNIQUE in the master database,
then you can just do:

  INSERT OR IGNORE INTO masterdb SELECT * FROM localdb;

That statement will copy into "masterdb" all records in
"localdb" that are not already in "masterdb".

Yep, this is the way to go for now.  Though, I am going to think about
the statement above; it could be done.

thanks,

josé

