> the most idiomatic way to handle this is to merge the objects in:
>
> obj = session.merge(existing_object)
>
> this will emit a SELECT for the existing row, then copy the state of
> "existing_object" to an object located for that primary key, if found.
> It ensures that the correct choice of "pending" or "persistent" is made
> depending on if the row already exists.
>
Thanks for your response Michael. It wasn't clear from my original post,
but I am using merge to copy from PROD to DEV. My merge function looks
something like this (simplified; in reality I'm copying multiple entities):

session_dest.merge(entity)
session_dest.commit()
# Large object graphs were causing me to run low on memory, so I merge
# them one at a time and then clear the local cache.
session_dest.expunge_all()
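For concreteness, the copy loop can be sketched like this; the Account model and the two in-memory SQLite databases are hypothetical stand-ins for my real PROD/DEV schemas:

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Account(Base):
    __tablename__ = "account"
    acct_id = Column(Integer, primary_key=True)
    name = Column(String)

# Two in-memory SQLite databases stand in for PROD (source) and DEV (dest).
engine_src = create_engine("sqlite://")
engine_dest = create_engine("sqlite://")
Base.metadata.create_all(engine_src)
Base.metadata.create_all(engine_dest)
session_src = sessionmaker(bind=engine_src)()
session_dest = sessionmaker(bind=engine_dest)()

session_src.add_all([Account(acct_id=i, name="acct-%d" % i) for i in (1, 2, 3)])
session_src.commit()

# Copy one entity at a time; commit and clear the destination identity map
# after each merge so large object graphs don't pile up in memory.
for entity in session_src.query(Account):
    session_dest.merge(entity)
    session_dest.commit()
    session_dest.expunge_all()
```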
So, assuming DEV has a single record {acct_id: 1, env_id: 1} and I'm
copying a record {acct_id: 1, env_id: 4} from PROD, merge() incorrectly
decides that this record should be INSERTed (it matches only on the
primary key), when in fact a unique constraint on acct_id prevents the
insert.
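Here is a minimal reproduction of what I'm seeing; the Env model, with env_id as primary key and a unique constraint on acct_id, is a simplified stand-in for my real schema:

```python
from sqlalchemy import create_engine, Column, Integer
from sqlalchemy.exc import IntegrityError
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Env(Base):
    __tablename__ = "env"
    env_id = Column(Integer, primary_key=True)
    acct_id = Column(Integer, unique=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session_dest = sessionmaker(bind=engine)()

# DEV already holds {acct_id: 1, env_id: 1}.
session_dest.add(Env(env_id=1, acct_id=1))
session_dest.commit()

# merge() consults only the primary key (env_id); env_id=4 is new, so it
# schedules an INSERT -- and the unique constraint on acct_id rejects it.
session_dest.merge(Env(env_id=4, acct_id=1))
try:
    session_dest.commit()
    failed = False
except IntegrityError:
    session_dest.rollback()
    failed = True
```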
The more I evaluate this, the more I think that correctly modeling the
unique constraint will fix my problem. Then my before_update handler
would fire and would properly UPDATE the record.
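In the meantime, one workaround I'm considering (just a sketch, and not the event-handler approach): query DEV by the unique column first and, if a row exists, carry its primary key onto the incoming entity so that merge() resolves to an UPDATE. The Env model and column names here are hypothetical:

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Env(Base):
    __tablename__ = "env"
    env_id = Column(Integer, primary_key=True)
    acct_id = Column(Integer, unique=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
dest = sessionmaker(bind=engine)()

# DEV already holds {acct_id: 1, env_id: 1}.
dest.add(Env(env_id=1, acct_id=1, name="dev"))
dest.commit()

# Incoming PROD record: same acct_id, different primary key.
incoming = Env(env_id=4, acct_id=1, name="prod")

# Pre-check on the unique column; if DEV already has this acct_id, reuse
# its primary key so merge() finds the row and UPDATEs instead of INSERTing.
existing = dest.query(Env).filter_by(acct_id=incoming.acct_id).first()
if existing is not None:
    incoming.env_id = existing.env_id
dest.merge(incoming)
dest.commit()
```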
Shawn
--
You received this message because you are subscribed to the Google Groups
"sqlalchemy" group.
To view this discussion on the web visit
https://groups.google.com/d/msg/sqlalchemy/-/9azvATgVSsoJ.