yeah, I can tell you exactly why, it's this:

>     session.flush([object1])

flushing with an explicit list of objects is not very reliable, and here's
why.   if we look at the relationship:

> BranchTbl_Mapper.add_property(
>     'Parent_BranchTbl',
>     relation(
>         BranchTbl,
>         primaryjoin=branches_table.c.parent==branches_table.c.branchid,
>         foreignkey=branches_table.c.branchid,
>         uselist=False
>     )
> )

it means that when I do an assignment like this:

>     object1.Parent_BranchTbl = object2

which object is actually being modified, from a column-attributes point
of view?  it's object2, whose "parent" attribute is going to be updated
with the value of object1's primary key (not to mention the previous
parent instance, which needs its "parent" set to NULL).
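to make the direction of the change concrete, here's a toy sketch (plain Python, not SQLAlchemy's actual unit-of-work code; the `Branch`, `set_parent_branch` and `flush` names are made up for illustration) of why a flush restricted to `[object1]` misses the write:

```python
# Illustrative sketch only: a toy "unit of work" that tracks which
# instances have dirty column attributes.

class Branch:
    def __init__(self, branchid, parent=None):
        self.branchid = branchid   # primary key column
        self.parent = parent       # FK column holding another branchid
        self.dirty = False

def set_parent_branch(obj1, obj2):
    # mirrors "object1.Parent_BranchTbl = object2" under the mapping above:
    # the column that actually changes lives on object2, whose "parent"
    # column receives object1's primary key
    obj2.parent = obj1.branchid
    obj2.dirty = True

def flush(instances):
    # a flush restricted to an explicit list only writes those instances
    return [o for o in instances if o.dirty]

object1 = Branch(branchid=1)
object2 = Branch(branchid=2)
set_parent_branch(object1, object2)

print(flush([object1]))   # -> []  (object2's change is not written)
```

the point of the sketch: object1 itself has no dirty columns after the assignment, so a flush scoped to `[object1]` has nothing to do.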

currently, that's how flush([someobj]) works.  I think addressing this
kind of scenario would require some new kind of cascade rule, i.e. a
"flush" cascade, which means "add child instances to the flush()
operation if a list of instances was sent".  this is something
Hibernate doesn't have, because Hibernate doesn't have the option to
send a list of instances to the flush() operation.
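a rough sketch of what such a "flush" cascade could look like (purely hypothetical, not an existing SQLAlchemy feature; `expand_flush_list` and `get_cascaded_children` are invented names): before flushing an explicit list, expand it transitively through relationships carrying the cascade:

```python
def expand_flush_list(instances, get_cascaded_children):
    """Expand an explicit flush list by traversing relationships
    that carry the hypothetical "flush" cascade."""
    seen = []
    stack = list(instances)
    while stack:
        obj = stack.pop()
        if obj in seen:
            continue
        seen.append(obj)
        # get_cascaded_children(obj) yields instances related to obj
        # through relationships marked with the "flush" cascade
        stack.extend(get_cascaded_children(obj))
    return seen

# toy usage: object2 is reachable from object1 through a cascaded relation
children = {"object1": ["object2"], "object2": []}
result = expand_flush_list(["object1"], lambda o: children[o])
print(result)   # -> ['object1', 'object2']
```

with something like this in front of the flush, the dirty child instance from the earlier example would get pulled into the operation automatically.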

the unit of work code is now clean enough that this might be the time
to start adding a capability like this.


--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sqlalchemy
-~----------~----~----~----~------~----~------~--~---