Kent wrote:

> I was considering the "one" side of a many to one, where setting the
> backref would be trivial during merge since we already have the object
> and know what it is. However, I see how that would be
> inconsistent ... "why does the orm set this only in one direction?"
> would be the question.
>
> Anyway, 95% of the time it doesn't matter because, as you point out,
> the object is in the identity map. The problem I ran into is one
> where the primary join is goofy and therefore get() could not be
> utilized for grabbing the backref... instead it had to refetch it from
> the database (and it seemed silly to me since it knew the backref at
> merge() time). (Plus, it was worse in my case because there are a few
> points where I need to transiently turn off autoflush, and this was
> during one of them, so it was losing data changes I think when it
> looked up the backref object.)
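(Editor's note: a minimal sketch, not part of the original thread, of the "transiently turn off autoflush" step described above. In modern SQLAlchemy, Session.no_autoflush is a context manager that suspends autoflush and restores the previous setting on exit; the engine and session here are made up for illustration.)

```python
# Hypothetical sketch: suspending autoflush for a block of queries.
from sqlalchemy import create_engine
from sqlalchemy.orm import Session

s = Session(create_engine('sqlite://'))

with s.no_autoflush:
    # queries issued here will not flush pending changes first,
    # so partially-populated objects aren't prematurely persisted
    assert not s.autoflush

# the previous setting is restored once the block exits
assert s.autoflush
```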
Specifically, the reason the backrefs don't fire in this example: when the collection is sent to the parent during merge(), the object as pulled from the DB already has an identical collection of identities, so there is no net change and no events are fired. This is how collections.bulk_replace() currently works.

The merge() process also has checks for "recursive" calls, placing states already "seen" in a set to prevent endless traversal around cycles. relationship() makes an extra use of this set, placing a state plus property key in the "seen" set as a performance enhancement, so that backrefs won't be forced to load unnecessarily during a merge(). There currently aren't any tests that evaluate the expense saved by this check. If the recursive check is taken out, the test below traverses A.bs as well as each B.a, and the backrefs are populated. If the check does save on SQL calls, perhaps it could be made more intelligent so that already-loaded collections are reused. I will add a ticket to evaluate this, but I am way behind on tickets, so it's not likely anything will happen soon:

http://www.sqlalchemy.org/trac/ticket/2221

The event that most closely matches what you'd want here is the "load" event, i.e. the event that fires when an object is loaded from the DB and initial attribute population has completed. There's no plan for an internal "merge of individual instance" event right now; it would need to be driven by use cases (workarounds for behavior like this aren't great use cases for an API). merge() in the aggregate isn't an internal process, so I tend not to add events around it (since you're the one who calls merge()).
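(Editor's note: a sketch, not part of the original message, of how the "load" event mentioned above might be used to populate the backrefs. The mapping mirrors the test script below; set_committed_value() is used on the assumption that populating the attribute as if it were loaded, rather than via normal assignment, avoids firing append events against the already-loaded collection.)

```python
# Hypothetical sketch: a mapper-level "load" listener that populates
# B.a for each member of A.bs whenever an A is loaded, including the
# load that happens internally during merge().
from sqlalchemy import Column, ForeignKey, Integer, create_engine, event
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session, relationship
from sqlalchemy.orm.attributes import set_committed_value

Base = declarative_base()

class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)
    bs = relationship("B", backref="a")

class B(Base):
    __tablename__ = 'b'
    id = Column(Integer, primary_key=True)
    a_id = Column(Integer, ForeignKey('a.id'))

@event.listens_for(A, 'load')
def set_backrefs(target, context):
    # accessing target.bs here emits a lazy load; each B then has its
    # .a set directly as a "committed" value, bypassing attribute events
    for b in target.bs:
        set_committed_value(b, 'a', target)

e = create_engine('sqlite://')
Base.metadata.create_all(e)

s = Session(e)
s.add(A(id=1, bs=[B(id=1), B(id=2)]))
s.commit()
s.close()

s = Session(e)
a2 = s.merge(A(id=1, bs=[B(id=1), B(id=2)]))
assert 'a' in a2.bs[0].__dict__
assert 'a' in a2.bs[1].__dict__
```

Note this trades the SQL savings discussed above for an unconditional load of A.bs on every load of an A, so it only makes sense where the backrefs are genuinely needed.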
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)
    bs = relationship("B", backref="a")

class B(Base):
    __tablename__ = 'b'
    id = Column(Integer, primary_key=True)
    a_id = Column(Integer, ForeignKey('a.id'))

e = create_engine('sqlite://', echo=True)
Base.metadata.create_all(e)

s = Session(e)
a = A(id=1, bs=[B(id=1), B(id=2)])
s.add(a)
s.commit()

s = Session(e)
a = A(id=1, bs=[B(id=1), B(id=2)])
a2 = s.merge(a)

# comment out lines 737-741 of orm/properties.py
# to have these pass
assert 'a' in a2.__dict__['bs'][0].__dict__
assert 'a' in a2.__dict__['bs'][1].__dict__
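(Editor's note: for completeness, a sketch, not part of the thread, of a calling-side workaround that needs no event hooks and no library patch: populate the backrefs immediately after merge() returns. As in the sketch above, set_committed_value() is assumed here to set the attribute without firing append events against the collection that already contains each B.)

```python
# Hypothetical workaround: set each B.a by hand right after merge().
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session, relationship
from sqlalchemy.orm.attributes import set_committed_value

Base = declarative_base()

class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)
    bs = relationship("B", backref="a")

class B(Base):
    __tablename__ = 'b'
    id = Column(Integer, primary_key=True)
    a_id = Column(Integer, ForeignKey('a.id'))

e = create_engine('sqlite://')
Base.metadata.create_all(e)

s = Session(e)
s.add(A(id=1, bs=[B(id=1), B(id=2)]))
s.commit()
s.close()

s = Session(e)
a2 = s.merge(A(id=1, bs=[B(id=1), B(id=2)]))

# we already know the parent at this point, so no refetch is needed
for b in a2.bs:
    set_committed_value(b, 'a', a2)

assert all('a' in b.__dict__ for b in a2.bs)
```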
