I was considering the "one" side of a many-to-one, where setting the
backref would be trivial during merge since we already have the object
and know what it is.  However, I see how that would be
inconsistent ... "why does the ORM set this only in one direction?"
would be the question.

Anyway, 95% of the time it doesn't matter because, as you point out,
the object is in the identity map.  The problem I ran into is one
where the primary join is goofy, so get() could not be used to grab
the backref; instead it had to refetch it from the database (which
seemed silly to me, since it knew the backref at merge() time).  It
was worse in my case because there are a few points where I need to
transiently turn off autoflush, and this was during one of them, so I
think it was losing data changes when it looked up the backref object.
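For what it's worth, the "transiently turn off autoflush" part is just a save/restore of the session's autoflush flag; here is a minimal sketch of the kind of helper I mean (the name no_autoflush is my own; newer SQLAlchemy versions ship an equivalent as Session.no_autoflush):

```python
from contextlib import contextmanager

@contextmanager
def no_autoflush(session):
    # Temporarily disable autoflush, restoring the previous
    # setting even if the block raises.
    saved = session.autoflush
    session.autoflush = False
    try:
        yield session
    finally:
        session.autoflush = saved
```

Inside a `with no_autoflush(session):` block, attribute lookups that would normally trigger a flush (and hence SQL) no longer do.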

I worked around this programmatically, but can you recommend a hook or
event where I could place some code to do this for certain cases
(specifically many to one or one to one)?
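The programmatic workaround I used amounts to this: after merge() returns, walk the collection and assign the many-to-one side explicitly so it never has to be re-fetched. A rough sketch (the function name and the attribute-name parameters are my own placeholders, not an API):

```python
def merge_with_backref(session, obj, collection_attr, backref_attr):
    # Merge obj into the session, then explicitly point each child's
    # many-to-one backref at the merged parent, so accessing it later
    # does not require a database round trip.
    merged = session.merge(obj)
    for child in getattr(merged, collection_attr):
        setattr(child, backref_attr, merged)
    return merged
```

With the A/B mapping from your example below, that would be called as `merge_with_backref(s, a, "bs", "a")`.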

As of 0.6.4 there is no API hook for "after merge".  Have you ever
considered adding one, or is one perhaps already planned?

Thanks again,
Kent


On Jul 6, 5:07 pm, Michael Bayer <[email protected]> wrote:
> A persistent object doesn't populate an unloaded backref on a forward set 
> event.   This is for efficiency so that when you do something like:
>
> for b in Session.query(B):
>         b.a = some_a
>
> it doesn't spend time loading the "bs" collection of "some_a", which if you had 
> a lot of different "some_a" would take a lot of time.   The other direction:
>
> for a in Session.query(A):
>     a.bs.append(some_b)
>
> if you were to access "some_b.a", the lookup is from the identity map since 
> some_b is present.     There is a step that ensures that the "change" from 
> the backref is present in the "history" of the other side, but this is done in 
> such a way as to not force a collection or reference load.
>
> I frequently forget the details of behaviors like these since 90% of them 
> have been nailed down years ago, so if you try the following test case, 
> you'll see no SQL is emitted after "2.----".
>
> Also I have to run out so I may be forgetting some other details about this, 
> I'll try to take a second look later.
>
> from sqlalchemy import *
> from sqlalchemy.orm import *
> from sqlalchemy.ext.declarative import declarative_base
> Base = declarative_base()
>
> class A(Base):
>     __tablename__ = 'a'
>
>     id = Column(Integer, primary_key=True)
>     bs = relationship("B", backref="a")
>
> class B(Base):
>     __tablename__ = 'b'
>
>     id = Column(Integer, primary_key=True)
>     a_id = Column(Integer, ForeignKey('a.id'))
>
> e = create_engine('sqlite://', echo=True)
> Base.metadata.create_all(e)
>
> s = Session(e)
>
> s.add(A(id=1, bs=[B(id=1), B(id=2)]))
> s.commit()
> s.close()
>
> a = A(id=1, bs=[B(id=1), B(id=2)])
>
> print "1. -----------------------------------------"
> a2 = s.merge(a)
>
> print "2. -----------------------------------------"
>
> for b in a2.bs:
>     assert b.a is a2
>
> On Jul 6, 2011, at 4:24 PM, Kent wrote:
>
> > If I merge() an object with a collection property, the backrefs are
> > not set as they would be if I had assigned the collection to the
> > object.
>
> > I expected that this should occur.  Is there rationale for not setting
> > backrefs, or would it be possible to make this change?
>
> > Thanks,
> > Kent

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.