Ok, I figured out the cause, but not the solution. I am using a mutable type 
for hstore columns. I have a UserDefinedType, HStore, that just passes 
everything through to psycopg2's hstore type:

import copy

from sqlalchemy.types import UserDefinedType


class HStore(UserDefinedType):
    """SQLAlchemy type that passes values through, to be handled by a
    psycopg2 extension type.
    """
    type_name = 'HSTORE'

    def get_col_spec(self):
        return self.type_name

    def bind_processor(self, dialect):
        # no conversion here; psycopg2's registered hstore adapter does it
        return None

    def result_processor(self, dialect, coltype):
        return None

    def is_mutable(self):
        return True

    def copy_value(self, value):
        return copy.copy(value)


I used the MutationDict 
from http://docs.sqlalchemy.org/en/rel_0_7/orm/extensions/mutable.html :

from sqlalchemy.ext.mutable import Mutable


class MutationDict(Mutable, dict):

    @classmethod
    def coerce(cls, key, value):
        "Convert plain dictionaries to MutationDict."

        if not isinstance(value, MutationDict):
            if isinstance(value, dict):
                return MutationDict(value)

            # this call will raise ValueError
            return Mutable.coerce(key, value)
        else:
            return value

    def __setitem__(self, key, value):
        "Detect dictionary set events and emit change events."

        dict.__setitem__(self, key, value)
        self.changed()

    def __delitem__(self, key):
        "Detect dictionary del events and emit change events."

        dict.__delitem__(self, key)
        self.changed()
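As a sanity check, the subclass behaves as expected outside the ORM too. The snippet below is my own illustration (it assumes SQLAlchemy is importable, and repeats the class body so it runs standalone):

```python
from sqlalchemy.ext.mutable import Mutable


class MutationDict(Mutable, dict):
    """Repeated from above so this snippet is self-contained."""

    @classmethod
    def coerce(cls, key, value):
        if not isinstance(value, MutationDict):
            if isinstance(value, dict):
                return MutationDict(value)
            return Mutable.coerce(key, value)  # raises ValueError
        return value

    def __setitem__(self, key, value):
        dict.__setitem__(self, key, value)
        self.changed()

    def __delitem__(self, key):
        dict.__delitem__(self, key)
        self.changed()


# coerce() upgrades a plain dict to a MutationDict...
md = MutationDict.coerce('some_attrs', {'a': '1'})
print(type(md).__name__)

# ...and mutating it calls changed(), which flags every parent object
# the instance is attached to as dirty.  With no parents attached the
# call is a no-op, so this is safe to try standalone.
md['b'] = '2'
print(sorted(md))
```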


The column definition I use is:

some_attrs = Column(MutationDict.as_mutable(HStore))

Then in the actual transaction I copy the value of that column from one 
object to another object with the same column definition:

newobject.some_attrs = other_object.some_attrs
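One detail that may be relevant (my own guess, not something established in this thread): that assignment makes both rows share a single MutationDict instance rather than giving each its own, so a change event on the shared instance flags every attached parent as dirty. The plain-dict sketch below (no SQLAlchemy needed, hypothetical names) just illustrates the aliasing:

```python
# Direct assignment aliases: both names refer to one dict object.
other_attrs = {'color': 'red'}
new_attrs = other_attrs
other_attrs['size'] = 'large'
assert new_attrs == {'color': 'red', 'size': 'large'}  # visible via both names

# Assigning a shallow copy instead gives each "row" its own instance,
# so later mutations no longer leak across objects.
new_attrs = dict(other_attrs)
other_attrs['shape'] = 'round'
assert 'shape' not in new_attrs
```

In ORM terms that would correspond to `newobject.some_attrs = dict(other_object.some_attrs)`, which the `coerce()` above would turn into a fresh MutationDict; whether that actually resolves the repeated flushes is untested here.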


If I comment out that line, there is only a single flush at commit time.

It looks correct according to the examples I have seen, but if you know why 
it keeps marking the objects as dirty, please let me know.

On Tuesday, December 4, 2012 3:16:24 PM UTC-5, Michael Bayer wrote:
>
>
> On Dec 4, 2012, at 3:04 PM, Jason wrote:
>
> After upgrading to SQLAlchemy 0.7.9 I now receive the error "FlushError: 
> Over 100 subsequent flushes have occurred within session.commit() - is an 
> after_flush() hook creating new objects?", which was introduced by 
> http://docs.sqlalchemy.org/en/latest/changelog/changelog_07.html#change-75a53327aac5791fe98ec087706a2821 
> in the changelog. 
>
> I don't have any after_flush event handlers.  I do have a before_flush 
> event handler that changes the state of a related object, but that doesn't 
> sound like what the error is talking about.
>
> How can I debug this further? I am doing this within a Pyramid 
> application, so I am somewhat removed from the commit logic.
>
>
> this error traps the condition that dirty state remains in the Session 
> after a flush has completed.    This is possible if an after_flush hook has 
> added new state, or perhaps also if a mapper.after_update/after_insert etc 
> hook, or even a before_update/before_insert has modified the flush plan, 
> which is not appropriate in any case.
>
> the best way is to actually create an after_flush() hook with a 
> "pdb.set_trace()" in it; in there, you'd just look at session.new, 
> session.dirty, and session.deleted to ensure that they are empty.
>
>
>
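To make the suggested hook concrete, here is a minimal self-contained sketch (my own illustration, assuming SQLAlchemy and an in-memory SQLite database; the Widget model is made up). It records the session state after each flush instead of dropping into pdb. One caveat from the event documentation: inside after_flush the new/dirty/deleted collections still show the *pre-flush* state, so a freshly added object legitimately appears in session.new there:

```python
from sqlalchemy import Column, Integer, String, create_engine, event
from sqlalchemy.orm import Session, sessionmaker
try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base  # older releases

Base = declarative_base()


class Widget(Base):  # hypothetical model standing in for the real mapped class
    __tablename__ = 'widget'
    id = Column(Integer, primary_key=True)
    name = Column(String)


engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

flush_snapshots = []


@event.listens_for(Session, 'after_flush')
def after_flush(session, flush_context):
    # Instead of pdb.set_trace(), record what the session holds after each
    # flush.  Remember these collections still reflect the pre-flush state
    # here, so one entry in session.new is expected for a single insert.
    flush_snapshots.append(
        (len(session.new), len(session.dirty), len(session.deleted))
    )


session = sessionmaker(bind=engine)()
session.add(Widget(name='x'))
session.commit()   # a healthy commit flushes exactly once
session.close()

print(flush_snapshots)
```

If the "over 100 flushes" condition is being hit, this list would keep growing within a single commit(), and the snapshots show which collection is being refilled between flushes.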
