This is an admittedly wonky situation, and I don't have much code to offer
because I think the problem is conceptual on my part rather than
code-related.
We're basically using Google Spreadsheets as a quick-and-dirty interface
for certain types of data entry. I know how terrible this is; please bear
with me. I have to work with what exists now while I make it better.
Each morning, we take whatever is in the Google spreadsheet and update one
column in a table in Postgres. But Postgres is being updated by a
different service in real time, and the particular table we are looking at
here has an odd behavior: if someone runs a delete operation from the
GUI, it actually deletes the row instead of doing something sane like
flagging it as deleted.
So we have situations where there are more rows in the previous day's
spreadsheet than there are in the Postgres table, and I *think* that's
what is causing the error. I get a traceback that looks like this:
  File "/opt/smsport/assessment_goal_sync.py", line 51, in get_current_goals
    self.session.bulk_update_mappings(Assessment, current_vals)
  File "/data/anaconda/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 2329, in bulk_update_mappings
    self._bulk_save_mappings(mapper, mappings, True, False, False, False)
  File "/data/anaconda/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 2351, in _bulk_save_mappings
    transaction.rollback(_capture_exception=True)
  File "/data/anaconda/lib/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__
    compat.reraise(exc_type, exc_value, exc_tb)
  File "/data/anaconda/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 2343, in _bulk_save_mappings
    isstates, update_changed_only)
  File "/data/anaconda/lib/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 121, in _bulk_update
    bookkeeping=False)
  File "/data/anaconda/lib/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 716, in _emit_update_statements
    (table.description, len(records), rows))
StaleDataError: UPDATE statement on table 'assessment' expected to update 786 row(s); 785 were matched.
My code that is causing the problem is literally this:
self.session.bulk_update_mappings(Assessment, current_vals)
self.session.commit()
Here the argument current_vals is a list of dictionaries, each containing
the primary key and the value to be updated.
This works in most cases, but it fails if a record in the database has
been deleted.
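For reference, here is a minimal, self-contained sketch of the workaround I've been considering: query the primary keys that still exist and drop any mapping whose row has vanished before calling bulk_update_mappings. The tiny Assessment model (just id and goal columns) and the in-memory SQLite engine are toy stand-ins for my real schema and Postgres:

```python
# Sketch of a pre-filtering workaround. The Assessment model here is a toy
# stand-in for my real table, and SQLite stands in for Postgres.
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Assessment(Base):
    __tablename__ = 'assessment'
    id = Column(Integer, primary_key=True)
    goal = Column(String)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add_all([Assessment(id=1, goal='a'), Assessment(id=2, goal='b')])
session.commit()

# Simulate the other service deleting a row out from under us.
session.query(Assessment).filter_by(id=2).delete()
session.commit()

current_vals = [{'id': 1, 'goal': 'x'}, {'id': 2, 'goal': 'y'}]

# Keep only mappings whose primary key still exists, so the number of rows
# the UPDATE expects matches what the database can actually match.
existing = {pk for (pk,) in session.query(Assessment.id)}
filtered = [m for m in current_vals if m['id'] in existing]

session.bulk_update_mappings(Assessment, filtered)
session.commit()
```

The obvious downsides are the extra SELECT and the race window between the SELECT and the UPDATE, so I'm not sure this is the right answer.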
So I have three questions:
1. Am I interpreting this error correctly?
2. If so, is there some way to tell bulk_update_mappings to ignore the
   missing rows and just update whatever is matched?
3. Is there a better, more idiomatic way to handle this situation?
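In the meantime I've also prototyped a slower fallback that simply tolerates missing rows: one Query.update() per mapping, since Query.update() returns the matched row count and doesn't complain when it is zero. Again, the Assessment model and SQLite here are toy stand-ins for my real schema and Postgres:

```python
# Fallback sketch: one UPDATE per mapping. Query.update() returns the
# matched row count and does not raise when it is zero, so rows deleted
# out from under us are skipped instead of triggering StaleDataError.
# Model and column names are stand-ins for my real schema.
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Assessment(Base):
    __tablename__ = 'assessment'
    id = Column(Integer, primary_key=True)
    goal = Column(String)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add_all([Assessment(id=1, goal='a'), Assessment(id=2, goal='b')])
session.commit()

# Row 2 disappears, as happens when someone deletes from the GUI.
session.query(Assessment).filter_by(id=2).delete()
session.commit()

current_vals = [{'id': 1, 'goal': 'x'}, {'id': 2, 'goal': 'y'}]
skipped = 0
for m in current_vals:
    matched = session.query(Assessment).filter_by(id=m['id']).update(
        {'goal': m['goal']}, synchronize_session=False)
    if matched == 0:
        skipped += 1
session.commit()
```

This works, but it issues one UPDATE statement per row, which is exactly the overhead bulk_update_mappings was supposed to avoid.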
Thanks!
-andrew
--
You received this message because you are subscribed to the Google Groups
"sqlalchemy" group.