This seems like it would be a very common scenario, but it's got me
stumped and feeling a bit stupid at the moment - I would appreciate
anyone helping to point me in the right direction.
I'm using the ORM for a many-to-many relationship, and over time I
need to be able to prune individual associations from items that may
(obviously) have several active at once. But I can't figure out how
to do that through ORM/session/object operations without fully purging
the object in question, which also removes it from associations I
don't want to touch. That seems far too fragile for what I'd think
are very common needs of a many-to-many relationship.
For example, here's a stripped down many-to-many setup for jobs which
contain files. Files may be shared amongst jobs, thus the
many-to-many relationship.
- - - - - - - - - - - - - - - - - - - - - - - - -
from sqlalchemy import *
from sqlalchemy.orm import *

meta = BoundMetaData('sqlite:///')

jobs = Table('jobs', meta,
             Column('id', Integer, primary_key=True),
             Column('name', String))

files = Table('files', meta,
              Column('id', Integer, primary_key=True),
              Column('name', String))

jobs_files = Table('jobs_files', meta,
                   Column('job_id', Integer, ForeignKey('jobs.id')),
                   Column('file_id', Integer, ForeignKey('files.id')))

class Job(object):
    pass

class File(object):
    pass

mapper(File, files)
mapper(Job, jobs,
       properties = { 'files': relation(File, lazy=False,
                                        backref=backref('jobs', lazy=False),
                                        secondary=jobs_files) })

def setup():
    meta.create_all()

    s = create_session()

    f1 = File()
    f1.name = 'File 1'
    f2 = File()
    f2.name = 'File 2'
    fc = File()
    fc.name = 'File Common'

    j1 = Job()
    j1.name = 'Job 1'
    j2 = Job()
    j2.name = 'Job 2'

    s.save(j1)
    s.save(j2)

    j1.files.extend([f1, fc])
    j2.files.extend([f2, fc])

    s.flush()
- - - - - - - - - - - - - - - - - - - - - - - - -
I've used eager loading for the relationships since that mimics my
actual code, but I believe the issues hold even with lazy loading.
(No loading at all, lazy=None, is a whole other thing; that seems
very fragile with respect to relations, since if you don't have a
particular relation loaded when you modify an instance, SA doesn't
know to follow it. Not that I really blame SA in that case, I
suppose.)
Anyway, assuming that setup() has been called, these are the sort of
activities that have me stumped:
* I want to remove the association between "File Common" and "Job 1"
  without affecting "Job 2" (a sketch of what I've tried follows this
  list).

  If I session.delete() the fc instance directly, SA purges the file
  completely, including the links to both jobs. I can understand SA
  thinking I want the file completely gone in that scenario.

  But if I remove the fc instance from either job's files collection,
  SA also fully purges fc, including the link to the other job. The
  same thing happens when I delete one of the jobs, if the cascade on
  the files relation includes "delete". That would seem to rule out a
  delete cascade entirely, since deleting any job would remove the
  files it contains from every other job that also contains them,
  which (at least for me) rather defeats the purpose of the
  many-to-many relationship.

  The only case where I'd want the fc row fully purged from the
  database is when I'm removing its last remaining association with
  any job, which is what I thought delete-orphan would handle.
* Provided I can resolve the prior point, I was also hoping for a way
  to remove a job completely, including its associated files, but with
  the file records pruned only if they don't belong to any other job.

  Originally I thought that including "delete" and "delete-orphan" in
  the cascade rules would accomplish this (see the mapper sketch after
  this list). But the delete cascade triggers the behavior described
  above, fully removing all of the job's files even when they still
  belong to other jobs. If I leave only the delete-orphan cascade,
  deleting the parent job has no effect on the files at all, leaving
  stranded file rows and the old associations behind.
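
For concreteness, here's roughly what I've been trying for the
single-association pruning in the first point. This assumes the
session and the instances created in setup() are still at hand (in my
real code I re-query them first); it's just a sketch of the two
approaches, not working code:

- - - - - - - - - - - - - - - - - - - - - - - - -
# Attempt 1: delete the shared file outright.  This removes the
# 'files' row *and* both jobs_files rows, which is more than I want:
s.delete(fc)
s.flush()

# Attempt 2: drop it from just one job's collection.  I expected this
# to remove only the one jobs_files row, but with delete/delete-orphan
# in the cascade, fc is purged entirely, including the link to j2:
j1.files.remove(fc)
s.flush()
- - - - - - - - - - - - - - - - - - - - - - - - -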
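
And for the second point, this is the kind of cascade configuration
I've been experimenting with on the Job mapper from the setup above
(the cascade argument is the only change; the comment lists the
combinations I've tried):

- - - - - - - - - - - - - - - - - - - - - - - - -
mapper(Job, jobs,
       properties = { 'files': relation(File, lazy=False,
                                        backref=backref('jobs', lazy=False),
                                        secondary=jobs_files,
                                        # tried "all, delete-orphan",
                                        # "save-update, delete", and
                                        # "save-update, delete-orphan":
                                        cascade='all, delete-orphan') })
- - - - - - - - - - - - - - - - - - - - - - - - -

None of those combinations gives me the "prune the association, and
only purge the file when it's the last association" behavior I'm
after.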
Most of the many-to-many examples I've found use "all, delete-orphan"
as the cascade rules, but at least in my experience that makes it
dangerous to delete job instances in my scenario, since it purges all
of the contained files even when they are still referenced by other
jobs. And you'd never know afterwards that they had ever been part of
those other jobs, since they are cleanly removed from every job during
the flush. I can see how that behavior falls out of a simple iteration
over the child collection during the delete cascade, but in practical
terms it doesn't seem very useful for many-to-many relationships (as
opposed to one-to-one/one-to-many).
Am I just missing something blindingly obvious, or should I be
managing many-to-many relationships, at least the deleting, in some
other way? Should I be manipulating the association table directly
(but then, how do I remove the files from the in-memory job objects
without triggering the deletion behavior I don't want)?
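
For what it's worth, the closest I've come is dropping down to the
association table by hand, along the lines of the sketch below (again
assuming the j1 and fc instances from setup() are at hand; and_ comes
from the earlier import *):

- - - - - - - - - - - - - - - - - - - - - - - - -
# Delete just the one association row, leaving the file row and the
# other job's link untouched:
jobs_files.delete(
    and_(jobs_files.c.job_id == j1.id,
         jobs_files.c.file_id == fc.id)).execute()
- - - - - - - - - - - - - - - - - - - - - - - - -

But then the in-memory j1.files and fc.jobs collections are stale, so
I feel like I'm working around the ORM rather than with it.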
Thanks for any suggestions.
-- David