OK, I will try the association object, it looks better!
Another question: I'm using SQLAlchemy with mod_python, which means that
a Python interpreter is loaded into each Apache process (in prefork
mode). In my application I use one global session object per process
(which is not threadsafe, but I'm using prefork mode for the moment :)),
so I have at least 5 sessions running in parallel (one per Apache
process, and at least 5 processes). I noticed that if I don't do a
session.expunge(an_object) after a session.flush(), stale instances of
an_object keep living in the other processes (for example, I have to
refresh them if I don't use session.expunge).
This is quite logical, I think: a process cannot know what happened in
another process (they are separate Python interpreters), but I wondered
whether the way I'm doing it is the "best" way?
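To illustrate what I mean, here is a minimal sketch, with plain sqlite3
and a dict standing in for the real database and for each process's
session cache (the table is just a cut-down invasives):

```python
import os
import sqlite3
import tempfile

# Two independent connections stand in for two Apache prefork processes,
# each holding its own session / identity map (here: a plain dict).
path = os.path.join(tempfile.mkdtemp(), "demo.db")
proc_a = sqlite3.connect(path)
proc_b = sqlite3.connect(path)

proc_a.execute("create table invasives (id integer primary key, scientific_name text)")
proc_a.execute("insert into invasives values (153, 'old name')")
proc_a.commit()

def load(conn):
    # Re-reading from the database is the analogue of a session refresh.
    return dict(conn.execute("select id, scientific_name from invasives"))

# "Process B" loads the row into its in-memory cache.
cache_b = load(proc_b)

# "Process A" flushes a change; B's cached copy is now stale.
proc_a.execute("update invasives set scientific_name = 'new name' where id = 153")
proc_a.commit()
assert cache_b[153] == 'old name'  # stale: B never saw A's UPDATE

# After dropping the cached copy (the expunge analogue), B re-reads fresh data.
cache_b = load(proc_b)
assert cache_b[153] == 'new name'
```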
Thanks!
Julien
Michael Bayer wrote:
> SQLAlchemy can't coerce a separate "INSERT" and a "DELETE" operation
> into a single UPDATE statement. By creating a new InvasiveName()
> object and essentially giving it the same primary key as another
> InvasiveName() instance which is marked to be deleted, you're violating
> the "ORM expects primary keys to be immutable" requirement (even
> though Jonathan Ellis has recently pointed us all to a set of articles
> that argue against PK immutability).
>
> so your best bet would be to take the single InvasiveName() object and
> move it over, instead of creating a second one.
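If I understand "move it over" correctly, reusing the existing object
gives the flush a single UPDATE instead of the INSERT-before-DELETE pair
that trips the primary key. A stdlib sketch of the two statements
(sqlite3 just stands in for PostgreSQL here, with the values from my
example below):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""create table invasive_names (
    name varchar(200) not null,
    language_id integer not null,
    invasive_id integer not null,
    primary key (language_id, invasive_id))""")
db.execute("insert into invasive_names values ('old name', 1, 153)")

# A brand-new object with the same composite PK flushes as an INSERT,
# which fails exactly like the duplicate key error in my log:
try:
    db.execute("insert into invasive_names values ('foo bar', 1, 153)")
except sqlite3.IntegrityError:
    pass  # sqlite's version of "duplicate key violates unique constraint"

# Moving the one existing object over maps to a single UPDATE instead:
db.execute("update invasive_names set name = 'foo bar' "
           "where language_id = 1 and invasive_id = 153")
row = db.execute("select name from invasive_names "
                 "where language_id = 1 and invasive_id = 153").fetchone()
assert row == ('foo bar',)
```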
>
> it might be worth it on my end to add some extra checks for this
> condition.
>
> Also, this might not be what you want, but your relationship is very
> similar to an "association" relationship like this:
>
> mapper(Invasive, invasives, properties = {
>     'names' : relation(InvasiveName, backref='invasive',
>                        cascade='all, delete-orphan',
>                        association=Language)
>     }
> )
>
> mapper(InvasiveName, invasive_names, properties={
>     'language': relation(Language,
>                          backref=backref('invasive_names',
>                                          cascade='all, delete-orphan'))
>     })
>
> mapper(Language, languages)
>
> with the association object SA will try to work out the duplicate
> InvasiveName() objects for you.
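So if I follow, with the association object SA can reconcile the
collection by its primary key instead of blindly inserting. In plain
Python I picture the bookkeeping roughly like this (just my guess at the
idea, keyed on (language_id, invasive_id); not SA's actual code):

```python
# Rows already in the table vs. the new state of the collection,
# both keyed by the composite primary key (language_id, invasive_id).
existing = {(1, 153): 'old name'}
desired = {(1, 153): 'foo bar', (2, 153): 'bar'}

# A key present on both sides becomes an UPDATE, not DELETE + INSERT.
to_update = {k: v for k, v in desired.items() if k in existing and existing[k] != v}
to_insert = {k: v for k, v in desired.items() if k not in existing}
to_delete = [k for k in existing if k not in desired]

assert to_update == {(1, 153): 'foo bar'}
assert to_insert == {(2, 153): 'bar'}
assert to_delete == []
```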
>
> On Jul 16, 2006, at 7:41 AM, Julien Cigar wrote:
>
>> Hello list !
>>
>> I think I have a little problem with the cascade property: some rows
>> in the FK table are not deleted.
>>
>> My table definitions look like:
>>
>> create table invasives (
>>     id serial not null,
>>     scientific_name varchar(200) not null,
>>
>>     constraint pk_invasive
>>         primary key (id)
>> );
>>
>> create table languages (
>>     id serial not null,
>>     iso_code char(2) not null,
>>     language varchar(100) not null,
>>
>>     constraint pk_language
>>         primary key (id)
>> );
>>
>> create table invasive_names (
>>     name varchar(200) not null,
>>     language_id integer not null,
>>     invasive_id integer not null,
>>
>>     constraint pk_invasive_names
>>         primary key (language_id, invasive_id),
>>
>>     constraint fk_language
>>         foreign key (language_id) references languages(id),
>>
>>     constraint fk_invasive
>>         foreign key (invasive_id) references invasives(id)
>> );
>>
>> class InvasiveName(object):
>>     pass
>>
>> class Invasive(object):
>>     pass
>>
>> class Language(object):
>>     pass
>>
>> mapper(InvasiveName, invasive_names)
>>
>> mapper(Invasive, invasives, properties = {
>>     'names' : relation(InvasiveName, backref='invasive',
>>                        cascade='all, delete-orphan')
>>     }
>> )
>>
>> mapper(Language, languages, properties = {
>>     'invasive_names' : relation(InvasiveName, backref='language',
>>                                 cascade='all, delete-orphan')
>>     }
>> )
>>
>> (invasives, invasive_names and languages are just the Table()
>> definitions of the tables; I've left them out for readability)
>>
>>
>> When I do:
>> =====
>> query_invasive = session.query(Invasive)
>> invasive_object = query_invasive.get(153)
>> invasive_object.names = []
>> session.flush()
>> =====
>> This works! The rows in invasive_names are deleted.
>>
>> However, if I do:
>> =====
>> query_invasive = session.query(Invasive)
>> query_language = session.query(Language)
>>
>> invasive_object = query_invasive.get(153)
>> invasive_object.names = []
>>
>> invasive_name_object = InvasiveName()
>> language_object = query_language.get(1)
>>
>> invasive_name_object.language = language_object
>> invasive_name_object.name = 'foo bar'
>>
>> invasive_object.names.append(invasive_name_object)
>> session.flush()
>> =====
>>
>> This doesn't work if the row already exists in the invasive_names
>> table:
>>
>> LOG: statement: INSERT INTO invasive_names (name, language_id,
>> invasive_id) VALUES ('foo bar', 1, 153)
>> ERROR: duplicate key violates unique constraint "pk_invasive_names"
>> LOG: statement: ABORT
>>
>> It should be an UPDATE statement instead of an INSERT statement.
>>
>> Any idea what I'm doing wrong? I guess it's something with the
>> cascade property.
>>
>> Thanks,
>> Julien
>>
>
_______________________________________________
Sqlalchemy-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/sqlalchemy-users