[sqlalchemy] Re: Inconsistent results in session.flush()

2007-03-19 Thread King Simon-NFHD78

Michael Bayer wrote:
 
 On Mar 17, 2007, at 11:39 AM, Simon King wrote:
 
 
  I had assumed that post_update was only necessary when you are 
  inserting two rows that are mutually dependent, but in this case I 
  wasn't inserting either a ReleaseLine or a Label. I suppose 
  post_update can actually have a knock-on effect across the whole 
  dependency graph.
 
 Yes, you are correct.  I did the requisite three hours of 
 staring and uncovered the actual condition that led to this 
 problem, which is definitely a bug.
 
 Because the second flush() did not require any update 
 operation on either the ReleaseLine or the Label object, the 
 circular dependency graph that would be generated for them 
 just gets thrown away, but that also threw away a processor 
 that's supposed to be setting the release_line_id column on 
 your Branch table.  So the fix is: if the circular 
 dependency graph doesn't get generated for a particular cycle 
 between mappers, or if a particular mapper within the cycle 
 isn't included in the circular dependency graph, make sure 
 the flush task for that mapper still gets tacked onto the 
 returned structure so that it can operate on other things 
 external to the cycle, in this case your Branch.  The Branch 
 is related to the ReleaseLine/Label via a many-to-one, which 
 was not as stressed in the unit tests, so it's harder for this 
 issue to pop up (it also requires the two separate flushes... 
 so this problem was very hard to anticipate).
 
 So you can take the post_update out and update to rev 2424.
 

Yep. I can definitely say that I wouldn't have figured that one out.
Thanks again for all your help,

Cheers,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Inconsistent results in session.flush()

2007-03-19 Thread svilen


 Michael Bayer wrote:
  So you can take the post_update out and update to rev 2424.


I have a problem with this rev 2424; before it, things were OK.
As I can't isolate a short test case right now, here are the conditions:

multi-table-inheritance, polymorphic. Nothing else too fancy. 
table_Entity has primary db_id, hand-made/sequence obj_id and other 
data;
While populating the database:
...
* SA: INFO INSERT INTO Entity (disabled, obj_id, atype) VALUES 
(?, ?, ?)
* SA: INFO [0, 1, 'Person']
* SA: INFO INSERT INTO Person (name_id, db_id) VALUES (?, ?, )
* SA: INFO [2, 1]

and now 2423 does: 
* SA: INFO COMMIT

while 2424 does:
* SA: INFO INSERT INTO Entity (disabled, obj_id, db_id, atype) 
VALUES (?, ?, ?, ?)
* SA: INFO [0, 1, 1, 'Person']
* SA: INFO ROLLBACK


for whatever reason it decides that the same object is dirty AND new, 
and attempts to reinsert it once more.
Any idea? Do you want logs?




[sqlalchemy] Re: Conflicting columns with secondaryjoin

2007-03-19 Thread Mel Collins

 Apparently my reply a couple of days ago didn't appear, hmm. No
matter, it was a load of old tosh anyway.

 The issue was resolved by my updating to 0.3.5. I was still using
0.3.3 at the time - should've checked that.
 However I then started experiencing strangeness in that SA was
attempting to insert the same row twice into the secondaryjoin table
(causing an IntegrityError). Having just noticed a mistake in my code,
it seems I was trying to put an instance of the secondaryjoin table
into the 'tags' list attribute, where an instance of the primaryjoin
should've gone...
 Is there some way to protect against/catch such things?
WrongObjectTypeYouEejitError? ;)

 Also, the tags attribute doesn't seem to have any problem with
flushing AFAICT, it works exactly as expected. :)

 Takk,
 - Mel C





[sqlalchemy] eager load with polymorphic

2007-03-19 Thread che

Hi,
I tried to eager load some data from a class that has a reference to
a polymorphic class. The object model is: class Base is inherited by
both Manager and Address, and Manager has a reference to Address. When
I stop here, SA can eager load the addresses of managers. The problem
arises when I add another class MailAddress that inherits Address: in
this case the addresses of Managers are simply lazy loaded, despite
the relation being explicitly set to eager. Maybe the problem is that
somehow polymorphic and eager loading cannot be combined?
Please advise.
TIA
Stefan


Below is example of code demonstrating the behavior:
from sqlalchemy import *
db_sqlite = create_engine( 'sqlite:///:memory:')
MODEL_WITH_MAIL_ADDR = 1   # 1 - WITH MAILADDRESS, 0 - WITHOUT MAILADDRESS

def checkWith( db):
meta = BoundMetaData( db)
meta.engine.echo = 0
table_Base = Table( 'base', meta,
Column( 'discriminator', type= String, ),
Column( 'id', Integer, primary_key= True, ),
)
table_Manager = Table( 'manager', meta,
Column( 'address_id', Integer, ForeignKey( 'address.id', name=
'address_id_fk',), ),
Column( 'name', type= String, ),
Column( 'id', Integer, ForeignKey('base.id', name=
'manager_inh_fk'), primary_key= True, ),
)
table_Address = Table( 'address', meta,
Column( 'country', type= String, ), #for case without
mailaddress
Column( 'city', type= String, ),
Column( 'id', Integer, ForeignKey('base.id', name=
'address_inh_fk'), primary_key= True, ),
)
if MODEL_WITH_MAIL_ADDR:
table_Mail = Table( 'mailaddress', meta,
Column( 'country', type= String, ),
Column( 'id', Integer, ForeignKey('base.id', name=
'address_inh_fk'), primary_key= True, ),
)
class Base( object):
def set( me, **kargs):
for k,v in kargs.iteritems(): setattr( me, k, v)
return me
def __str__(me): return str(me.__class__.__name__)
+':'+str(me.name)
__repr__ = __str__
class Manager( Base):
def __str__(me):
res = Base.__str__(me)+':'+str(me.address.city)
if MODEL_WITH_MAIL_ADDR:
res += ':'+str(me.address.country)
return res
__repr__ = __str__
class Address( Base):   pass
class MailAddress( Address):   pass
meta.create_all()
def make_mappers():
propdic = {
'manager':  table_Base.join( table_Manager, onclause=
(table_Manager.c.id==table_Base.c.id)),
'address':  table_Base.join( table_Address, onclause=
(table_Address.c.id==table_Base.c.id)),
'base':
table_Base.select( table_Base.c.discriminator=='base'),
}
if MODEL_WITH_MAIL_ADDR:
propdic.update( { 'mailaddress':
table_Base.join( table_Address.join( table_Mail
, onclause=
(table_Mail.c.id==table_Address.c.id))
, onclause=
(table_Address.c.id==table_Base.c.id)
), })
join_Address = polymorphic_union( {
'address':  table_Base.join( table_Address, onclause=
(table_Address.c.id==table_Base.c.id)),
'mailaddress':  table_Base.join( table_Address,
onclause=
(table_Address.c.id==table_Base.c.id)).join( table_Mail,
onclause=
(table_Mail.c.id==table_Address.c.id)
),
}, None)
join_Base = polymorphic_union( propdic, None)
mapper_Base = mapper( Base, table_Base, select_table=
join_Base
, polymorphic_on= join_Base.c.discriminator,
polymorphic_identity= 'base')
mapper_Manager = mapper( Manager, table_Manager,
properties= { 'address' : relation( Address, primaryjoin=
(table_Manager.c.address_id==table_Address.c.id)) }
, inherits= mapper_Base
, inherit_condition= (table_Manager.c.id==table_Base.c.id)
, polymorphic_identity='man'
)
mapper_Address = mapper( Address, table_Address
, inherits= mapper_Base
, inherit_condition= (table_Address.c.id==table_Base.c.id)
, polymorphic_identity='adr'
)
if MODEL_WITH_MAIL_ADDR:
mapper_Address.select_table = join_Address
mapper_MailAddress = mapper( MailAddress, table_Mail
, inherits= mapper_Address
, inherit_condition=
(table_Mail.c.id==table_Address.c.id)
, polymorphic_identity='mai'
)

make_mappers()
if not MODEL_WITH_MAIL_ADDR:
MailAddress = Address
c = Manager().set( name= 'pencho')
d = Manager().set( name= 'torencho')
e = Manager().set( name= 'mnogoVojdMalkoIndianec')
f = MailAddress().set( city= 'varna', country= 'BG')
g = MailAddress().set( city= 'sofia', country= 'RU')
h = MailAddress().set( city= 'burga', country= 'US')
c.address, d.address, e.address = f, g, h
session = create_session()
session.save(c)
session.save(d)

[sqlalchemy] SELECT 'a fixed string', ...

2007-03-19 Thread Bertrand Croq

hi,

I am currently using sqlalchemy to build SQL queries and it's a fantastic 
tool! Right now, I am looking for a way to build:

 SELECT 'a_fixed_string', atable.col1, atable.col2
 FROM atable

using the syntax:

 select([XXX, atable.c.col1, atable.c.col2])

but I don't know what to put at XXX.

Does anyone have a clue ?

-- 
Bertrand Croq,
Ingénieur Développement
___
Net-ng  Tel  : +33 (0)223 21 21 52
Immeuble Germanium  Fax  : +33 (0)223 21 21 60
80 av. des Buttes de CoesmesWeb   : http://www.net-ng.com
35000 RENNESe-mail: [EMAIL PROTECTED]
FRANCE


Ce message et tout document joint sont confidentiels. Toute diffusion ou
publication en est interdite. Si vous recevez ce message par erreur,
merci d'en avertir immédiatement l'expéditeur par e-mail et de
supprimer ce message et tout document joint.

This message and any attachment are confidential. Any use is prohibited
except formal approval. If you receive this message in error, please
notify the sender by return e-mail and delete this message and any
attachment from your system.






[sqlalchemy] SQLite and ON CONFLICT clause?

2007-03-19 Thread Karlo Lozovina

Hi all,

browsing the SQLAlchemy docs I couldn't find any references to a
particular SQLite feature, the ON CONFLICT clause (http://www.sqlite.org/
lang_conflict.html). Is there any support for this in SA? What I want
to do is this: create my database as such:

CREATE TABLE songs(
id INTEGER PRIMARY KEY ON CONFLICT REPLACE,
name TEXT
);

Now, when I do a bunch of INSERTs without specifying the 'id' field,
everything works as you would expect. But when I issue an INSERT with
an 'id' that already exists, SQLite automatically and automagically
does an UPDATE instead.

Now, this is very handy because I don't have to do a SELECT prior to
modifying a table. Is there a way to use this SQLite feature from SA?
Or somehow simulate it without too much gory details?

Thanks guys,
klm.





[sqlalchemy] Firebird backend

2007-03-19 Thread Lele Gaifax

Hi all,

I attached a snapshot of my current source of the Firebird backend for 
SA to ticket #410. This version passes several more unittests, in 
particular wrt the supported types: BLOBs and Unicode should work now.

Since I'd be very sad knowing I'm breaking someone else's project, I'd 
like to hear a voice from some other FB user before asking Michael to 
accept this work.

There are mainly two areas that still need some work:
a) better handling of dialect 1 (a.k.a. Interbase 5.x) datatypes
b) correcting the way the backend deals with case-sensitivity

a) requires some way to reach the actual connection from 
dialect.type_descriptor(), which needs to know which dialect the 
connection is speaking in order to give the right answer.

b) is now alleviated, in dialect.has_table(), by using the LIKE operator 
instead of a straight = on the table name; moreover, 
dialect.reflecttable() now initializes the case_sensitive flag on each 
column, but this clearly needs more work, mainly for me to better 
understand the whole logic behind case_sensitive.

Please see:
http://www.sqlalchemy.org/trac/attachment/ticket/410/firebird.py

Thanks in advance for any feedback,
ciao, lele.




[sqlalchemy] Re: Quoting column names on table create?

2007-03-19 Thread Lele Gaifax

Karlo Lozovina wrote:
 I was just wondering, why does SA quote column names that have mixed
 case in them, and leave them unquoted for lowercase column names?

Because in general SQL engines tend to be case-insensitive, that is, 
under most of them the query

SELECT column_a, column_b FROM table

is perfectly equivalent to

SELECT COLUMN_A, Column_B FROM Table

To preserve the case of the entities you have to quote them (for example 
because you have two distinct tables, "table" and "Table"): this forces 
the engine to take the name literally, so that the first query above is 
*not* equivalent to the following

SELECT "column_a", "column_b" FROM "table"

at least not when the default case used by the engine is uppercase 
(just imagine that the database server changes every non-quoted entity 
name to either upper- or lower-case, and you are close to the facts :)

So when SA thinks that the entity name it is handling is case-sensitive 
(which is always the case when the name is CamelCased), it automatically 
puts quotes around it.

Hope this clarifies a little,
ciao, lele.




[sqlalchemy] Re: eager load with polymorphic

2007-03-19 Thread Michael Bayer


On Mar 19, 2007, at 8:29 AM, che wrote:


 Hi,
 I tried to eager load some data from class that have reference to
 polymorhic class. The object model is:  class Base inherited by both
 Manager and Address, Managers has reference to Address - when I stop
 here SA can eager loading the addresses of managers. The problem arise
 when I add another class MailAddress that inherits Address - in this
 case address of Managers are simply lazy loaded, despite  that it is
 explicitly set to be eager. Maybe the problem is that somehow
 polymorhic and eager cannot be combined?
 please advice


that's correct: in many cases eager loading will degrade to lazy  
loading, particularly when it detects a self-referential table  
relationship (which happens often with polymorphic relationships).   
If you turn on sqlalchemy.orm logging you should see this in the logs  
to confirm.
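For reference, turning on that logger with Python's stdlib logging module might look like this (a sketch assuming the sqlalchemy.orm logger name mentioned above):

```python
import logging

# Route log records to stderr and raise the ORM logger to DEBUG;
# "degrade to lazy loader" messages should then appear in the output.
logging.basicConfig()
logging.getLogger("sqlalchemy.orm").setLevel(logging.DEBUG)
```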






[sqlalchemy] Re: SELECT 'a fixed string', ...

2007-03-19 Thread Michael Bayer

use literal_column('fixedstring') in the column clause
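A note of caution: as far as I recall, literal_column() emits its argument into the statement verbatim, so a string literal needs to carry its own single quotes, e.g. select([literal_column("'a_fixed_string'"), atable.c.col1, atable.c.col2]). The target SQL itself can be sanity-checked with stdlib sqlite3 (table and column names taken from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE atable (col1 TEXT, col2 TEXT)")
conn.execute("INSERT INTO atable VALUES ('x', 'y')")

# The SQL we want SQLAlchemy to generate; note the fixed string
# carries its own single quotes inside the statement.
row = conn.execute(
    "SELECT 'a_fixed_string', col1, col2 FROM atable"
).fetchone()
print(row)  # ('a_fixed_string', 'x', 'y')
```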

On Mar 19, 2007, at 9:38 AM, Bertrand Croq wrote:


 hi,

 I am currently using sqlalchemy to build SQL queries and it's a  
 fantastic
 tool! By now, I am looking for a way to build:

  SELECT 'a_fixed_string', atable.col1, atable.col2
  FROM atable

 using the syntax:

  select([XXX, atable.c.col1, atable.c.col2])

 but I don't know what to put at XXX.

 Does anyone have a clue ?

 -- 
 Bertrand Croq,
 Ingénieur Développement
 ___
 Net-ng  Tel  : +33 (0)223 21 21 52
 Immeuble Germanium  Fax  : +33 (0)223 21 21 60
 80 av. des Buttes de CoesmesWeb   : http://www.net-ng.com
 35000 RENNESe-mail: [EMAIL PROTECTED]
 FRANCE

 
 Ce message et tout document joint sont confidentiels. Toute  
 diffusion ou
 publication en est interdite. Si vous recevez ce message par erreur,
 merci d'en avertir immédiatement l'expéditeur par e-mail et de
 supprimer ce message et tout document joint.
 
 This message and any attachment are confidential. Any use is  
 prohibited
 except formal approval. If you receive this message in error, please
 notify the sender by return e-mail and delete this message and any
 attachment from your system.
 


 





[sqlalchemy] Re: SQLite and ON CONFLICT clause?

2007-03-19 Thread Michael Bayer

since it's SQLite-specific, you should just use literal DDL statements  
to create the table with this feature.
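A minimal sketch of that approach, shown here with Python's stdlib sqlite3 driver (with SA one would issue the same CREATE TABLE string through the engine instead of building a Table()):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Literal DDL, as suggested -- Table() has no flag for this clause.
conn.execute("""
    CREATE TABLE songs (
        id INTEGER PRIMARY KEY ON CONFLICT REPLACE,
        name TEXT
    )
""")
conn.execute("INSERT INTO songs (id, name) VALUES (1, 'first')")
# Same id again: SQLite silently replaces the existing row.
conn.execute("INSERT INTO songs (id, name) VALUES (1, 'second')")

rows = conn.execute("SELECT id, name FROM songs").fetchall()
print(rows)  # [(1, 'second')]
```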

On Mar 19, 2007, at 10:57 AM, Karlo Lozovina wrote:


 Hi all,

 browsing the SQLAlchemy docs I coudn't find any references to a
 particular SQLite feature, ON CONFLICT clause (http://www.sqlite.org/
 lang_conflict.html), is there any support for this in SA? What I want
 to do is this, create my database as such:

 CREATE TABLE songs(
 id INTEGER PRIMARY KEY ON CONFLICT REPLACE,
 name TEXT
 );

 Now, when I do a bunch of INSERTs without specifying 'id' field,
 everything works as you would expect. But, when I issue a INSERT with
 a 'id' that allready exists, SQLite automatically and automagicaly
 does an UPDATE instead.

 Now, this is very handy because I don't have to do a SELECT prior to
 modifying a table. Is there a way to use this SQLite feature from SA?
 Or somehow simulate it without too much gory details?

 Thanks guys,
 klm.


 





[sqlalchemy] Re: SQLite and ON CONFLICT clause?

2007-03-19 Thread Michael Bayer


On Mar 19, 2007, at 12:04 PM, Karlo Lozovina wrote:


 Btw, is it probable that SA will some day include SQLite's ON CONFLICT
 clause (both the CREATE TABLE and INSERT OR forms)? It's rather ugly
 to have to specify my tables manually using .execute().


the issue is that every database has dozens if not hundreds of  
custom, non-SQL syntaxes when creating tables.  Some method of  
allowing these syntaxes generically would have to be devised, but  
even that would not make everyone's syntaxes available.  I  
definitely can't just add flags for every kind of option to the Table  
API; there'd be hundreds.







[sqlalchemy] Re: cascading question

2007-03-19 Thread Michael Bayer

yes, that's ticket #249.  I've expanded the scope of that ticket since  
this particular pattern is much more general usage than what I  
initially thought it was.

anyway this will be the next ticket I fix since it's pretty major.


On Mar 19, 2007, at 5:54 AM, Alexandre CONRAD wrote:

 Hello Mike,

 Here's a simple test case script that creates tables in sqlite  
 (memory).

 To resume, we have site_table table and option_table objects as a
 many-to-many relationship through a secondary weak table
 options_has_sites.

 I want to be able to:
 - delete a site without deleting the attached options.
 - delete an option without deleting the attached sites.
 - make sure the weak table is beeing cleaned up correctly depending  
 if a
 site or an option has been removed.

 Regards,
 -- 
 Alexandre CONRAD



 Michael Bayer wrote:

 the rows in the M2M table should be deleted automatically.  It
 *might* require that your instances are present in the session, but
 it's not supposed to.  Can you make me a short test case for this
 one?  It's not the first time I've heard about it.

 technically you can just put ON DELETE CASCADE on the tables too, but
 I'd like to fix this issue.


 On Mar 16, 2007, at 12:19 PM, Alexandre CONRAD wrote:


 Humm, this doesn't help, as if a site is deleted, it deletes the
 options
 that were related to that site.

 I want to be able to:
 - delete an option without deleting a site
 - delete a site without deleting an option

 just delete (clean up) the related rows inside the weak
 options_has_sites table.

 I just can't figure it out...


 Michael Bayer wrote:


 you'd need to add some delete cascades to your relationship, maybe
 on just the backref (using the backref() function), e.g.

 option_mapper = assign_mapper(ctx, Option, option_table,
  properties={
  'sites':relation(Site, backref=backref("options",
 cascade="save-update, delete"),
 secondary=options_has_sites, cascade="save-update"),
  },
  order_by=option_table.c.name,
 )



 On Mar 16, 2007, at 10:38 AM, Alexandre CONRAD wrote:



 Hello,

 I have a many-to-many relation between an option table and a site
 table.

 Deleting an option correctly deletes the related rows in
 options_has_sites table.

 But when I delete a site, I have some orphan rows in the
 options_has_sites table. How can I avoid this ?


 # SITE TABLE ---
 site_table = Table('sites', meta,
Column('id', Integer, primary_key=True),
Column('name', Unicode(20), nullable=False, unique=True),
 )

 class Site(object):
pass

 site_mapper = assign_mapper(ctx, Site, site_table,
order_by=site_table.c.name,
 )


 # OPTION TABLE 
 option_table = Table('options', meta,
Column('id', Integer, primary_key=True),
Column('name', Unicode(20), unique=True, nullable=False),
 )

 options_has_sites = Table('sites_has_options', meta,
Column('id_site', None, ForeignKey('sites.id'),
 primary_key=True),
Column('id_option', None, ForeignKey('options.id'),
 primary_key=True),
 )

 class Option(object):
pass

 option_mapper = assign_mapper(ctx, Option, option_table,
properties={
 'sites':relation(Site, backref="options",
  secondary=options_has_sites, cascade="save-update"),
},
order_by=option_table.c.name,
 )


 Should I play with backref() ?

 Regards,
 -- 
 Alexandre CONRAD










 -- 
 Alexandre CONRAD - TLV FRANCE
 Research  Development














 
 from sqlalchemy import *

 meta = DynamicMetaData()

 # Sites
 site_table = Table('sites', meta,
 Column('id', Integer, primary_key=True),
 Column('name', Unicode(20), nullable=False, unique=True),
 Column('email', Unicode(100)),
 )

 class Site(object):
 pass

 site_mapper = mapper(Site, site_table,
 order_by=site_table.c.name,
 )


 # Options
 option_table = Table('options', meta,
 Column('id', Integer, primary_key=True),
 Column('name', Unicode(20), unique=True, nullable=False),
 Column('description', Unicode(40)),
 )

 options_has_sites = Table('options_has_sites', meta,
 Column('id_site', None, ForeignKey('sites.id'), primary_key=True),
 Column('id_option', None, ForeignKey('options.id'),  
 primary_key=True),
 )

 class Option(object):
 def __repr__(self):
 return "Option: %s" % repr(self.name)

 option_mapper = mapper(Option, option_table,
 properties={
 'sites':relation(Site, backref="options",  
 secondary=options_has_sites, cascade="save-update"),
 },
 order_by=option_table.c.name,
 )

 meta.connect("sqlite://", echo=True)
 meta.create_all()

 # Make session.
 session = create_session()

 # 

[sqlalchemy] Re: eager load with polymorphic

2007-03-19 Thread che

Thanks Michael,

Michael Bayer wrote:
 On Mar 19, 2007, at 8:29 AM, che wrote:
  Maybe the problem is that somehow
  polymorhic and eager cannot be combined?
  please advice
 

 thats correct, in many cases eager loading will degrade to lazy
 loading, particularly when it detects a self-referential table
 relationship (which happens often with polymorphic relationships).
 if you turn on sqlalchemy.orm logging you should see this in the logs
 to confirm.

unfortunately you are right - in the case of only Address I get in the
logs:
...
* SA: DEBUG eagerload scalar instance on [EMAIL PROTECTED]
...
and in the MailAddress case:
...
* SA: DEBUG degrade to lazy loader on [EMAIL PROTECTED]
* SA: DEBUG set instance-level lazy loader on
[EMAIL PROTECTED]
[* SA: DEBUG Executing lazy callable on [EMAIL PROTECTED]
* SA: DEBUG lazy load attribute address on instance [EMAIL PROTECTED]
...
Is it possible to work around this behaviour somehow now (or are there
plans for this in SA's future)?

regards,
Stefan





[sqlalchemy] Re: Inconsistent results in session.flush()

2007-03-19 Thread Michael Bayer

but sure give me the logs with sqlalchemy.orm == logging.DEBUG


On Mar 19, 2007, at 6:49 AM, svilen wrote:



 Michael Bayer wrote:
 So you can take the post_update out and update to rev 2424.


 i have a problem with this 2424, before that was ok.
 As i can't separate a short case now, here the conditions:

 multi-table-inheritance, polymorphic. Nothing else too fancy.
 table_Entity has primary db_id, hand-made/sequence obj_id and other
 data;
 While populating the database:
 ...
 * SA: INFO INSERT INTO Entity (disabled, obj_id, atype) VALUES
 (?, ?, ?)
 * SA: INFO [0, 1, 'Person']
 * SA: INFO INSERT INTO Person (name_id, db_id) VALUES (?, ?, )
 * SA: INFO [2, 1]

 and now 2423 does:
 * SA: INFO COMMIT

 while 2424 does:
 * SA: INFO INSERT INTO Entity (disabled, obj_id, db_id, atype)
 VALUES (?, ?, ?, ?)
 * SA: INFO [0, 1, 1, 'Person']
 * SA: INFO ROLLBACK


 for whatever reason it decides that the same object is dirty AND new,
 and attempts to reinsert it once more.
 Any idea? do u want logs?

 





[sqlalchemy] Possible to detect if a mapper property has been loaded or not?

2007-03-19 Thread Rick Morrison
I've got a sort of odd situation where I would like to detect if a (lazy)
relation property of a mapper has been already
loaded or not without triggering an automatic lazy load. Is it
possible to do that?

Thanks,
Rick




[sqlalchemy] Re: Possible to detect if a mapper property has been loaded or not?

2007-03-19 Thread Michael Bayer

I think if you just look in the object's __dict__ for the attribute  
and it's not there, the lazy load didn't happen yet.
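SQLAlchemy's instrumented attributes differ in detail, but the principle can be sketched with a plain, made-up descriptor (lazy_attr, Order and items are illustrative names): the attribute only lands in the instance __dict__ once the loader has fired.

```python
# Made-up names for illustration only -- not SA's instrumentation.
class lazy_attr(object):
    def __init__(self, loader):
        self.loader = loader
    def __set_name__(self, owner, name):
        self.name = name
    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        value = self.loader(obj)         # the "lazy load" fires here
        obj.__dict__[self.name] = value  # cache on the instance
        return value

class Order(object):
    items = lazy_attr(lambda self: ["widget"])

o = Order()
print('items' in o.__dict__)  # False: not loaded yet
o.items                       # first access triggers the load
print('items' in o.__dict__)  # True: value is now in __dict__
```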

On Mar 19, 2007, at 1:04 PM, Rick Morrison wrote:

 I've got a sort of odd situation where I would like to detect if a  
 (lazy) relation property of a mapper has been already loaded or not  
 without triggering an automatic lazy load. Is it possible to do that?

 Thanks,
 Rick

 





[sqlalchemy] Re: Possible to detect if a mapper property has been loaded or not?

2007-03-19 Thread Rick Morrison
Yep, that seems to work, thanks Mike.

On 3/19/07, Michael Bayer [EMAIL PROTECTED] wrote:


  I think if you just look in the object's __dict__ for the attribute
  and it's not there, the lazy load didn't happen yet.

 On Mar 19, 2007, at 1:04 PM, Rick Morrison wrote:

  I've got a sort of odd situation where I would like to detect if a
  (lazy) relation property of a mapper has been already loaded or not
  without triggering an automatic lazy load. Is it possible to do that?
 
  Thanks,
  Rick
 
  


 





[sqlalchemy] Re: cascading question

2007-03-19 Thread Michael Bayer
OK, that ticket is fixed in the trunk.


On Mar 19, 2007, at 5:54 AM, Alexandre CONRAD wrote:

 from sqlalchemy import *

 meta = DynamicMetaData()

 # Sites
 site_table = Table('sites', meta,
 Column('id', Integer, primary_key=True),
 Column('name', Unicode(20), nullable=False, unique=True),
 Column('email', Unicode(100)),
 )

 class Site(object):
 pass

 site_mapper = mapper(Site, site_table,
 order_by=site_table.c.name,
 )


 # Options
 option_table = Table('options', meta,
 Column('id', Integer, primary_key=True),
 Column('name', Unicode(20), unique=True, nullable=False),
 Column('description', Unicode(40)),
 )

 options_has_sites = Table('options_has_sites', meta,
 Column('id_site', None, ForeignKey('sites.id'), primary_key=True),
 Column('id_option', None, ForeignKey('options.id'),  
 primary_key=True),
 )

 class Option(object):
 def __repr__(self):
 return "Option: %s" % repr(self.name)

 option_mapper = mapper(Option, option_table,
 properties={
 'sites':relation(Site, backref="options",  
 secondary=options_has_sites, cascade="save-update"),
 },
 order_by=option_table.c.name,
 )

 meta.connect("sqlite://", echo=True)
 meta.create_all()

 # Make session.
 session = create_session()

 # Inject sites and options.
 for i in range(1, 4):
     o = Option()
     o.name, o.description = "opt%d" % i, "This is option %d" % i
     session.save(o)
     s = Site()
     s.name, s.email = "site%d" % i, "[EMAIL PROTECTED]" % i
     session.save(s)

 session.flush()
 session.clear()

 print \n### Now, let's query for site 1.
 s = session.query(Site).get(1)
 opts = session.query(Option).select()
 print \n### Add options from 1 to 3 to the site.
 s.options = opts[0:3] # Put option 1, 2 and 3
 session.flush()
 session.clear()

 print \n### Now, let's query for site 1 again.
 s = session.query(Site).get(1)
 print \n### Check the site has the options 1, 2 and 3.
 print s.options, !!!we should have opt 1, 2 and 3 here!!!
 session.clear()

 print \n### If it has, we should have pairs of (id_site, id_opt)  
 in options_has_sites.
 print options_has_sites.select().execute().fetchall(), !!!we  
 should have 3 pairs here!!!

 print \n### Now, let's query for option 1.
 o = session.query(Option).get(1)
 print \n### Now remove option 1.
 session.delete(o)
 session.flush()
 session.clear()

 print \n### Now, let's query for site 1 again.
 s = session.query(Site).get(1)
 print \n### Check what options has the site. Option 1 should be  
 removed from sites.
 print s.options, !we should only have opt2 and opt3 in  
 here!
 session.clear()

 print \n### Now check that the row (id_site, id_opt) for option  
 1 should be removed from table options_has_sites.
 print options_has_sites.select().execute().fetchall(), !!!we  
 should only have 2 pairs now that opt1 was removed!!!

 print \n### Let's query for site 1.
 s = session.query(Site).get(1)
 print \n### Now let's delete the site.
 session.delete(s)
 session.flush()
 session.clear()

 print \n### We should still have option 2 and 3, even if the  
 site was deleted.
 print session.query(Option).select()

 print \n### The rows (id_site, id_opt) for option 2 and 3 of the  
 site should be removed from options_has_sites.
 print options_has_sites.select().execute().fetchall(), !We  
 want this cleaned up and empty now that we have removed the site.  
 How to do that ??!





[sqlalchemy] Re: pyodbc and tables with triggers

2007-03-19 Thread polaar

Sorry for the delay ;-) No objections. I was just trying to figure out
what was happening. (I didn't get at first that "module selection is a
bit ugly" also meant that it didn't work if you specified the module.)
But you already have a patch, it seems. I'll try and test it out.

Steven

On 15 mrt, 15:43, Rick Morrison [EMAIL PROTECTED] wrote:
 Sorry, Stephen, I replied too early; your second email arrived before the
 first. A whole day before the first.

 So until we get a real cleanup, you're looking to try modules in this order:

   ['pyodbc', 'adodbapi', 'pymssql']

 Sounds OK to me -- any objections out there?

 Rick

 On 3/14/07, Rick Morrison [EMAIL PROTECTED] wrote:



  It's the second case, that is, it sniffs out what modules are installed.
  As I said before, this
  (along with other modules that effectively do the same thing), is up for a 
  clean-up soon, see ticket #480.

  Rick

  On 3/14/07, polaar [EMAIL PROTECTED] wrote:

{'pyodbc': use_pyodbc, 'adodbapi': use_adodbapi, 'pyodbc':
use_pyodbc}.get(module.__name__, use_default)()

   Sorry, should be pymssql instead of pyodbc twice, but I guess you got
   that...
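With Steven's correction applied, the dict-dispatch pattern being discussed looks like this. The `use_*` callables here are hypothetical stand-ins for the dialect's real setup functions, not SQLAlchemy's actual code:

```python
import types

# Hypothetical setup functions, one per supported DBAPI module.
def use_pyodbc():   return "pyodbc"
def use_adodbapi(): return "adodbapi"
def use_pymssql():  return "pymssql"
def use_default():  return "default"

def select_module_setup(module):
    """Dispatch on the DBAPI module's name; fall back to a default.

    The original snippet listed 'pyodbc' twice; the duplicate key
    silently shadowed the first, which is why 'pymssql' never matched.
    """
    return {
        'pyodbc': use_pyodbc,
        'adodbapi': use_adodbapi,
        'pymssql': use_pymssql,
    }.get(module.__name__, use_default)()

print(select_module_setup(types.ModuleType('pymssql')))  # pymssql
print(select_module_setup(types.ModuleType('sqlite3')))  # default
```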





[sqlalchemy] Re: server_side_cursors

2007-03-19 Thread Michael Bayer

OK, ive added ticket #514 to remind me to evaluate/test the patch  
out..will try to commit by tomorrow.

On Mar 18, 2007, at 3:16 PM, Steve Huffman wrote:

 Here's a patch that adds a DelayedMetaResultProxy(ResultProxy) class.

 In the case of fetchone and fetchmany, it overwrites itself with the
 ResultProxy's version after the first call. Actually, fetchone calls
 the super class's initially and simply sets the metadata before
 returning. I would have done that for all of the affected functions
 with a decorator, but the close() call in the others causes trouble.

 On 3/17/07, Michael Bayer [EMAIL PROTECTED] wrote:

 no its a psycopg thing, its like this:

 # server side cursor (giving it a name makes it server side.
 # psycopg2 devs think thats a good API.)
 cursor = connection.cursor("x")

 # execute.  cursor now has pending results.
 cursor.execute("select * from table")

 SQLAlchemy's result wrapper, ResultProxy, then calls:

 metadata = cursor.metadata

 in order to get information about the columns in the result set.  but
 *this fails*, because we are using server side cursors, and
 metadata apparently does not know to fetch its info from the
 server.  doesnt happen on my machine.  not sure if this changes due
 to psycopg2 versions, PG setup, or what.  if we can determine its a
 psycopg2 version issue, then everyone can just upgrade.

 anyway what psycopg wants you to do is:

 row = cursor.fetchone()
 metadata = cursor.metadata

 then it works.   a tweaked out ResultProxy is needed to handle this
 extra complexity in this case.

 if youre saying we shouldnt use metadata at all, that doesnt work
 because ResultProxy is globally used for all results regardless of
 their source, textual, column-based, and otherwise.  also the
 databases will often not give you back the same column name as what
 you gave it (case conventions, etc) and in some cases we dont even
 have a column name to start with (like select some_function()).
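The patch's core idea — defer reading the cursor's column metadata until after the first fetch — can be sketched in plain Python with a stand-in cursor. Note that in DB-API terms the attribute is `cursor.description` (the message above calls it "metadata"); the class names here are illustrative, not the actual patch:

```python
class FakeServerSideCursor:
    """Stand-in cursor: description is unavailable until fetchone() runs."""
    def __init__(self, rows, description):
        self._rows = iter(rows)
        self._description = description
        self.description = None  # not populated yet, as with a named cursor

    def fetchone(self):
        row = next(self._rows, None)
        self.description = self._description  # metadata appears after first fetch
        return row

class DelayedMetadataProxy:
    """Wraps a cursor; reads column metadata only after the first fetch."""
    def __init__(self, cursor):
        self.cursor = cursor
        self.keys = None

    def fetchone(self):
        row = self.cursor.fetchone()
        if self.keys is None:
            # Safe now: the underlying fetch has populated description.
            self.keys = [col[0] for col in self.cursor.description]
        return dict(zip(self.keys, row)) if row is not None else None

cur = FakeServerSideCursor([(1,), (2,)], [("id",)])
proxy = DelayedMetadataProxy(cur)
print(proxy.fetchone())  # {'id': 1}
```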

 On Mar 17, 2007, at 8:41 PM, Steve Huffman wrote:


 I may be missing something fundamental here, but why doesn't it
 already know the metadata since I defined the columns in which I'm
 interested?

 thing_table = sa.Table("thing", md, sa.Column('id', sa.Integer, primary_key=True))

 On 3/17/07, Michael Bayer [EMAIL PROTECTED] wrote:

 the cursor metadata often cannot be read until fetchone() is   
 called
 first.  the current result set implementation we have doesnt call
 fetchone() before it tries to get the metadata, and normally it
 shouldnt (since the result set doesnt even know if its the result
 of a
 select/insert/whatever).   id like an alternate result set class
 to go
 into effect when PG/server side cursors/select is used to do  
 this, i
 think someone was supposed to send a patch.  its hard for me to
 develop since my version of PG 8.1 doesnt seem to reproduce the
 issue.

 On Mar 17, 8:14 pm, [EMAIL PROTECTED]
 [EMAIL PROTECTED] wrote:
 I was excited to see the server_side_cursors option that was added
 recently.

 I saw the reports of it not working with autoload = True, but I've
 been having trouble getting it to work at all.

 When attempting to select a row using:

 t2.select().execute().fetchone()

 I get:

 INFO sqlalchemy.engine.base.Engine.0x..d0 SELECT thing.id FROM thing
 INFO sqlalchemy.engine.base.Engine.0x..d0 {}
 Traceback (most recent call last):
   File "stdin", line 1, in ?
   File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/base.py", line 811, in __repr__
   File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/base.py", line 671, in _get_col
   File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/base.py", line 659, in _convert_key
 sqlalchemy.exceptions.NoSuchColumnError: Could not locate column in row for column '0'

 This query runs fine without server_side_cursors = True

 Any suggestions?

 
 postgres_delay_metadata.patch





[sqlalchemy] How to model an odd many-to-many relationship?

2007-03-19 Thread Mike Kent

I have a many-to-many relationship between two entities, call them A
and B.  This is easy enough to model in sqlalchemy using Elixir.
However, to complicate things, I need an integer column somewhere
called 'priority'.  In the relationship between an A and multiple Bs,
I want the Bs to be ordered by the value of the 'priority' column.  In
the relationship between a B and multiple As, the value of 'priority'
is irrelevant.

The problem is, I don't know what entity needs to have the 'priority'
column in it, or how to model this in the relationship between A and
B.  I thought that maybe the 'priority' column should be in the
secondary table that handles the many-to-many relationship between A
and B, but I'm not sure how to set that up, and I'm not sure that
Elixir can handle that at all.

I'd appreciate it if someone can point me in the right direction on
this.
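The usual answer is that 'priority' belongs on the secondary (association) table itself: it describes the link between an A and a B, not either entity alone. Independent of Elixir, the schema idea can be sketched with plain sqlite3 (all names hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (id INTEGER PRIMARY KEY);
    CREATE TABLE b (id INTEGER PRIMARY KEY, name TEXT);
    -- 'priority' lives on the association row: it is a property of the
    -- A<->B link, not of A or B themselves.
    CREATE TABLE a_b (
        a_id INTEGER REFERENCES a(id),
        b_id INTEGER REFERENCES b(id),
        priority INTEGER NOT NULL,
        PRIMARY KEY (a_id, b_id)
    );
""")
conn.execute("INSERT INTO a VALUES (1)")
conn.executemany("INSERT INTO b VALUES (?, ?)",
                 [(1, 'low'), (2, 'high'), (3, 'mid')])
conn.executemany("INSERT INTO a_b VALUES (1, ?, ?)",
                 [(1, 30), (2, 10), (3, 20)])

# Bs for a given A, ordered by the link's priority; B->A ignores it.
rows = conn.execute("""
    SELECT b.name FROM b
    JOIN a_b ON a_b.b_id = b.id
    WHERE a_b.a_id = 1
    ORDER BY a_b.priority
""").fetchall()
print([r[0] for r in rows])  # ['high', 'mid', 'low']
```

In SQLAlchemy terms this maps to the "association object" pattern: map the a_b table to its own class so 'priority' is settable, and put an order_by on the A-side relation.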





[sqlalchemy] Re: Issue using rev 2425

2007-03-19 Thread Michael Bayer
OK the bad thing is that theres no way ppl are going to be able to  
figure out a relation like this one unless I add crazy amounts of  
docs...anyway, the answer is:

mapper(Section, section_table, properties={
    'items': relation(Item, backref='section'),
    'keywords': relation(Keyword,
        primaryjoin=and_(
            keyword_table.c.uuid==itemkeyword_table.c.keyword_uuid,
            item_table.c.uuid==itemkeyword_table.c.item_uuid,
            section_table.c.id==item_table.c.section_id),
        viewonly=True, foreign_keys=[keyword_table.c.uuid],
        remote_side=[item_table.c.section_id]),
})

the foreignkey argument is deprecated, and in the trunk the entire  
system of calculating what does it mean when we say A relates to B  
has been highly clarified and strictified, as well as the way it  
calculates the lazy clause which is the part that was breaking for  
you.  that logic doesnt find the things it wants to in that big join  
condition you gave it since it cant reconcile section_table against  
keyword_table in such a way that it also knows where to put the ?  
for the lazy clause (in previous versions, it made a very good guess,  
which guessed right for things like the above but guessed wrong for a  
lot of other stuff.   now it doesnt guess so much).  of course the  
improvement to make here is to anticipate that particular error and  
raise a nicer message earlier on.

anyway in the above, you give it the foreign key (or however many you  
want to describe) in foreign_keys, and then to tell the lazy loader  
which columns to put the ? in you use remote_side.  the above values  
are a little artificial to make the right thing happen.

alternatively, you can just force your own lazy loader, perhaps I  
should add a recipe for this...its just as functional (except wont  
work for an eager load) and is much more blunt in its intention:

from sqlalchemy.orm.session import attribute_manager
def load_keywords(instance):
    def lazyload():
        session = object_session(instance)
        return session.query(Keyword).select(and_(
            keyword_table.c.uuid==itemkeyword_table.c.keyword_uuid,
            item_table.c.uuid==itemkeyword_table.c.item_uuid,
            instance.id==item_table.c.section_id))
    return lazyload
attribute_manager.register_attribute(Section, 'keywords',
    uselist=True, callable_=load_keywords)

of course thats a variant on whats in the docs, which is just to use a  
Python property to make your own function (which IMHO makes the  
viewonly flag not really needed).
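The property variant is plain Python and needs no special ORM support. A generic sketch of the load-on-first-access idea, with a stub in place of the real keyword query (all names hypothetical):

```python
class Section(object):
    def __init__(self, loader):
        self._loader = loader    # e.g. a function issuing the keyword query
        self._keywords = None

    @property
    def keywords(self):
        # load lazily on first access, then cache the result
        if self._keywords is None:
            self._keywords = self._loader(self)
        return self._keywords

calls = []
def fake_query(section):
    calls.append(section)        # stands in for session.query(Keyword).select(...)
    return ['kw1', 'kw2']

s = Section(fake_query)
print(s.keywords)    # ['kw1', 'kw2'] -- the query runs here
print(s.keywords)    # cached; fake_query is not called again
print(len(calls))    # 1
```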

On Mar 19, 2007, at 9:56 PM, Steve Zatz wrote:

 import uuid as uuid_
 from sqlalchemy import *

 def get_uuid():
     a = str(uuid_.uuid4())
     a = a.strip('{}')
     return a.replace('-','')


 engine = create_engine('sqlite://')

 metadata = BoundMetaData(engine)
 engine.echo = True


 item_table = Table('item', metadata,
     Column('id', Integer, primary_key=True),
     Column('uuid', String(32), unique=True, nullable=False),
     Column('parent_uuid', String(32), ForeignKey('item.uuid'), nullable=True),
     Column('name', String(150)),
     Column('section_id', Integer, ForeignKey('section.id'))
 )

 section_table = Table('section', metadata,
     Column('id', Integer, primary_key=True),
     Column('name', String(25), unique=True, nullable=False),
 )

 keyword_table = Table('keyword', metadata,
     Column('uuid', String(32), primary_key=True),
     Column('name', String(25), unique=True, nullable=False)
 )

 itemkeyword_table = Table('item_keyword', metadata,
     Column('item_uuid', String(32), ForeignKey('item.uuid'), primary_key=True),
     Column('keyword_uuid', String(32), ForeignKey('keyword.uuid'), primary_key=True),
 )

 metadata.create_all()

 # class definitions
 class Item(object):
     def __init__(self, name=None, id=None, uuid=None, **kw):  # ? **kw: for x,y in kw, setattr(x) = y
         self.name = name
         self.id = id
         if uuid:
             self.uuid = uuid
         else:
             self.uuid = get_uuid()
         for k in kw:
             setattr(self, k, kw[k])

     def _get_keywords(self):
         return [ik.keyword for ik in self.itemkeywords]

     keywords = property(_get_keywords)

 class Section(object):
     def __init__(self, name=None, id=None):
         self.name = name
         self.id = id

 class Keyword(object):
     def __init__(self, name=None, id=None, uuid=None):
         self.name = name
         self.id = id
         if uuid:
             self.uuid = uuid
         else:
             self.uuid = get_uuid()

 class ItemKeyword(object):
     def __init__(self, keyword=None):
         if keyword:
             self.keyword = keyword

 mapper(Section, section_table, properties = {'items': relation
 (Item, backref='section'), 'keywords':relation(Keyword,
[sqlalchemy] Replicating a transaction

2007-03-19 Thread Benno Rice

Hi,

I'm wondering if it would be possible to have a situation where  
transactions coming in from the ORM system could be sent to multiple  
database servers at once.

If I were to look into adding this, where would be the best place to  
start?

Many thanks.

-- 
Benno Rice
[EMAIL PROTECTED]
http://jeamland.net/
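One blunt way to prototype this, before touching SQLAlchemy's engine internals, is a fan-out wrapper that applies every statement to several DBAPI connections and commits them together. The sketch below uses sqlite3 and is illustrative only: there is no two-phase commit, so a crash between the commits can leave the replicas diverged.

```python
import sqlite3

class FanOutConnection:
    """Apply each statement to every underlying DBAPI connection."""
    def __init__(self, connections):
        self.connections = connections

    def execute(self, sql, params=()):
        for conn in self.connections:
            conn.execute(sql, params)

    def commit(self):
        # No 2PC: a failure partway through leaves replicas inconsistent.
        for conn in self.connections:
            conn.commit()

    def rollback(self):
        for conn in self.connections:
            conn.rollback()

replicas = [sqlite3.connect(":memory:") for _ in range(2)]
fan = FanOutConnection(replicas)
fan.execute("CREATE TABLE t (x INTEGER)")
fan.execute("INSERT INTO t VALUES (?)", (42,))
fan.commit()

for conn in replicas:
    print(conn.execute("SELECT x FROM t").fetchone()[0])  # 42 on each replica
```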


