Re: [sqlalchemy] mapper on a temporary class

2021-01-27 Thread Kent Bower
Excellent.  As always, thanks very much for your time and answers (let
alone awesome software)!


On Wed, Jan 27, 2021 at 1:25 PM Mike Bayer  wrote:

>
>
> On Tue, Jan 26, 2021, at 9:01 PM, Kent Bower wrote:
>
> Thanks a ton for your responses.
>
> Do all the normal columns of an aliased class need to match the ad-hoc
> select to which I map the alias?
>
>
> handwavy handwavy, sort of yes, sort of no?  There needs to be a 1-1
> correspondence of columns, yes; an AliasedClass can't define additional
> attributes that aren't there normally.  As to how they are lined up, there's
> an option to do it by name in the aliased() function, adapt_on_names.
> Otherwise it's expected that your select() is derived from the Table
> objects that are originally mapped.
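A minimal sketch of the two line-up strategies described above; the User class and its columns are hypothetical, and this assumes SQLAlchemy 1.4-style select():

```python
# Hedged sketch: an AliasedClass mapped 1-1 onto an ad-hoc select().
# The User class, table, and column names are illustrative only.
from sqlalchemy import Column, Integer, String, select
from sqlalchemy.orm import aliased, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

# A select() derived from the originally mapped Table: the columns
# correspond automatically, no name matching needed.
stmt = select(User.id, User.name).where(User.name.like("k%")).subquery()
UserSubset = aliased(User, stmt)

# If the selectable only matches by column *name* rather than being derived
# from the mapped Table, adapt_on_names=True lines the columns up by name.
UserByName = aliased(User, stmt, adapt_on_names=True)
```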
>
>
>
> > oh if the class doesn't have a mapper, then definitely, just make
> ad-hoc subclasses of it and map to those.  Vastly easier that way.
>
> Mapping to a subclass would be different from mapping directly to an
> ad-hoc class?
>
>
> if the class itself is ad-hoc, then you're fine, you can make new classes
> with mappers all you want and they are GCed if you lose them (assuming no
> relationships involving long-lived classes with backrefs), yes.
>
>
>  (Mostly I’m concerned whether the very act of adding a mapper to a class
> will keep the reference and prevent its garbage collection or in some other
> way modify the “main” mappers, especially if this is done via a thread.
> Like, would that modify the compiled mappers for the entire process...
> these are the things running through my head.)
>
>
> the configure_mappers() function these days is threadsafe, it uses a
> mutex, so you are "good" there, but there's a mutex involved if you are
> looking for high levels of concurrency.   if your ad-hoc classes are not
> subclasses of any long-lived classes and if you are careful to limit
> relationships to only point to long-lived classes and not have any
> backrefs, it should be OK.  I'd test it though :)  Set up a weakref.ref()
> to your ad-hoc mapper objects and make sure those ref objects get their
> callback hook invoked.
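The weakref check Mike suggests can be sketched with a plain ad-hoc class; no SQLAlchemy is needed to show the technique itself:

```python
# Sketch of the GC check: create an ad-hoc class, hold only a weakref to it,
# drop the strong reference, and confirm the weakref callback fires.
import gc
import weakref

collected = []

def make_adhoc():
    # Stand-in for an ad-hoc (mapped) subclass created on the fly.
    class AdHoc:
        pass
    return AdHoc

cls = make_adhoc()
ref = weakref.ref(cls, lambda r: collected.append("collected"))

del cls
gc.collect()  # class objects live in reference cycles, so force a GC pass

assert ref() is None            # the ad-hoc class was garbage collected
assert collected == ["collected"]  # the callback hook was invoked
```

The same pattern applies to a dynamically mapped class: if the weakref's callback never fires, something (a relationship, a backref, a registry) is still holding the class alive.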
>
>
> On Tue, Jan 26, 2021 at 8:18 PM Mike Bayer 
> wrote:
>
>
>
>
> On Tue, Jan 26, 2021, at 7:31 PM, Kent Bower wrote:
>
> I should have given these details from the get-go:  the use case is a
> specialized select() (dynamically built) which would be extremely
> convenient to map relationships against for convenience in subquery
> loading, etc. So, the class would not already have a mapper.  Can I pass
> non_primary=True anyway, or won’t this work?
>
>
> oh if the class doesn't have a mapper, then definitely, just make ad-hoc
> subclasses of it and map to those.  Vastly easier that way.
>
>
>
>
>
>
> On Tue, Jan 26, 2021 at 6:18 PM Mike Bayer 
> wrote:
>
>
>
>
> On Tue, Jan 26, 2021, at 3:16 PM, Kent wrote:
>
> Question: if I add a mapper to a class that is only needed temporarily,
> does using the mapper compile it alongside my "normal" mappers such that
> I'll leak memory when I mean for the class to be garbage collected?
>
> Put another way, can I add a mapper to a class that doesn't influence my
> "main mappers" and gets thrown away when the class is thrown away or is
> that not possible?
>
>
> Assuming the class already has a mapper, and this is an additional mapper
> that maps the class in a different way, this feature is called a
> "non-primary mapper" and requires that you pass the non_primary=True flag.
>   If you are using non-primary mappers, then yes they get garbage collected.
>
> This is also a deprecated feature that won't be in SQLAlchemy 2.0.  The
> only use case we can identify for "non-primary mappers" is when you want to
> create a relationship() to one, and for that use case we now support the
> "Relationship to AliasedClass" pattern.
>
> For anything else involving retrieving a class from an alternate
> selectable of some kind, you should be able to use AliasedClass, e.g.
> sqlalchemy.orm.aliased(), so I would recommend using that instead unless
> you are on a very old version of SQLAlchemy.
>
> Thanks in advance,
> Kent
>
>
> --
> SQLAlchemy -
> The Python SQL Toolkit and Object Relational Mapper
>
> http://www.sqlalchemy.org/
>
> To post example code, please provide an MCVE: Minimal, Complete, and
> Verifiable Example. See http://stackoverflow.com/help/mcve for a full
> description.
> ---
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe 

Re: [sqlalchemy] mapper on a temporary class

2021-01-26 Thread Kent Bower
Thanks a ton for your responses.

Do all the normal columns of an aliased class need to match the ad-hoc
select to which I map the alias?

> oh if the class doesn't have a mapper, then definitely, just make ad-hoc
subclasses of it and map to those.  Vastly easier that way.

Mapping to a subclass would be different from mapping directly to an ad-hoc
class?  (Mostly I’m concerned whether the very act of adding a mapper to a
class will keep the reference and prevent its garbage collection or in some
other way modify the “main” mappers, especially if this is done via a
thread. Like, would that modify the compiled mappers for the entire
process... these are the things running through my head.)



Re: [sqlalchemy] mapper on a temporary class

2021-01-26 Thread Kent Bower
I should have given these details from the get-go:  the use case is a
specialized select() (dynamically built) which would be extremely
convenient to map relationships against for convenience in subquery
loading, etc. So, the class would not already have a mapper.  Can I pass
non_primary=True anyway, or won’t this work?




Re: [sqlalchemy] connection close() questions

2020-05-14 Thread Kent Bower
Returned to pool in rolled back state now, thanks.

However, the script I sent in this post now hits the "SAWarning: Reset
agent is not active.  This should not occur unless there was already a
connectivity error in progress." on the conn.close() call.

Did you expect that because my usage pattern is "illegal," so to speak?
(The warning isn't quite accurate regarding "... unless there was already a
connectivity error in progress.")


On Wed, May 13, 2020 at 1:21 PM Mike Bayer  wrote:

> this is getting released today in any case, so just look for any more
> warnings or conditions like this.  the most important part is getting the
> test coverage in so that as I refactor for 1.4 / 2.0 the behavioral
> contract is maintained.  thanks!
>
>
>
> On Wed, May 13, 2020, at 1:16 PM, Kent Bower wrote:
>
> Very good, will do when I find time.
>
> Thank you!
>
>
> On Wed, May 13, 2020 at 1:07 PM Mike Bayer 
> wrote:
>
>
> feel free to test the patch at:
>
> https://gerrit.sqlalchemy.org/c/sqlalchemy/sqlalchemy/+/1965
>
> this patch includes that if the transaction state at the engine level gets
> screwed up, the pool will warn and still make sure it does a real
> rollback.  You should not see this warning, however.
>
> in 2.0, the whole "reset" logic is simplified so that none of this
> complexity will be there.
>
>
>
> On Wed, May 13, 2020, at 11:59 AM, Kent Bower wrote:
>
> LOL, you're welcome, I'm such a great tester, aren't I?
>
> Anyway, everything after 0.9.2 behaved this way.  0.9.1 did a rollback.
>
> On Wed, May 13, 2020 at 11:54 AM Mike Bayer 
> wrote:
>
>
> nevermind, you've managed to find a case that trips it up for the
> connection pool
>
> release today
>
>
>
> On Wed, May 13, 2020, at 11:51 AM, Mike Bayer wrote:
>
>
>
> On Wed, May 13, 2020, at 11:39 AM, Kent Bower wrote:
>
> In this script, conn.close() does *not *call rollback on the
> transaction.  It isn't just a logging issue as I've verified from the
> database that the session was not rolled back.
>
>
> I can confirm that in master only, where things have changed
> dramatically.   It should not be the case for any released version; please
> confirm.
>
>
>
> On Wed, May 13, 2020 at 11:31 AM Mike Bayer 
> wrote:
>
>
> Haven't looked deeply, but so far what you need to know is that
> conn.close() *ALWAYS* rolls back the transaction, just not at the Engine
> level; it's at the connection pool level, so you won't see it when logging /
> event hooking on the Engine.  Turn on echo_pool and you will see this, in
> modern versions:
>
> 2020-05-13 11:25:45,106 DEBUG sqlalchemy.pool.impl.QueuePool Connection <... dbname=test host=localhost', closed: 0> being returned to pool
> 2020-05-13 11:25:45,107 DEBUG sqlalchemy.pool.impl.QueuePool Connection <... dbname=test host=localhost', closed: 0> rollback-on-return, via agent
>
> your DBSession is not going to close the connection because you have it
> bound directly to that connection, rather than to the engine, so it assumes
> it is participating in a larger transaction.   1.4 does amend this behavior
> to be more clear-cut, as we are doing away with the "nested" behaviors of
> Connection.  So yes, I would not be relying upon DBSession.close() as a
> means of transaction control if the session is bound to a connection
> directly.  If the session is bound to a connection, I would advise ensuring
> that connection is in a transaction on the outside that you are managing.
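A minimal sketch of the echo_pool suggestion above; the SQLite URL is an illustrative assumption, not from the thread:

```python
# Hedged sketch: enable connection-pool logging so the "being returned to
# pool" / "rollback-on-return" messages become visible when a connection
# is closed.  Uses an in-memory SQLite database purely for illustration.
import logging
from sqlalchemy import create_engine

logging.basicConfig()  # make the sqlalchemy.pool logger output visible

engine = create_engine("sqlite://", echo_pool="debug")
conn = engine.connect()
conn.close()  # the pool logs the connection's return and reset (rollback)
```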
>
> On Wed, May 13, 2020, at 10:51 AM, Kent wrote:
>
> Mike, et al.,
>
> I've got some questions about closing connections.  I suspect my framework
> may be at fault, but there is potentially a sqlalchemy issue here as well.
>
> See attached script with nested transaction and explicit
> connection.close().
>
> Things are even more complex because versions have handled this
> differently in the past:
>
>- on rel_0_9_1 and earlier, the conn.close() *always* actually emitted
>a DBAPI ROLLBACK, but on rel_0_9_2+, the previous call to .begin_nested()
>now prevents the DBAPI ROLLBACK call, even though the close() is on the
>connection itself.  I'm not sure if that was an intended change, but it
>seems .close() on a connection should always cause ROLLBACK, no?
>- on rel_1_3_9 and earlier, this code
>raises sqlalchemy.exc.ResourceClosedError on the last DBSession.close() as
>it invokes the registered 'rollback' event with an already-closed
>connection, but on current master (1.4.0b1) there is no exception since a
>rollback isn't attempted, leaving the db connection in idle transaction.
>
>
> On all vers

Re: [sqlalchemy] connection close() questions

2020-05-13 Thread Kent Bower
Very good, will do when I find time.

Thank you!



Re: [sqlalchemy] connection close() questions

2020-05-13 Thread Kent Bower
LOL, you're welcome, I'm such a great tester, aren't I?

Anyway, everything after 0.9.2 behaved this way.  0.9.1 did a rollback.

Re: [sqlalchemy] connection close() questions

2020-05-13 Thread Kent Bower
In this script, conn.close() does *not *call rollback on the transaction.
It isn't just a logging issue as I've verified from the database that the
session was not rolled back.

On Wed, May 13, 2020 at 11:31 AM Mike Bayer 
wrote:

> Haven't looked deeply, but so far what you need to know is that
> conn.close() *ALWAYS* rolls back the transaction, just not at the Engine
> level; it's at the connection pool level, so you won't see it when logging /
> event hooking on the Engine.  Turn on echo_pool and you will see this, in
> modern versions:
>
> 2020-05-13 11:25:45,106 DEBUG sqlalchemy.pool.impl.QueuePool Connection <... dbname=test host=localhost', closed: 0> being returned to pool
> 2020-05-13 11:25:45,107 DEBUG sqlalchemy.pool.impl.QueuePool Connection <... dbname=test host=localhost', closed: 0> rollback-on-return, via agent
>
> your DBSession is not going to close the connection because you have it
> bound directly to that connection, rather than to the engine, so it assumes
> it is participating in a larger transaction.   1.4 does amend this behavior
> to be more clear-cut, as we are doing away with the "nested" behaviors of
> Connection.  So yes, I would not be relying upon DBSession.close() as a
> means of transaction control if the session is bound to a connection
> directly.  If the session is bound to a connection, I would advise ensuring
> that connection is in a transaction on the outside that you are managing.
>
> On Wed, May 13, 2020, at 10:51 AM, Kent wrote:
>
> Mike, et al.,
>
> I've got some questions about closing connections.  I suspect my framework
> may be at fault, but there is potentially a sqlalchemy issue here as well.
>
> See attached script with nested transaction and explicit
> connection.close().
>
> Things are even more complex because versions have handled this
> differently in the past:
>
>- on rel_0_9_1 and earlier, the conn.close() *always* actually emitted
>a DBAPI ROLLBACK, but on rel_0_9_2+, the previous call to .begin_nested()
>now prevents the DBAPI ROLLBACK call, even though the close() is on the
>connection itself.  I'm not sure if that was an intended change, but it
>seems .close() on a connection should always cause ROLLBACK, no?
>- on rel_1_3_9 and earlier, this code
>raises sqlalchemy.exc.ResourceClosedError on the last DBSession.close() as
>it invokes the registered 'rollback' event with an already-closed
>connection, but on current master (1.4.0b1) there is no exception since a
>rollback isn't attempted, leaving the db connection in idle transaction.
>
>
> On all versions since rel_0_9_1, even after both of the script's finally
> clauses (close() statements) but before the program terminates, *the
> transaction is still left in transaction in the database, though the
> connection's been checked back into the pool.*
>
> As far as whether my code here is badly formed, my question is: is it
> wrong to mix session closing and connection closing or should that be fine?
>
> (My actual application is obviously more complex, with zope.sqlalchemy &
> transaction and frameworks; I boiled it down to this script for demo
> purposes and removed those libraries, making this code look weirder.)
>
> Thanks in advance!
> Kent
>
>
>
> *Attachments:*
>
>- connection-close.py
>
>


Re: [sqlalchemy] Re: deferred column_properties should probably not be expired unless they were already loaded

2017-05-10 Thread Kent Bower
Thanks very much!

On Wed, May 10, 2017 at 2:24 PM, mike bayer 
wrote:

> this is all patched in 1.2, your original test works too.
>
> The fix here is a little too intricate for 1.1 right now as this is a very
> long-standing bug (goes back to 0.7 at least and probably further) and
> 1.1 is getting near maintenance mode.
>
>
>
> On 05/10/2017 01:48 PM, mike bayer wrote:
>
>> nevermind, the issue is at
>>
>> https://bitbucket.org/zzzeek/sqlalchemy/issues/3984/deferred-column_property-gets-set-to
>>
>> the fix is not as obvious as that, that particular check is assuming a
>> column_property() where its value was never present in __dict__ in the
>> first place, so it needs to be marked "expired".
>>
>>
>> On 05/10/2017 01:38 PM, Kent wrote:
>>
>>> The regular columns seem to expire and reload properly without issue.
>>>  (Is that what you're asking?)
>>>
>>> You want me to submit a PR changing:
>>>
>>> if p.expire_on_flush or p.key not in state.dict
>>>
>>> to
>>>
>>> if p.expire_on_flush and p.key in state.dict
>>>
>>> ?
>>>
>>> (If so, which branch?)
>>>
>>>
>>> On Wednesday, May 10, 2017 at 12:55:45 PM UTC-4, Mike Bayer wrote:
>>>
>>> so you can confirm this is only for custom SQL + column_property(),
>>> not
>>> a regular column right?  definitely a bug for 1.2 if you can post it
>>> up
>>>
>>>
>>> On 05/10/2017 12:37 PM, Kent wrote:
>>>  > I'm thinking that should be
>>>  > "if p.expire_on_flush and p.key in state.dict"
>>>  >
>>>  >
>>>  > On Wednesday, May 10, 2017 at 11:35:30 AM UTC-4, Kent wrote:
>>>  >
>>>  > deferred column_properties may be less-efficient subquery
>>> selects
>>>  > (and thus marked deferred).  When a flush occurs that updates
>>> an
>>>  > object, any read-only column_properties are marked as
>>> expired, even
>>>  > if they weren't even loaded.  This means if the object needs
>>> to be
>>>  > refreshed, all these deferred column properties are loaded.
>>>  >
>>>  > We probably want the behavior to only expire read-only
>>> attributes
>>>  > that were actually loaded, right?
>>>  >
>>>  > See attached script.  This behavior is as of 1.1.1
>>>  >
>>>  > Thoughts?
>>>  >
>>>  >
>>>
>>

Re: [sqlalchemy] Re: deferred column_properties should probably not be expired unless they were already loaded

2017-05-10 Thread Kent Bower
If it was never present in __dict__, why does it need to be marked as expired
after an insert or update?  If it's not in __dict__ and then referenced, won't
it load anyway, whether or not it is marked as expired?
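(Side note for future readers: this expiration can also be avoided per-property.
Below is a minimal, hypothetical sketch, assuming a modern SQLAlchemy (1.4+) and
invented Parent/Child names, of a deferred, read-only column_property marked
expire_on_flush=False so a flush neither expires nor reloads it:)

```python
from sqlalchemy import (Column, ForeignKey, Integer, String, create_engine,
                        func, inspect, select)
from sqlalchemy.orm import Session, column_property, declarative_base

Base = declarative_base()

class Child(Base):
    __tablename__ = "child"
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey("parent.id"))

class Parent(Base):
    __tablename__ = "parent"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    # read-only subquery property: deferred, and never expired by a flush
    child_count = column_property(
        select(func.count(Child.id))
        .where(Child.parent_id == id)
        .correlate_except(Child)
        .scalar_subquery(),
        deferred=True,
        expire_on_flush=False,
    )

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    p = Parent(name="a")
    session.add(p)
    session.flush()
    session.add_all([Child(parent_id=p.id), Child(parent_id=p.id)])
    session.commit()

with Session(engine) as session:
    p = session.query(Parent).first()
    deferred_before_flush = "child_count" in inspect(p).unloaded
    p.name = "b"
    session.flush()
    # still unloaded: the flush did not expire it, so no subquery reload
    deferred_after_flush = "child_count" in inspect(p).unloaded
    count = p.child_count  # loads only now, on first access
```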


On Wed, May 10, 2017 at 1:48 PM, mike bayer 
wrote:

> nevermind, the issue is at
>
> https://bitbucket.org/zzzeek/sqlalchemy/issues/3984/deferred-column_property-gets-set-to
>
> the fix is not as obvious as that, that particular check is assuming a
> column_property() where its value was never present in __dict__ in the
> first place, so it needs to be marked "expired".
>
>
> On 05/10/2017 01:38 PM, Kent wrote:
>
>> The regular columns seem to expire and reload properly without issue.
>>  (Is that what you're asking?)
>>
>> You want me to submit a PR changing:
>>
>> if p.expire_on_flush or p.key not in state.dict
>>
>> to
>>
>> if p.expire_on_flush and p.key in state.dict
>>
>>
>> ?
>>
>> (If so, which branch?)
>>
>>
>> On Wednesday, May 10, 2017 at 12:55:45 PM UTC-4, Mike Bayer wrote:
>>
>> so you can confirm this is only for custom SQL + column_property(),
>> not
>> a regular column right?  definitely a bug for 1.2 if you can post it
>> up
>>
>>
>> On 05/10/2017 12:37 PM, Kent wrote:
>>  > I'm thinking that should be
>>  > "if p.expire_on_flush and p.key in state.dict"
>>  >
>>  >
>>  > On Wednesday, May 10, 2017 at 11:35:30 AM UTC-4, Kent wrote:
>>  >
>>  > deferred column_properties may be less-efficient subquery
>> selects
>>  > (and thus marked deferred).  When a flush occurs that updates
>> an
>>  > object, any read-only column_properties are marked as
>> expired, even
>>  > if they weren't even loaded.  This means if the object needs
>> to be
>>  > refreshed, all these deferred column properties are loaded.
>>  >
>>  > We probably want the behavior to only expire read-only
>> attributes
>>  > that were actually loaded, right?
>>  >
>>  > See attached script.  This behavior is as of 1.1.1
>>  >
>>  > Thoughts?
>>  >
>>  >
>>
>


Re: [sqlalchemy] Re: deferred column_properties should probably not be expired unless they were already loaded

2017-05-10 Thread Kent Bower
The regular columns seem to expire and reload properly without issue.  (Is
that what you're asking?)

You want me to submit a PR changing:

if p.expire_on_flush or p.key not in state.dict

to

if p.expire_on_flush and p.key in state.dict

?

On Wed, May 10, 2017 at 12:55 PM, mike bayer 
wrote:

> so you can confirm this is only for custom SQL + column_property(), not a
> regular column right?  definitely a bug for 1.2 if you can post it up
>
>
> On 05/10/2017 12:37 PM, Kent wrote:
>
>> I'm thinking that should be
>> "if p.expire_on_flush and p.key in state.dict"
>>
>>
>> On Wednesday, May 10, 2017 at 11:35:30 AM UTC-4, Kent wrote:
>>
>> deferred column_properties may be less-efficient subquery selects
>> (and thus marked deferred).  When a flush occurs that updates an
>> object, any read-only column_properties are marked as expired, even
>> if they weren't even loaded.  This means if the object needs to be
>> refreshed, all these deferred column properties are loaded.
>>
>> We probably want the behavior to only expire read-only attributes
>> that were actually loaded, right?
>>
>> See attached script.  This behavior is as of 1.1.1
>>
>> Thoughts?
>>
>>
>



Re: [sqlalchemy] inspect a scalar relationship property when it is loaded

2016-10-28 Thread Kent Bower
The load() event would also capture if it were subqueryload()ed, right?

And overriding Query.__iter__ wouldn't catch the cases where the scalar is
a "use_get" lookup that was already in session.identity_map...  No good way
to intercept attributes.set_committed_value() for that, is there?
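(For future readers, a rough sketch of the load() event approach Mike suggests
below, with invented Parent/Child names. It only sees relationships already
populated when the instance finishes loading, e.g. via joinedload, and will not
catch later lazy loads:)

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine, event
from sqlalchemy.orm import Session, declarative_base, joinedload, relationship

Base = declarative_base()

class Parent(Base):
    __tablename__ = "parent"
    id = Column(Integer, primary_key=True)
    child_id = Column(Integer, ForeignKey("child.id"))
    childobj = relationship("Child")

class Child(Base):
    __tablename__ = "child"
    id = Column(Integer, primary_key=True)

seen = []

@event.listens_for(Parent, "load")
def on_load(target, context):
    # eagerly loaded relationships are already in __dict__ at this point
    child = target.__dict__.get("childobj")
    if child is not None:
        seen.append(child)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as s:
    s.add(Parent(childobj=Child()))
    s.commit()

with Session(engine) as s:
    p = s.query(Parent).options(joinedload(Parent.childobj)).first()
```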

On Fri, Oct 28, 2016 at 11:21 AM, mike bayer 
wrote:

>
>
> On 10/28/2016 10:48 AM, Kent wrote:
>
>> @validates and 'set' attribute events will only fire when the /user/
>> sets a property (setattr), not when initially loaded by the orm.
>>
>> Is there a way to intercept (for inspection) a scalar relationship
>> property instance when it is loaded?  I don't think the 'load' event
>> will work because I won't know if the current instance is being loaded
>> in the context of setting an instrumented attribute.
>>
>> For example:
>>
>> If I have a parent object, I want to know when parent.childobj is
>> populated by the orm framework (whether on a lazy load or joined load).
>>
>> Is that possible?
>>
>> Thanks in advance!
>>
>
> based on the laws of twos, being that any random issue that has never been
> asked ever for ten years will suddenly be asked for twice within two days,
> the answer is no!   the event proposed would be collection_populate, at the
> same time we'd add scalar_populate, see
> https://groups.google.com/d/msg/sqlalchemy/IfuUwUgKwW0/5ZgPmpAvBAAJ.
>
> Interim solution is to subclass Query and override __iter__.   The load()
> event will get you eagerly-loaded attributes though, set it on "parent"'s
> class and then inspect "childobj".
>
>
>
>
>
>



Re: [sqlalchemy] suppress echo of INSERT/UPDATE large binary data

2016-08-18 Thread Kent Bower
I attempted to search for such an enhancement but obviously failed to find
it.   Patching the changeset in was fairly straightforward.

Thanks very much Jonathan and Mike!!



On Wed, Aug 17, 2016 at 5:41 PM, Mike Bayer 
wrote:

>
>
> On 08/17/2016 01:25 PM, Kent wrote:
>
>> Generally, echo=True for log level of INFO is very helpful for log files.
>>
>> But on INSERT/UPDATE of a Binary column (at least with Oracle, BLOB) it
>> sure would be nice to skip the logging of the sometimes massive binary
>> data.
>>
>> Is this possible?
>>
>
> http://docs.sqlalchemy.org/en/latest/changelog/migration_11.html#large-parameter-and-row-values-are-now-truncated-in-logging-and-exception-displays
>
> until you can upgrade write a logging output filter...
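(For future readers stuck on pre-1.1 versions, the output filter might look
something like the sketch below. What exactly shows up in record.args varies by
SQLAlchemy version, so the introspection here is an assumption to verify against
your own logs; attach the filter to the handler that receives the engine's
records:)

```python
import logging

MAX_LEN = 64  # longest parameter value we allow into the log

def _shorten(value):
    # replace oversized string/bytes values with a truncated preview
    if isinstance(value, (str, bytes)) and len(value) > MAX_LEN:
        return "%r... (%d bytes truncated)" % (value[:MAX_LEN],
                                               len(value) - MAX_LEN)
    return value

class TruncateLargeParams(logging.Filter):
    """Shrink huge bind-parameter values before the record is formatted."""
    def filter(self, record):
        if isinstance(record.args, dict):
            record.args = {k: _shorten(v) for k, v in record.args.items()}
        elif isinstance(record.args, tuple):
            record.args = tuple(_shorten(v) for v in record.args)
        return True

# filters on a handler also see records from child loggers, so this catches
# "sqlalchemy.engine.base.Engine" records as well
handler = logging.StreamHandler()
handler.addFilter(TruncateLargeParams())
logging.getLogger("sqlalchemy.engine").addHandler(handler)
```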
>
>
>
>
>> Thanks,
>> Kent
>>
>



Re: [sqlalchemy] Help with a custom "seconds_interval()" construct

2016-05-31 Thread Kent Bower
Thanks Mike.

I'm not sure FunctionElement is most appropriate or if Interval() is the
best "type," but for future readers, as a recipe, this seems to work:

from datetime import timedelta

from sqlalchemy import Interval, bindparam
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.expression import FunctionElement


class seconds_interval(FunctionElement):
    type = Interval()
    name = 'seconds'

    def __init__(self, *args, **kwargs):
        FunctionElement.__init__(self, *args, **kwargs)
        self.seconds = args[0]


@compiles(seconds_interval)
def compile_seconds_interval(element, compiler, **kw):
    if compiler.dialect.name == 'oracle' and \
            compiler.dialect.server_version_info < (9,):
        # Oracle 8 has no INTERVAL type: add a fraction of a day instead
        return "%s/86400" % compiler.process(element.clauses)
    else:
        # use timedelta as bind param
        td = timedelta(seconds=element.seconds)
        return compiler.process(bindparam('seconds', value=td, unique=True))



On Sun, May 29, 2016 at 8:25 PM, Mike Bayer 
wrote:

>
>
> On 05/28/2016 09:44 AM, Kent wrote:
>
>> I'm interested in being able to use second time intervals on PostgreSQL,
>> Oracle 8 and modern Oracle versions, agnostically.
>>
>> The native python timedelta works great for the postgres and cx_Oracle
>> drivers.  However cx_Oracle connected to Oracle 8 won't support this,
>> so I'm building my own construct to help.
>>
>> We want support for something like this:
>>
>> current_timestamp() + timedelta(seconds=15)
>>
>> (Assume "current_timestamp()" works on all these databases -- I've
>> already built that.)
>>
>> PostgreSQL, works great with python's timedelta:
>>
>> >>> DBSession.execute(select([current_timestamp() + timedelta(seconds=15)])).scalar()
>> MainThread: 09:15:20,871 INFO  [sqlalchemy.engine.base.Engine] BEGIN
>> (implicit)
>> MainThread: 09:15:20,872 INFO  [sqlalchemy.engine.base.Engine] SELECT
>> localtimestamp + %(param_1)s AS anon_1
>> MainThread: 09:15:20,872 INFO  [sqlalchemy.engine.base.Engine]
>> {'param_1': datetime.timedelta(0, 15)}
>> datetime.datetime(2016, 5, 28, 9, 15, 35, 872999)
>>
>> Modern Oracle, works great with python's timedelta:
>>
>> >>> DBSession.execute(select([current_timestamp() + timedelta(seconds=15)])).scalar()
>> MainThread: 09:28:15,009 INFO  [sqlalchemy.engine.base.Engine] BEGIN
>> (implicit)
>> MainThread: 09:28:15,010 INFO  [sqlalchemy.engine.base.Engine] SELECT
>> systimestamp + :param_1 AS anon_1 FROM DUAL
>> MainThread: 09:28:15,010 INFO  [sqlalchemy.engine.base.Engine]
>> {'param_1': datetime.timedelta(0, 15)}
>> datetime.datetime(2016, 5, 28, 9, 28, 30, 11530)
>>
>> Oracle 8: no
>>
>> >>> DBSession.execute(select([current_timestamp() + timedelta(seconds=15)])).scalar()
>> MainThread: 08:29:37,659 INFO  [sqlalchemy.engine.base.Engine] SELECT
>> sysdate + :param_1 AS anon_1 FROM DUAL
>> MainThread: 08:29:37,659 INFO  [sqlalchemy.engine.base.Engine]
>> {'param_1': datetime.datetime(1970, 1, 1, 0, 0, 15)}
>> Traceback (most recent call last):
>>   File "", line 1, in 
>>   File
>>
>> "/home/train/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.7.607kbdev-py2.6-linux-x86_64.egg/sqlalchemy/orm/scoping.py",
>> line 114, in do
>> return getattr(self.registry(), name)(*args, **kwargs)
>>   File
>>
>> "/home/train/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.7.607kbdev-py2.6-linux-x86_64.egg/sqlalchemy/orm/session.py",
>> line 804, in execute
>> clause, params or {})
>>   File
>>
>> "/home/train/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.7.607kbdev-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py",
>> line 1450, in execute
>> params)
>>   File
>>
>> "/home/train/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.7.607kbdev-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py",
>> line 1583, in _execute_clauseelement
>> compiled_sql, distilled_params
>>   File
>>
>> "/home/train/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.7.607kbdev-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py",
>> line 1697, in _execute_context
>> context)
>>   File
>>
>> "/home/train/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.7.607kbdev-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py",
>> line 1690, in _execute_context
>> context)
>>   File
>>
>> "/home/train/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.7.607kbdev-py2.6-linux-x86_64.egg/sqlalchemy/engine/default.py",
>> line 335, in do_execute
>> cursor.execute(statement, parameters)
>> DatabaseError: (DatabaseError) ORA-00975: date + date not allowed
>>  'SELECT sysdate + :param_1 AS anon_1 FROM DUAL' {'param_1':
>> datetime.datetime(1970, 1, 1, 0, 0, 15)}
>>
>>
>> I've already constructed a class called "seconds_interval" that works
>> for Oracle 8, but I don't know how to use the default behavior when not
>> Oracle 8:
>>
>> Oracle 8 usage:
>>
>> >>> DBSession.execute(select([current_timestamp() + seconds_interval(15)])).scalar()
>> MainThread: 08:37:06,539 INFO  [sqlalchemy.engine.base.Engine] SELECT
>> sysdate + :seconds_1/86400 AS anon_1 FROM DUAL
>> MainThread: 08:37:06,539 INFO  [sqlalchemy.engine.base.Engine]
>> {'seconds_1': 15}
>> 

Re: [sqlalchemy] Guaranteeing same connection for scoped session

2016-04-14 Thread Kent Bower
Yeah, it seems to me that if you pass a *specific connection* to a
sessionmaker for some (whatever) reason, that sessionmaker shouldn't ever
silently take a different one.

I'll need to work on detecting or sabotaging new connections from a
sessionmaker which was passed a specific connection.  (I know the obvious
question might be: why even use a sessionmaker/scoped session? and the
answer is that those parts are all well buried in the framework, along with
the scopedsession.remove() call, so under certain circumstances only, I want
the session to be guaranteed to use only the one connection.)

Any help on how to sabotage a sessionmaker/scoped_session to prevent later
creation of a new connection would be appreciated.  Thanks!
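(One sketch of such a sabotage, assuming SQLAlchemy 1.4+: tag the pinned
Connection's .info dict, then use the session's after_begin event to refuse any
transaction that starts on a different connection. The "pinned" key is just an
invented marker:)

```python
from sqlalchemy import create_engine, event, text
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine("sqlite://")
pinned = engine.connect()
pinned.info["pinned"] = True  # mark the one connection we allow

Session = scoped_session(sessionmaker(bind=pinned))

@event.listens_for(Session, "after_begin")
def guard(session, transaction, connection):
    # fires when the session begins a transaction on a connection;
    # blow up if the registry handed us anything but the pinned one
    if not connection.info.get("pinned"):
        raise RuntimeError("session is not using the pinned connection")

value = Session.execute(text("select 1")).scalar()
```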

On Thu, Apr 14, 2016 at 12:06 AM, Jonathan Vanasco 
wrote:

>
>
> On Wednesday, April 13, 2016 at 7:25:16 PM UTC-4, Mike Bayer wrote:
>>
>> Well scopedsession.remove throws away the session, so yeah either don't
>> call that , or set up the connection immediately on the next session.
>
>
> I thought "this is obvious, the session is closed on `remove`", but then
> dug into the docs -- and I can see how this is misleading.
>
> The narrative docs (
> http://docs.sqlalchemy.org/en/latest/orm/session_api.html#session-and-sessionmaker
> )
>
> Session = sessionmaker()
> # bind an individual session to a connection
> sess = Session(bind=connection)
>
>
> It's easy to miss the importance of "individual"
>
> And then the API makes it seem like a bind(connection) would persist via
> sessionmaker.
>
>
> http://docs.sqlalchemy.org/en/latest/orm/session_api.html#sqlalchemy.orm.session.sessionmaker.__init__
>
>- bind – an Engine or other Connectable with which newly created
>  Session objects will be associated.
>
>
> http://docs.sqlalchemy.org/en/latest/orm/session_api.html#sqlalchemy.orm.session.Session
>
>
>- bind – An optional Engine or Connection to which this Session should
>  be bound. When specified, all SQL operations performed by this session
>  will execute via this connectable.
>
>
> Unless one were more familiar, the `remove` behavior wouldn't be
> apparent... and the notion of a particular connection being bound to a
> Session Maker might seem like a good thing (it's actually not, because you
> would inherently preclude the utility of connection pools, aside from
> other effects)
>



Re: [sqlalchemy] Guaranteeing same connection for scoped session

2016-04-13 Thread Kent Bower
Will the connection.info dict always be new if a new
underlying raw connection has been grabbed? (Such that I can reliably
detect this situation?)



On Wednesday, April 13, 2016, Mike Bayer <clas...@zzzcomputing.com> wrote:

> Well scopedsession.remove throws away the session, so yeah either don't
> call that , or set up the connection immediately on the next session.
>
> On Wednesday, April 13, 2016, Kent Bower <k...@bowermail.net> wrote:
>
>> About a year ago you helped me ensure my scoped session gets the same
>> connection to the database, which might be important.
>>
>> I found out using "bind=connection" doesn't guarantee the session_maker
>> uses that connection if something went wrong with the session and
>> ScopedSession.remove() was called. Is there a way to guarantee this?
>>
>> See attached script that fails on version 1.0.12
>>
>> Is this the intended behavior when sessionmaker has a specific connection
>> as bind?
>>
>>
>>
>> On Mon, Mar 23, 2015 at 12:40 PM, Michael Bayer <mike...@zzzcomputing.com
>> > wrote:
>>
>>>
>>>
>>> Kent <jkentbo...@gmail.com> wrote:
>>>
>>> > In cases where we interact with the database session (a particular
>>> Connection) to, for example, obtain an application lock which is checked
>>> out from database for the lifetime of the database session (not just the
>>> duration of a transaction), it is important that I guarantee future scoped
>>> session instances get the same connection (and, for example, the
>>> pool_recycle or something else has thrown out that connection and grabbed a
>>> new one).
>>> >
>>> > Please advise me where I can best implement this guarantee.  A Session
>>> subclass's connection() method seems it might be the appropriate place, but
>>> let me know if there is a better recipe.
>>>
>>> you’d want to create that Session associated with the Connection
>>> directly:
>>>
>>> my_session = scoped_session(bind=some_connection)
>>>
>>> then of course make sure you .close() it and .close() the connection at
>>> the end of the use of that session.
>>>
>>>
>>>
>>> >
>>> > The Session.connection() method's docs say:
>>> > "If this Session is configured with autocommit=False, either the
>>> Connection corresponding to the current transaction is returned, or if no
>>> transaction is in progress, a new one is begun and the Connection returned
>>> (note that no transactional state is established with the DBAPI until the
>>> first SQL statement is emitted)."
>>> >
>>> > If the session is one registered in my scoped registry, I'd like to
>>> always return the same connection to guarantee I am using the one with the
>>> database-side checked-out application lock.
>>> >
>>> > What's my best option?
>>> >
>>> > Thanks much!
>>> >
>>>
>>>
>>

Re: [sqlalchemy] Guaranteeing same connection for scoped session

2016-04-13 Thread Kent Bower
About a year ago you helped me ensure my scoped session gets the same
connection to the database, which might be important.

I found out using "bind=connection" doesn't guarantee the session_maker
uses that connection if something went wrong with the session and
ScopedSession.remove() was called. Is there a way to guarantee this?

See attached script that fails on version 1.0.12

Is this the intended behavior when sessionmaker has a specific connection
as bind?



On Mon, Mar 23, 2015 at 12:40 PM, Michael Bayer 
wrote:

>
>
> Kent  wrote:
>
> > In cases where we interact with the database session (a particular
> Connection) to, for example, obtain an application lock which is checked
> out from the database for the lifetime of the database session (not just the
> duration of a transaction), it is important that I guarantee future scoped
> session instances get the same connection (and that, for example,
> pool_recycle or something else hasn't thrown out that connection and grabbed a
> new one).
> >
> > Please advise me where I can best implement this guarantee.  A Session
> subclass's connection() method seems it might be the appropriate place, but
> let me know if there is a better recipe.
>
> you’d want to create that Session associated with the Connection directly:
>
> my_session = scoped_session(bind=some_connection)
>
> then of course make sure you .close() it and .close() the connection at
> the end of the use of that session.
>
>
>
> >
> > The Session.connection() method's docs say:
> > "If this Session is configured with autocommit=False, either the
> Connection corresponding to the current transaction is returned, or if no
> transaction is in progress, a new one is begun and the Connection returned
> (note that no transactional state is established with the DBAPI until the
> first SQL statement is emitted)."
> >
> > If the session is one registered in my scoped registry, I'd like to
> always return the same connection to guarantee I am using the one with the
> database-side checked-out application lock.
> >
> > What's my best option?
> >
> > Thanks much!
> >

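For background on why ScopedSession.remove() can quietly hand back a different connection: scoped_session is essentially a per-thread registry in front of a session factory, and remove() discards the current session so that the next access builds a fresh one. A minimal stdlib sketch of that registry pattern (the names here are illustrative, not SQLAlchemy's internals):

```python
import threading

class ScopedRegistry(object):
    """Toy per-thread registry, like the one behind scoped_session."""
    def __init__(self, factory):
        self.factory = factory
        self.local = threading.local()

    def __call__(self):
        # return this thread's object, creating it on first access
        try:
            return self.local.obj
        except AttributeError:
            self.local.obj = self.factory()
            return self.local.obj

    def remove(self):
        # discard this thread's object; the next call builds a fresh one
        try:
            del self.local.obj
        except AttributeError:
            pass

registry = ScopedRegistry(dict)
first = registry()
assert first is registry()       # stable until removed
registry.remove()
assert first is not registry()   # remove() forced a new object
```

In the attached script, this is exactly the sequence that surprises: after remove(), the replacement session checks a connection out again, and nothing guarantees it is the same DBAPI connection as before.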
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.exc import OperationalError

eng = create_engine('postgresql://salespylot:salespylot@localhost:5444/sqla', 
echo=True)
conn=eng.connect()
# bind to specific connection
Session = scoped_session(sessionmaker(bind=conn))

pid = conn.execute("select pg_backend_pid()").scalar()
raw_conn_addr = id(Session.connection().connection.connection)

metadata = MetaData(eng)
rocks_table = Table("rocks", metadata,
    Column("id", Integer, primary_key=True),
)
class Rock(object):
    pass
mapper(Rock, rocks_table)
metadata.create_all()


Session.query(Rock).all()

# See if normally get same connection
Session.remove()
Session.query(Rock).all()

# all is good: we got original connection again:
assert pid == Session.connection().execute("select pg_backend_pid()").scalar()
assert raw_conn_addr == id(Session.connection().connection.connection)

# something drastic happens to conn
aux_conn=eng.connect()
aux_conn.execute(text("select pg_terminate_backend(:pid)"), 
pid=pid)

try:
Session.query(Rock).all()
except OperationalError as e:
print e
# Error, framework automatically may issue this:
Session.remove()

Session.query(Rock).all()

# New connection has been created, didn't anticipate this...
newpid = Session.connection().execute("select pg_backend_pid()").scalar()
new_addr = id(Session.connection().connection.connection)
print "%d != %d; %d != %d" % (pid, newpid, raw_conn_addr, new_addr)
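The surprise at the end of the script, a new backend pid after the old backend is terminated, is ordinary pool behavior: once the old connection dies, the next checkout hands back a different one. A toy sqlite3 analogue of the failure mode (ReconnectingSource is a made-up stand-in for the pool, not a SQLAlchemy class):

```python
import sqlite3

class ReconnectingSource(object):
    """Made-up stand-in for a pool that replaces a dead connection."""
    def __init__(self):
        self._conn = sqlite3.connect(":memory:")

    def connection(self):
        try:
            self._conn.execute("select 1")   # ping; fails once conn is closed
        except sqlite3.ProgrammingError:
            self._conn = sqlite3.connect(":memory:")  # new "backend"
        return self._conn

src = ReconnectingSource()
c1 = src.connection()
c1.close()                 # "something drastic happens to conn"
c2 = src.connection()
assert c1 is not c2        # new identity, like the new pg_backend_pid above
```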

Re: [sqlalchemy] Undefer Hybrid Attributes

2013-03-07 Thread Kent Bower
That makes sense,
Thanks,
Kent

On Mar 7, 2013, at 12:09 PM, Michael Bayer mike...@zzzcomputing.com wrote:

 
 The hybrid attribute is a Python function that is invoked when it's called.  So 
 it doesn't make sense for it to be a column property, since there is no 
 attribute to be populated.  "Undeferred" also doesn't make sense, because at the 
 instance level the hybrid already calls a local, in-Python function when 
 accessed, not a database call.
 
 A traditional column_property() can be deferred or undeferred, and when 
 called at the instance level will emit SQL to the database.
 
 If you have an attribute that should only populate via SQL, then you need to 
 just use a column_property().
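The distinction drawn above, plain Python at the instance level versus an expression at the class level, is just descriptor behavior. A stripped-down, hypothetical hybrid-style descriptor (not SQLAlchemy's actual hybrid_property) shows why there is nothing to "undefer" at the instance level:

```python
class toy_hybrid(object):
    """Hypothetical hybrid-style descriptor: instance access runs plain
    Python; class access stands in for returning a SQL expression."""
    def __init__(self, func):
        self.func = func

    def __get__(self, obj, cls):
        if obj is None:
            # class level: a real hybrid returns a SQL expression here
            return "<expression for %s>" % self.func.__name__
        # instance level: an ordinary function call, no database round trip
        return self.func(obj)

class Interval(object):
    def __init__(self, start, end):
        self.start, self.end = start, end

    @toy_hybrid
    def length(self):
        return self.end - self.start

assert Interval(5, 10).length == 5   # computed in Python, nothing deferred
assert "length" in Interval.length   # class-level "expression" stand-in
```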
 
 
 
 On Mar 7, 2013, at 11:42 AM, Kent jkentbo...@gmail.com wrote:
 
 I suppose what I'm really after is a column_property (for class level) and 
 plain descriptor (for instance level), which is exactly what Hybrid 
 attributes are meant to be, but I wanted them to be part of the mapper and 
 undeferred in some cases.
 
 On Thursday, March 7, 2013 11:36:37 AM UTC-5, Kent wrote:
 
 I notice that Hybrid Attributes don't show up as mapper properties (since 
 they are class wide instead of mapper specific, I suppose).  I couldn't 
 find documentation on whether I can undefer these?  Or can I create a 
 synonym or column_property from a hybrid attribute in the mapper?
 
  
  





Re: [sqlalchemy] Apparently redundant subqueryloads with single table inheritance

2012-06-05 Thread Kent Bower

Thank you!

On 6/5/2012 4:41 PM, Michael Bayer wrote:


On Jun 5, 2012, at 4:21 PM, Kent wrote:

I am subquery loading some related properties for a polymorphic 
inheritance use case similar to the script attached.  SQLA seems to 
be issuing several extra queries that I didn't expect and don't think 
it needs.  In the attached example, I expected a total of 4 queries 
issued for the session.query().get(), but there are 11 queries 
instead, most of them redundant.


this is ticket 2480 and has been fixed for several weeks.

0.7.8 is due as we have a memory leak issue but i wont have time for a 
release until perhaps late weekend.









Any ideas?

Thanks,
Kent


subquery_polymorphic.py







Re: [sqlalchemy] sqlalchemy supports INSERT INTO ... (SELECT .. ) ?

2012-03-30 Thread Kent Bower
Thanks for pointing me there.  As an aside, the recipe would be more 
bulletproof if it specified the columns (order).  Currently, it assumes 
sqlalchemy knows the order of the columns in the database, which it may not.


Thanks again!
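Kent's point about naming the columns can be shown with plain SQL: if the target table's physical column order differs from what the client assumes, only an explicit column list keeps INSERT ... SELECT correct (sqlite3 is used here purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table src (id integer, name text);
    create table dst (name text, id integer);  -- note: reversed column order
    insert into src values (1, 'a'), (2, 'b');
""")

# naming the columns makes the statement independent of physical order
conn.execute("insert into dst (id, name) select id, name from src")
rows = conn.execute("select id, name from dst order by id").fetchall()
assert rows == [(1, 'a'), (2, 'b')]
```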

On 3/30/2012 6:40 PM, Michael Bayer wrote:
its not built in at the moment but there is a recipe in the docs for 
@compiles


http://docs.sqlalchemy.org/en/latest/core/compiler.html#compiling-sub-elements-of-a-custom-expression-construct

kind of amazing nobody's yet contributed a patch for this, I show 
people that recipe for a few years now :)



On Mar 30, 2012, at 6:05 PM, Kent wrote:


Couldn't find answer in docs, does sqlalchemy support:

INSERT INTO ... (SELECT .. )

instead of

INSERT INTO ... VALUES...








Re: [sqlalchemy] sqlalchemy supports INSERT INTO ... (SELECT .. ) ?

2012-03-30 Thread Kent Bower

Yeah, I knew it was a hint... ;)   So many great ideas, so little time. :(

On 3/30/2012 6:58 PM, Michael Bayer wrote:

so many great ideas for the eventual contributor ! ;)

ideally it would be a method on insert() itself, 
table.insert().from(select([...])..., cols=('x', 'y', 'z')) or 
something like that.   Maybe people have suggestions.




On Mar 30, 2012, at 6:43 PM, Kent Bower wrote:

Thanks for pointing me there.  As an aside, the recipe would be more 
bulletproof if it specified the columns (order).  Currently, it 
assumes sqlalchemy knows the order of the columns in the database, 
which it may not.


Thanks again!

On 3/30/2012 6:40 PM, Michael Bayer wrote:
its not built in at the moment but there is a recipe in the docs for 
@compiles


http://docs.sqlalchemy.org/en/latest/core/compiler.html#compiling-sub-elements-of-a-custom-expression-construct

kind of amazing nobody's yet contributed a patch for this, I show 
people that recipe for a few years now :)



On Mar 30, 2012, at 6:05 PM, Kent wrote:


Couldn't find answer in docs, does sqlalchemy support:

INSERT INTO ... (SELECT .. )

instead of

INSERT INTO ... VALUES...












Re: [sqlalchemy] Single table inheritance

2012-03-21 Thread Kent Bower

That will work for me, thanks!

P.S. Make a note that the doc statement saying this will come in a future release 
should be updated.


On 3/21/2012 10:04 AM, Michael Bayer wrote:

also polymorphic_on can be any SQL expression in 0.7, like a CASE statement if 
you wanted.
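The CASE-expression discriminator mentioned above renders to SQL along these lines; the schema and values here are invented purely to illustrate the shape of such an expression, not the actual mapper configuration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table people (id integer, golf_swing integer);
    insert into people values (1, 1), (2, 0);
""")

# the kind of expression a polymorphic_on CASE statement could render to
rows = conn.execute("""
    select id,
           case when golf_swing = 1 then 'golfer' else 'person' end
    from people order by id
""").fetchall()
assert rows == [(1, 'golfer'), (2, 'person')]
```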





Re: [sqlalchemy] Half merge/save cascade support for M:N relationships

2012-02-29 Thread Kent Bower
I hadn't really thought the save-update through, my real interest was 
regarding merge.  For save-update it may not make as much sense.



On 2/29/2012 10:55 AM, Michael Bayer wrote:
I get what that would do for merge and might not be a big deal, what 
would it do for save-update?


Sent from my iPhone

On Feb 29, 2012, at 8:55 AM, Kent jkentbo...@gmail.com 
mailto:jkentbo...@gmail.com wrote:


I suspect this doesn't interest you so much, and no offense taken if 
not, but have you ever considered supporting the idea of a half 
merge/save-update cascade for many to many relationships?
The use case is where I want to merge/save-update to the secondary 
table only (collection status), but I don't want to merge/save 
changes to the actual related objects.








[sqlalchemy] expunge cascade behavior change

2012-01-31 Thread Kent Bower
Somewhere between 0.6.4 and 0.7.5, the expunge cascade behavior 
changed.  Can you help me understand what changed/point me to the 
ticket?  The attached script assertions succeed in 0.6.4 but the last 
one fails in 0.7.5.  It doesn't seem wrong, but I'm wondering what the 
behavior was defined as previously and if that was considered a bug, etc.


Thanks,
Kent


from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.orm.util import has_identity

engine = create_engine('sqlite:///', echo=True)
metadata = MetaData(engine)
Session = sessionmaker(bind=engine)

rocks_table = Table("rocks", metadata,
    Column("id", Integer, primary_key=True),
)

bugs_table = Table("bugs", metadata,
    Column("id", Integer, primary_key=True),
    Column("rockid", Integer, ForeignKey('rocks.id'),),
)

class Rock(object):
    def __repr__(self):
        return 'Rock@%d: id=[%s] in session:[%s] has_identity[%s]' % (id(self), self.__dict__.get('id'), self in session, has_identity(self))

class Bug(object):
    def __repr__(self):
        return 'Bug@%d: id=[%s] rockid[%s] with rock[%s]' % (id(self), self.__dict__.get('id'), self.__dict__.get('rockid'), self.__dict__.get('rock','not set'))


mapper(Rock, rocks_table,
properties={'bugs': relationship(Bug,
cascade='all,delete-orphan', 
backref=backref('rock',cascade='refresh-expire,expunge'))
})

mapper(Bug, bugs_table)


metadata.create_all()

session = Session()


# add a rock and bug
rock=Rock()
rock.id = 0
bug=Bug()
bug.id = 0
rock.bugs.append(bug)
session.add(rock)

session.commit()

# later... new session
session = Session()
rock = session.query(Rock).get(0)
rock.bugs.append(Bug())

assert rock in session

rock.bugs = []

assert rock in session


[sqlalchemy] session.query().get() is unsupported during flush for getting an object that was just added?

2012-01-26 Thread Kent Bower
I think I understand why, during a flush(), if I use 
session.query().get() for an item that was just added during this flush, 
I don't get the persistent object I might expect because the session 
still has it as pending even though, logically, it is already persistent.


I don't suppose you have any desire to support that, huh?  The use case 
would be related to the future ticket 
http://www.sqlalchemy.org/trac/ticket/1939 (and 
http://www.sqlalchemy.org/trac/ticket/2350).


Attached is a script demonstrating the issue I've hit.  I can work 
around it with some difficulty, but I wanted your input and thoughts.


Thanks,
Kent


# Show what happens when we use session.query().get()
# during a flush to load an object that is being inserted during the same flush
# Instead of getting what was the pending object, we get a new copy of what 
# the orm thinks is persistent and then it is detached after the flush finishes

from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy import event
from sqlalchemy.orm.util import has_identity

engine = create_engine('sqlite:///', echo=True)
metadata = MetaData(engine)
Session = sessionmaker(bind=engine)

rocks_table = Table("rocks", metadata,
    Column("id", Integer, primary_key=True),
)

bugs_table = Table("bugs", metadata,
    Column("id", Integer, primary_key=True),
    Column("rockid", Integer, ForeignKey('rocks.id'),),
)

class Rock(object):
    def __repr__(self):
        return 'Rock@%d: id=[%s] in session:[%s] has_identity[%s]' % (id(self), self.__dict__.get('id'), self in session, has_identity(self))

class Bug(object):
    def __repr__(self):
        return 'Bug@%d: id=[%s] rockid[%s] with rock[%s]' % (id(self), self.__dict__.get('id'), self.__dict__.get('rockid'), self.__dict__.get('rock','not set'))


class BugAgent(MapperExtension):
    def before_update(self, mapper, connection, instance):
        assert 'rock' not in instance.__dict__
        print "\n\n during flush"
        # after http://www.sqlalchemy.org/trac/ticket/2350, we could just reference like this:
        #instance.rock
        instance.rock = session.query(Rock).get(instance.rockid)
        #
        # we just loaded a Rock that was just inserted during this flush, so
        # it looks persistent to the orm, but the orm also has this object
        # already (still pending).  After the flush is done,
        # the pending object will be the only one in the session and the
        # object we just loaded here will be removed from the session (detached)
        #
        print "\n\n*before_update: %r\n" % instance
        assert 'rock' in instance.__dict__


mapper(Rock, rocks_table,
properties={'bugs': relationship(Bug,
cascade='all,delete-orphan', 
backref='rock')
})

mapper(Bug, bugs_table, extension=BugAgent())

@event.listens_for(Bug.rockid, 'set')
def autoexpire_rock_attribute(instance, value, oldvalue, initiator):
    if value != oldvalue:
        if instance in session and has_identity(instance):
            assert 'rock' in instance.__dict__
            print "\n\nBug.rockid changing from [%s] to [%s]..." % (oldvalue, value)
            print "**about to expire rock for %r" % instance
            session.expire(instance, ['rock'])
            print "**expired rock for %r\n" % instance
            assert 'rock' not in instance.__dict__


metadata.create_all()

session = Session()


# add a rock and bug
rock=Rock()
rock.id = 0
bug=Bug()
bug.id = 0
rock.bugs.append(bug)
session.add(rock)

session.commit()

# later... new session
session = Session()


b1 = Bug()
b1.id = 0

rock=Rock()
rock.id = 1
rock.bugs.append(b1)

print "\n\nmerge start\n"
merged = session.merge(rock)
print "\n\nmerge end\n"
print "flush\n"
session.flush()

assert 'rock' in merged.bugs[0].__dict__

# show that the pending object has become persistent
print "\n\nsession's pending obj turned persistent: %r" % session.query(Rock).get(1)

# show that the object we loaded has been detached from the session
print 'merged.bugs[0].rock (copy of same object, no longer in session): %r' % merged.bugs[0].rock



Re: [sqlalchemy] c extensions built?

2012-01-12 Thread Kent Bower

Yeah, just the two cresultproxy.so and cprocessors.so, right?

On 1/12/2012 3:34 PM, Michael Bayer wrote:

when you do the setup.py the log messages say so.

Otherwise you'd look where sqlalchemy was installed and check if you see .so 
files.


On Jan 12, 2012, at 12:35 PM, Kent wrote:


What is the easiest way to confirm that my installation has compiled/
is using the c extensions?







Re: [sqlalchemy] Re: 0.7 event migration

2012-01-10 Thread Kent Bower

funny story, here's where it was added:  
http://www.sqlalchemy.org/trac/ticket/1910 which is essentially your ticket !   
:)

I just double checked and I had patched in rfde41d0e9f70 
http://www.sqlalchemy.org/trac/changeset/fde41d0e9f70/.  Is there 
another commit that went against 1910?  For example, was there logic in 
the attachment load_on_fks.patch 
http://www.sqlalchemy.org/trac/attachment/ticket/1910/load_on_fks.patch that 
was committed?





Re: [sqlalchemy] Re: 0.7 event migration

2012-01-10 Thread Kent Bower

Thank you very much!

On 1/10/2012 11:47 AM, Michael Bayer wrote:

Code wasn't covered and is a regresssion, fixed in rd6e321dc120d.


On Jan 10, 2012, at 10:58 AM, Kent wrote:


Mike,

Old code:
==
def visit_bindparam(bindparam):
    if bindparam.key in bind_to_col:
        bindparam.value = lambda: \
            mapper._get_state_attr_by_column(
                state, dict_,
                bind_to_col[bindparam.key])
==
New code (note that 'value' is now 'callable'):
def visit_bindparam(bindparam):
    if bindparam._identifying_key in bind_to_col:
        bindparam.callable = \
            lambda: mapper._get_state_attr_by_column(
                state, dict_,
                bind_to_col[bindparam._identifying_key])
==

Now look at sql/util.py:
==
def bind_values(clause):
    v = []
    def visit_bindparam(bind):
        value = bind.value

        # evaluate callables
        if callable(value):
            value = value()

        v.append(value)

    visitors.traverse(clause, {}, {'bindparam': visit_bindparam})
    return v
==

Aren't we missing this: ?

    if bind.callable:
        value = bind.callable()

I think this is why it isn't loading the way it used to.
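The fix Kent is pointing at can be sketched in isolation. Bind below is a toy stand-in for a BindParameter, not the real class; the point is that a bind_values-style traversal must honor .callable as well as .value:

```python
class Bind(object):
    """Toy stand-in for a BindParameter carrying .value and/or .callable."""
    def __init__(self, value=None, callable_=None):
        self.value = value
        self.callable = callable_

def bind_values(binds):
    v = []
    for bind in binds:
        if bind.callable is not None:
            value = bind.callable()    # the branch the new code path requires
        else:
            value = bind.value
            if callable(value):        # old style: the value itself a lambda
                value = value()
        v.append(value)
    return v

assert bind_values([Bind(value=1),
                    Bind(value=lambda: 2),
                    Bind(callable_=lambda: 3)]) == [1, 2, 3]
```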







Re: [sqlalchemy] Re: 0.7 event migration

2012-01-09 Thread Kent Bower

On 1/9/2012 2:33 PM, Michael Bayer wrote:

On Jan 9, 2012, at 2:30 PM, Kent wrote:


i guess the patch is interacting with that load_on_pending stuff, which I probably 
added for you also.  It would be nice to really work up a new SQLAlchemy feature: a 
"detached/transient object loading" document that really describes what it is we're 
trying to do here.  If you were to write such a document, what example would you give 
as the rationale?  I know that's the hard part here, but this is often very valuable, 
to look at your internal system and genericize it into something universally desirable.

As far as such a document, would you want a trac ticket opened with my
use case in a generalized form where others may likely have the same
use case?

Hoping to not upset you here.:

My AttributeImpl.callable_ hack to set a transient state's
session_id, load the relationship, and then set it back to None works
for m2o but when it needs to load a collection (or it can't use get()
I presume), then I am hitting this return None:

class LazyLoader(AbstractRelationshipLoader):
def _load_for_state(self, state, passive):
...
...
lazy_clause = strategy.lazy_clause(state)

if pending:
bind_values = sql_util.bind_values(lazy_clause)
if None in bind_values:
return None###

q = q.filter(lazy_clause)

that means some of the columns being linked to the foreign keys on the target are None.  
If you want your lazyload to work, all the attributes need to be populated.   If you're 
hitting the "get committed" thing, and the attributes are only pending, then 
that's what that is.


But this changed from 0.6.4?




Re: [sqlalchemy] Re: 0.7 event migration

2012-01-09 Thread Kent Bower

On 1/9/2012 5:33 PM, Michael Bayer wrote:

On Jan 9, 2012, at 2:36 PM, Kent Bower wrote:


that means some of the columns being linked to the foreign keys on the target are None.  
If you want your lazyload to work, all the attributes need to be populated.   If you're 
hitting the "get committed" thing, and the attributes are only pending, then 
that's what that is.


But this changed from 0.6.4?

funny story, here's where it was added:  
http://www.sqlalchemy.org/trac/ticket/1910 which is essentially your ticket !   
:)

Except that my patched version of 0.6.4 (which I was referring to) 
already has that change from that ticket patched in.  It must be 
something else, I'm still looking...





Re: [sqlalchemy] 0.7 event migration

2011-12-24 Thread Kent Bower

Right.  And reconstruct_instance() was renamed load()?

On 12/24/2011 5:56 PM, Michael Bayer wrote:

On Dec 24, 2011, at 10:04 AM, Kent wrote:


As the migration guide suggests, I'd like to embrace the events API.
Is mapper event load() invoked at exactly the same place as the
deprecated reconstruct_instance() ?

yeah nothing has been moved.   All the places where the internals would call 
XXXExtension.xxx_event() were just replaced with self.dispatch.xxx_event(), and 
the old Extension classes are invoked via an adapter to the new system.   All 
unit tests for the extension system remain in place and haven't been modified.
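The adapter arrangement described here, old Extension methods invoked through the new dispatch, looks roughly like this; every name below is invented for illustration and is not SQLAlchemy's actual internal API:

```python
class Dispatch(object):
    """Toy event dispatcher: listeners are called in registration order."""
    def __init__(self):
        self.load = []

    def fire_load(self, instance):
        for fn in self.load:
            fn(instance)

class LegacyExtension(object):
    """Old-style extension with a reconstruct_instance() hook."""
    def reconstruct_instance(self, instance):
        instance.reconstructed = True

def adapt(ext, dispatch):
    # the deprecated hook becomes a plain listener on the new 'load' event
    dispatch.load.append(ext.reconstruct_instance)

dispatch = Dispatch()
adapt(LegacyExtension(), dispatch)

class Thing(object):
    pass

t = Thing()
dispatch.fire_load(t)   # internals call the dispatch, not the extension
assert t.reconstructed
```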










Re: [sqlalchemy] Interpretation of SAWarning: No ForeignKey objects were present in secondary table 'post_keywords'...

2011-04-19 Thread Kent Bower
So, what would you suggest the right answer is in this case?  In the 
real world scenario we have a potentially good reason to avoid the 
association proxy approach since that involves an additional table which 
may be unneeded.


Is the best answer just to provide the foreign keys as I did?


On 4/19/2011 11:47 AM, Michael Bayer wrote:

On Apr 19, 2011, at 9:04 AM, Kent wrote:


I'm getting an SAWarning (0.6.4 and also 0.7b5dev) which has a message
that confuses me, so I've duplicated the problem with a script:
===

mapper(PostSpotLight, spotlights_table,
properties = {'postkeywords':
relationship(Keyword, secondary=post_keywords,

primaryjoin=spotlights_table.c.post_id==post_keywords.c.post_id)}
)

[...]/sqlalchemy-default/lib/sqlalchemy/orm/properties.py:900:
SAWarning: No ForeignKey objects were present in secondary table
'post_keywords'.  Assumed referenced foreign key columns
'post_keywords.keyword_id', 'post_keywords.post_id' for join condition
'spotlights.post_id = post_keywords.post_id' on relationship
PostSpotLight.postkeywords
  eq_pairs = self._sync_pairs_from_join(self.primaryjoin, True)

Why is the complaint that no ForeignKey objects were present in
secondary table 'post_keywords', when clearly there are?

Well, this is the message knowing something was wrong but not expressing it in the way I expected when I 
wrote it.  post_keywords has no foreign keys that refer to the spotlights table.  If a 
primaryjoin condition is given, it only cares about foreign keys that express components of the join 
condition.  This used to be an error condition, but eventually I figured that things would just 
work if I assumed the cols in post_keywords were FKs, after I saw someone confused by placing a 
secondary table that in fact had no FKs on it.


***Furthermore, if I pass the relationship this:

foreign_keys=[post_keywords.c.post_id,post_keywords.c.keyword_id]

then the warning goes away, but all I've done is specify the very
foreign_keys that already exist.***

Well, in this case you give it some columns, but not what they reference.  So it sees 
post_id and says: oh, that's the FK that points to spotlights.post_id.
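The clean fix is usually to declare the ForeignKey objects on the secondary table up front, so relationship() can resolve the join with no warning and no foreign_keys argument. A sketch with assumed, simplified table layouts (the real schemas in the thread are not shown in full):

```python
from sqlalchemy import Column, ForeignKey, Integer, MetaData, String, Table

metadata = MetaData()

# Assumed layouts: with ForeignKey declared on both columns of the
# secondary table, relationship(..., secondary=post_keywords) can
# work out the join condition by itself.
spotlights_table = Table(
    "spotlights", metadata,
    Column("post_id", Integer, primary_key=True),
)
keywords_table = Table(
    "keywords", metadata,
    Column("keyword_id", Integer, primary_key=True),
    Column("word", String(50)),
)
post_keywords = Table(
    "post_keywords", metadata,
    Column("post_id", Integer, ForeignKey("spotlights.post_id")),
    Column("keyword_id", Integer, ForeignKey("keywords.keyword_id")),
)

print(len(post_keywords.foreign_keys))  # 2
```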






Re: [sqlalchemy] Universal way to process inserting or updating values

2010-11-04 Thread Kent Bower
If I intercept strings that are empty and replace them with None, are there 
potential problems because the database record and the python object are 
out of sync?  Thereafter, will sqla believe the column value has 
changed and try to write it again on the next flush()?




On 11/4/2010 11:42 AM, Michael Bayer wrote:

On Nov 4, 2010, at 11:16 AM, Kent wrote:


We are writing an application that can run on PostgreSQL or Oracle.
Since postgres treats NULL and '' (empty string) differently, while
Oracle treats '' as NULL, this can cause subtle behavior differences
based on the underlying database.

Can you think of a way I could easily intercept all UPDATE and INSERT
column values, and, if the value is the empty string, replace with
None/null?

Thanks in advance if you can point me in the right direction.

the current method on this is the ConnectionProxy:

http://www.sqlalchemy.org/docs/core/interfaces.html#sqlalchemy.interfaces.ConnectionProxy




Kent




Re: [sqlalchemy] Universal way to process inserting or updating values

2010-11-04 Thread Kent Bower
Actually, modifying the parameter in cursor_execute() for Postgres 
behaves exactly as Oracle does now in sqla without any proxy.


sqla thinks it is writing an '' to the database, but Oracle is actually 
changing that to NULL, so it behaves the same.


After an update to '', if you ask "field is None" you get False.  But 
if you expire the attribute and then ask again, you get True.



On 11/4/2010 5:07 PM, Michael Bayer wrote:

On Nov 4, 2010, at 4:06 PM, Kent Bower wrote:


If I intercept strings that are empty and replace them with None, are there potential problems 
because the database record and the python object are out of sync?  
Thereafter, will sqla believe the column value has changed and try to write it again on the next 
flush()?

It would seem likely that small issues could arise, since you're effectively 
breaking round trips.  It would not perceive any net changes to columns that 
haven't been touched, though if you set '' on an attribute that was already '', 
that will obviously generate a change event, since it would have loaded the 
value as None.

For control over round trips you could override String with a TypeDecorator.
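A minimal sketch of that TypeDecorator approach, with a hypothetical CoerceEmptyToNull type (the cache_ok flag applies to modern SQLAlchemy versions, not the release current at the time of this thread):

```python
from sqlalchemy import String
from sqlalchemy.types import TypeDecorator

class CoerceEmptyToNull(TypeDecorator):
    """Hypothetical String subtype that binds '' as NULL, so PostgreSQL
    matches Oracle's treatment of empty strings on INSERT/UPDATE."""
    impl = String
    cache_ok = True  # modern SQLAlchemy: type is safe for statement caching

    def process_bind_param(self, value, dialect):
        # Runs on every bound parameter of this type before execution.
        return None if value == "" else value

t = CoerceEmptyToNull()
print(t.process_bind_param("", None), t.process_bind_param("abc", None))
# None abc
```

Using this type in place of String on the relevant columns intercepts only bound parameters, so it avoids a global ConnectionProxy, though the round-trip asymmetry Mike describes (the Python object holds '' while the row holds NULL) still applies until the attribute is expired or reloaded.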





On 11/4/2010 11:42 AM, Michael Bayer wrote:

On Nov 4, 2010, at 11:16 AM, Kent wrote:


We are writing an application that can run on PostgreSQL or Oracle.
Since postgres treats NULL and '' (empty string) differently, while
Oracle treats '' as NULL, this can cause subtle behavior differences
based on the underlying database.

Can you think of a way I could easily intercept all UPDATE and INSERT
column values, and, if the value is the empty string, replace with
None/null?

Thanks in advance if you can point me in the right direction.

the current method on this is the ConnectionProxy:

http://www.sqlalchemy.org/docs/core/interfaces.html#sqlalchemy.interfaces.ConnectionProxy




Kent




Re: [sqlalchemy] Universal way to process inserting or updating values

2010-11-04 Thread Kent Bower

P.S. Thanks again very much




Re: [sqlalchemy] Re: Loading attributes for Transient objects

2010-09-10 Thread Kent Bower
 I'm wondering if you think my use case is one you would like to 
support, or if you think it is sort of off on its own (I could tell I 
wasn't explaining our use case extremely well)...


Regardless, I am setting the InstrumentedAttribute's implementation's 
callable_ so I have control over whether to load related objects on 
transient/pending objects.


But the framework doesn't exactly support this in some of its 
assumptions about the callable_.


For example, for Collections, it doesn't send passive=PASSIVE_NO_FETCH 
during a set() when it looks for the old value:


old = self.get(state, dict_)

whereas, for ScalarObjectAttributeImpl(), the set() method passes 
passive=PASSIVE_NO_FETCH unless active_history is True.


def set(self, state, dict_, value, initiator, passive=PASSIVE_OFF):
    """Set a value on the given InstanceState.

    `initiator` is the ``InstrumentedAttribute`` that initiated the
    ``set()`` operation and is used to control the depth of a circular
    setter operation.
    """
    if initiator is self:
        return

    if self.active_history:
        old = self.get(state, dict_)
    else:
        old = self.get(state, dict_, passive=PASSIVE_NO_FETCH)

    value = self.fire_replace_event(state, dict_, value, old,
                                    initiator)

    dict_[self.key] = value


Is it appropriate to implement active_history on collections and pass 
passive?
I can work around this somehow if you don't wish to make that change..., 
but let me know.






On 9/8/2010 3:34 PM, Kent Bower wrote:
I've got a recipe for what will work well for us.  I imagine it could 
be useful for others, although I left out the actual serialization 
mechanism, since that will likely be very project specific.


I'd be happy to put this on the wiki, but if you wanted to look it 
over first, you are more than welcome (I'd prefer your feedback).  If 
you are busy, I can just post it and hope someone may find it useful.


Thanks again for your help,
Kent


On 9/7/2010 7:28 PM, Michael Bayer wrote:


On Sep 7, 2010, at 6:41 PM, Kent Bower wrote:


Two items:

 * How does the orm currently determine whether it is safe to try 
get() (i.e., there are no funny join conditions)?  If you point me 
to the function where the decision takes place, I can probably answer 
this myself


it compares the join condition of the relationship() to that of the 
clause which the Mapper uses when it issues get(), then stores that 
away as a flag for future consultation.   It's very unusual for a 
many-to-one relationship to be based on something other than a simple 
foreign-key-primary key relationship, though.





 * When I build up the primary key from the foreign key, is there an 
efficient way to build a composite key in the correct order to pass 
to get()?  (I thought maybe synchronize_pairs, but that maybe has 
to do with getting the direction consistent instead?)


Well if you aren't using any composite primary keys in many-to-ones, 
you wouldn't even have to worry about this.   Otherwise, the two 
collections to correlate would be property.local_remote_pairs and 
property.mapper.primary_key.   Perhaps make a dictionary out of 
dict([(r, l) for l, r in prop.local_remote_pairs]) and your PK value 
would be [getattr(instance, 
prop.parent.get_property_by_column(mydict[p]).key) for p in 
property.mapper.primary_key].


Or if you want to get ambitious you can just copy roughly whats in 
strategies.py on line 605 but then you're digging into 
internalsand looking at that now I'm wondering if 
strategy._equated_columns is really different than local_remote_pairs 
at all...
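Mike's correlation recipe above can be turned into a small runnable sketch (model and column names are hypothetical, loosely echoing the sites/locations composite key discussed later in the thread):

```python
from sqlalchemy import Column, ForeignKeyConstraint, Integer
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Site(Base):  # hypothetical target with a composite primary key
    __tablename__ = "sites"
    siteid = Column(Integer, primary_key=True)
    locationid = Column(Integer, primary_key=True)

class Order(Base):  # hypothetical parent with a composite FK
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True)
    siteid = Column(Integer)
    locationid = Column(Integer)
    __table_args__ = (
        ForeignKeyConstraint(["siteid", "locationid"],
                             ["sites.siteid", "sites.locationid"]),
    )
    site = relationship(Site)

def composite_key_for_get(instance, prop):
    # Map each remote (target) column back to its local FK column,
    # then read the local attributes in target-primary-key order,
    # producing a tuple suitable for passing to query.get().
    remote_to_local = {r: l for l, r in prop.local_remote_pairs}
    return tuple(
        getattr(instance,
                prop.parent.get_property_by_column(remote_to_local[pk]).key)
        for pk in prop.mapper.primary_key
    )

o = Order(id=1, siteid=7, locationid=3)
print(composite_key_for_get(o, Order.site.property))  # (7, 3)
```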






Thanks again, you've been much help!



On 9/7/2010 5:03 PM, Michael Bayer wrote:


On Sep 7, 2010, at 4:38 PM, Kent Bower wrote:

Don't want to strangle me, but when the orm (lazy)loads a MANYTONE 
object, it doesn't go to the database if the object is in the 
session.  Can I get with_parent() to behave this way, or would I 
need to specifically build up the primary key of the related 
object and call query.get()?


the latter.   You can use get() for all many to ones if you aren't 
using any funny join conditions.







On 9/7/2010 10:25 AM, Michael Bayer wrote:


On Sep 7, 2010, at 10:12 AM, Kent Bower wrote:

Mike, in your proof of concept, when __getstate__ detected 
transient, why did you need to make a copy of self.__dict__? 
self.__dict__.copy()


i was modifying the __dict__ from what would be expected in a 
non-serialized object, so that was to leave the original object 
being serialized unchanged.
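The point about copying __dict__ can be shown with plain pickle, independent of SQLAlchemy (a hypothetical Transferable class, not code from the thread):

```python
import pickle

class Transferable:
    """Illustrates why __getstate__ should copy __dict__ before
    modifying it: mutating self.__dict__ directly would alter the
    live object being pickled, not just its serialized form."""
    def __init__(self):
        self.name = "kent"
        self._session_state = object()  # unpicklable / unwanted in output

    def __getstate__(self):
        d = self.__dict__.copy()       # leave the original untouched
        d.pop("_session_state", None)  # strip from the copy only
        return d

obj = Transferable()
restored = pickle.loads(pickle.dumps(obj))
print(hasattr(obj, "_session_state"), hasattr(restored, "_session_state"))
# True False
```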






On 9/6/2010 2:35 PM, Michael Bayer wrote:

On Sep 6, 2010, at 2:11 PM, Kent Bower wrote:


Also, I was hoping you would tell me whether this would be a candidate for 
subclassing InstrumentedAttribute?  Would that make more sense or providing custom 
__getstate__  __setstate__ ?

__getstate__ / __setstate__ are pretty much what I like to use for pickle 
stuff, unless some exotic situation makes me have

Re: [sqlalchemy] Re: Loading attributes for Transient objects

2010-09-10 Thread Kent Bower
 I've got a fix for our project.  Python is really cool about letting 
you reassign methods and functions, so I just reassigned 
CollectionAttributeImpl._set_iterable to my own function.


The point is, for my sake, don't worry about a public API, unless others 
also ask about it...


Thanks for your help.



On 9/10/2010 3:27 PM, Michael Bayer wrote:


Sent from my iPhone

On Sep 10, 2010, at 2:11 PM, Kent Bowerk...@retailarchitects.com  wrote:


I'm headed that direction now, thanks.

I didn't find anything on the wiki for how to plug in a subclassed 
CollectionAttributeImpl, for example.  I could hack it, but is there a public 
or preferred way?

Well, this is all entirely uncharted territory.  Ideally there would be solid 
public APIs for this stuff, but especially in highly customized situations like 
this, they need to be specified very carefully.  It seems at least that the 
lazy callable techniques might be further exposed.




this is a lot to review, and I'll try to get to it, but since you're digging 
into internals anyway have you considered just creating your own AttributeImpl 
subclasses ?   You can then implement get()/set() and make it do whatever you'd 
like.






Re: [sqlalchemy] Re: Loading attributes for Transient objects

2010-09-10 Thread Kent Bower
 Actually, my coworker and I were discussing, if the foreign key is 
specified, whether transient, pending (with autoflush off), or 
persistent, you intuitively *expect* that referencing the attribute will 
load the persistent related object.


The difficulty for sqlalchemy is that you have no idea what session to 
pull it from.  However, for many users, there is only one 
(Scoped)Session at a time, so there is no ambiguity.  In that case, I'd 
argue the correct behavior is to load the object/collection, if possible.


Thanks, it seems to be working now for me.

(The other expected behavior, which I hope to tackle, is that if you 
change a foreign key reference and the ORM *knows* it is a foreign key 
reference to a loaded relation, it should expire that relation... I've 
asked you about that and you said it was a big undertaking... I wish I 
understood why better, because my plan is to implement this for my 
project, and my hope is that whatever has stopped you from doing so for 
sqlalchemy won't be an issue for our use case)





On 9/10/2010 5:07 PM, Michael Bayer wrote:

I almost needed the exact same feature you have the other day.  So I wouldn't 
close the book on it.  I just know that as default behavior, or even readily 
switchable behavior, non-invested users get confused rather quickly.


On Sep 10, 2010, at 4:21 PM, Kent Bower wrote:


I've got a fix for our project.  Python is really cool about letting you 
reassign methods and functions, so I just reassigned 
CollectionAttributeImpl._set_iterable to my own function.

The point is, for my sake, don't worry about a public API, unless others also 
ask about it...

Thanks for your help.



On 9/10/2010 3:27 PM, Michael Bayer wrote:

Sent from my iPhone

On Sep 10, 2010, at 2:11 PM, Kent Bowerk...@retailarchitects.com   wrote:


I'm headed that direction now, thanks.

I didn't find anything on the wiki for how to plug in a subclassed 
CollectionAttributeImpl, for example.  I could hack it, but is there a public 
or preferred way?

Well, this is all entirely uncharted territory.  Ideally there would be solid 
public APIs for this stuff, but especially in highly customized situations like 
this, they need to be specified very carefully.  It seems at least that the 
lazy callable techniques might be further exposed.


this is a lot to review, and I'll try to get to it, but since you're digging 
into internals anyway have you considered just creating your own AttributeImpl 
subclasses ?   You can then implement get()/set() and make it do whatever you'd 
like.






Re: [sqlalchemy] Re: Loading attributes for Transient objects

2010-09-08 Thread Kent Bower

 That is for comparing *Clauses* and handles:

locations.siteid = :param_1
and
:param_1 = locations.siteid

However,
locations.siteid = :param_1 AND locations.locationid = :param_2
and
locations.locationid = :param_1 AND locations.siteid = :param_2
are ClauseLists, which compares their individual Clauses *in order*:

def compare(self, other, **kw):
    """Compare this :class:`ClauseList` to the given :class:`ClauseList`,
    including a comparison of all the clause items.
    """
    if not isinstance(other, ClauseList) and len(self.clauses) == 1:
        return self.clauses[0].compare(other, **kw)
    elif isinstance(other, ClauseList) and \
            len(self.clauses) == len(other.clauses):
        for i in range(0, len(self.clauses)):
            if not self.clauses[i].compare(other.clauses[i], **kw):
                return False
        else:
            return self.operator == other.operator
    else:
        return False


(I know because if I flip the order of the composite 
ForeignKeyConstraint it no longer calls .get() for the lazyload... it 
always queries the database, so I checked out why in the debugger)





On 9/8/2010 1:27 PM, Michael Bayer wrote:


On Sep 8, 2010, at 1:15 PM, Kent Bower wrote:


I imagine you are already aware of this...

Unfortunately, the clause comparison says these two clauses are 
different:


(Pdb) print self
locations.siteid = :param_1 AND locations.locationid = :param_2
(Pdb) print other
locations.locationid = :param_1 AND locations.siteid = :param_2

when they are really the equivalent.  So composite foreign keys need 
to be listed in the correct order for LazyLoader.use_get to optimize 
the load with get().


use clause.compare().   Here's the source for _BinaryExpression:

def compare(self, other, **kw):
    """Compare this :class:`_BinaryExpression` against the
    given :class:`_BinaryExpression`.
    """
    return (
        isinstance(other, _BinaryExpression) and
        self.operator == other.operator and
        (
            self.left.compare(other.left, **kw) and
            self.right.compare(other.right, **kw) or
            (
                operators.is_commutative(self.operator) and
                self.left.compare(other.right, **kw) and
                self.right.compare(other.left, **kw)
            )
        )
    )


See the is_commutative() check in there?  It's handled.






I already saw it would be somewhat of an effort to refactor the 
clause comparison to work that out... bummer.





On 9/7/2010 7:28 PM, Michael Bayer wrote:


On Sep 7, 2010, at 6:41 PM, Kent Bower wrote:


Two items:

 * How does the orm currently determine whether it is safe to try 
get() (i.e., there are no funny join conditions)?  If you point me 
to the function where the decision takes place, I can probably answer 
this myself


it compares the join condition of the relationship() to that of the 
clause which the Mapper uses when it issues get(), then stores that 
away as a flag for future consultation.   It's very unusual for a 
many-to-one relationship to be based on something other than a 
simple foreign-key-primary key relationship, though.





 * When I build up the primary key from the foreign key, is there 
an efficient way to build a composite key in the correct order to 
pass to get()?  (I thought maybe synchronize_pairs, but that 
maybe has to do with getting the direction consistent instead?)


Well if you aren't using any composite primary keys in many-to-ones, 
you wouldn't even have to worry about this.   Otherwise, the two 
collections to correlate would be property.local_remote_pairs and 
property.mapper.primary_key.   Perhaps make a dictionary out of 
dict([(r, l) for l, r in prop.local_remote_pairs]) and your PK value 
would be [getattr(instance, 
prop.parent.get_property_by_column(mydict[p]).key) for p in 
property.mapper.primary_key].


Or if you want to get ambitious you can just copy roughly whats in 
strategies.py on line 605 but then you're digging into 
internalsand looking at that now I'm wondering if 
strategy._equated_columns is really different than 
local_remote_pairs at all...






Thanks again, you've been much help!



On 9/7/2010 5:03 PM, Michael Bayer wrote:


On Sep 7, 2010, at 4:38 PM, Kent Bower wrote:

Don't want to strangle me, but when the orm (lazy)loads a 
MANYTONE object, it doesn't go to the database if the object is 
in the session.  Can I get with_parent() to behave this way, or 
would I need to specifically build up the primary key of the 
related object and call query.get()?


the latter.   You can use get() for all many to ones if you aren't 
using any funny join conditions.







On 9/7/2010 10:25 AM, Michael Bayer wrote:


On Sep 7, 2010, at 10:12 AM, Kent Bower wrote:

Mike, in your proof of concept, when __getstate__ detected 
transient, why did you need to make a copy of self.__dict__? 
self.__dict__

Re: [sqlalchemy] Re: Loading attributes for Transient objects

2010-09-08 Thread Kent Bower
 I've got a recipe for what will work well for us.  I imagine it could 
be useful for others, although I left out the actual serialization 
mechanism, since that will likely be very project specific.


I'd be happy to put this on the wiki, but if you wanted to look it over 
first, you are more than welcome (I'd prefer your feedback).  If you are 
busy, I can just post it and hope someone may find it useful.


Thanks again for your help,
Kent


On 9/7/2010 7:28 PM, Michael Bayer wrote:


On Sep 7, 2010, at 6:41 PM, Kent Bower wrote:


Two items:

 * How does the orm currently determine whether it is safe to try 
get() (i.e., there are no funny join conditions)?  If you point me 
to the function where the decision takes place, I can probably answer 
this myself


it compares the join condition of the relationship() to that of the 
clause which the Mapper uses when it issues get(), then stores that 
away as a flag for future consultation.   It's very unusual for a 
many-to-one relationship to be based on something other than a simple 
foreign-key-primary key relationship, though.





 * When I build up the primary key from the foreign key, is there an 
efficient way to build a composite key in the correct order to pass 
to get()?  (I thought maybe synchronize_pairs, but that maybe has 
to do with getting the direction consistent instead?)


Well if you aren't using any composite primary keys in many-to-ones, 
you wouldn't even have to worry about this.   Otherwise, the two 
collections to correlate would be property.local_remote_pairs and 
property.mapper.primary_key.   Perhaps make a dictionary out of 
dict([(r, l) for l, r in prop.local_remote_pairs]) and your PK value 
would be [getattr(instance, 
prop.parent.get_property_by_column(mydict[p]).key) for p in 
property.mapper.primary_key].


Or if you want to get ambitious you can just copy roughly whats in 
strategies.py on line 605 but then you're digging into 
internalsand looking at that now I'm wondering if 
strategy._equated_columns is really different than local_remote_pairs 
at all...






Thanks again, you've been much help!



On 9/7/2010 5:03 PM, Michael Bayer wrote:


On Sep 7, 2010, at 4:38 PM, Kent Bower wrote:

Don't want to strangle me, but when the orm (lazy)loads a MANYTONE 
object, it doesn't go to the database if the object is in the 
session.  Can I get with_parent() to behave this way, or would I 
need to specifically build up the primary key of the related object 
and call query.get()?


the latter.   You can use get() for all many to ones if you aren't 
using any funny join conditions.







On 9/7/2010 10:25 AM, Michael Bayer wrote:


On Sep 7, 2010, at 10:12 AM, Kent Bower wrote:

Mike, in your proof of concept, when __getstate__ detected 
transient, why did you need to make a copy of self.__dict__? 
self.__dict__.copy()


i was modifying the __dict__ from what would be expected in a 
non-serialized object, so that was to leave the original object 
being serialized unchanged.






On 9/6/2010 2:35 PM, Michael Bayer wrote:

On Sep 6, 2010, at 2:11 PM, Kent Bower wrote:


Also, I was hoping you would tell me whether this would be a candidate for 
subclassing InstrumentedAttribute?  Would that make more sense or providing custom 
__getstate__  __setstate__ ?

__getstate__ / __setstate__ are pretty much what I like to use for pickle 
stuff, unless some exotic situation makes me have to use __reduce__.   One 
problem with the recipe is that theres no 'deferred' loading of attributes.   
So in that sense playing with InstrumentedAttribute would give you a chance to 
put a callable in there that does what you want.

There is also the possibility that __setstate__ can load up callables into the 
instance_state using state.set_callable().   This is a callable that triggers 
when you access the attribute that is otherwise None.   There's a little bit of 
fanfare required to get that callable to assign to the attribute in the right 
way.   Attached is an example of that.   This is all a little more shaky since 
the state/callable API isn't really public.  Hasn't changed for awhile but 
there's no guarantee.








Thanks for your help, hopefully I'll be able to contribute such a recipe.

Kent




Since sqla won't load that for me in the case of transient, I need to load the 
relation manually (unless you feel like enhancing that as well).

it's not an enhancement - it was a broken behavior that was specifically 
removed.   The transient object has no session, so therefore no SQL can be 
emitted - there's no context established.




Now I can manually emulate the obj being persistent with your changes for

On Sep 6, 2010, at 10:58 AM, Michael Bayermike...@zzzcomputing.com  wrote:


On Sep 6, 2010, at 9:06 AM, Kent wrote:


with_parent seems to add a join condition.

OK, so I guess you read the docs, which is why you thought it joined and why you didn't 
realize it doesn't work for transient.  r20b6ce05f194 changes all that so

Re: [sqlalchemy] Re: Loading attributes for Transient objects

2010-09-07 Thread Kent Bower
 Mike, in your proof of concept, when __getstate__ detected transient, 
why did you need to make a copy of self.__dict__? self.__dict__.copy()




On 9/6/2010 2:35 PM, Michael Bayer wrote:

On Sep 6, 2010, at 2:11 PM, Kent Bower wrote:


Also, I was hoping you would tell me whether this would be a candidate for 
subclassing InstrumentedAttribute?  Would that make more sense or providing custom 
__getstate__  __setstate__ ?

__getstate__ / __setstate__ are pretty much what I like to use for pickle 
stuff, unless some exotic situation makes me have to use __reduce__.   One 
problem with the recipe is that theres no 'deferred' loading of attributes.   
So in that sense playing with InstrumentedAttribute would give you a chance to 
put a callable in there that does what you want.

There is also the possibility that __setstate__ can load up callables into the 
instance_state using state.set_callable().   This is a callable that triggers 
when you access the attribute that is otherwise None.   There's a little bit of 
fanfare required to get that callable to assign to the attribute in the right 
way.   Attached is an example of that.   This is all a little more shaky since 
the state/callable API isn't really public.  Hasn't changed for awhile but 
there's no guarantee.








Thanks for your help, hopefully I'll be able to contribute such a recipe.

Kent




Since sqla won't load that for me in the case of transient, I need to load the 
relation manually (unless you feel like enhancing that as well).

it's not an enhancement - it was a broken behavior that was specifically 
removed.   The transient object has no session, so therefore no SQL can be 
emitted - there's no context established.




Now I can manually emulate the obj being persistent with your changes for

On Sep 6, 2010, at 10:58 AM, Michael Bayermike...@zzzcomputing.com  wrote:


On Sep 6, 2010, at 9:06 AM, Kent wrote:


with_parent seems to add a join condition.

OK, so I guess you read the docs, which is why you thought it joined and why you didn't 
realize it doesn't work for transient.  r20b6ce05f194 changes all that so that 
with_parent() accepts transient objects and will do the "look at the 
attributes" thing.   The docs are updated, as this method uses the lazy loader 
SQL mechanism, not a join.




Is there a way to get at
the query object that would be rendered from a lazy load (or what
subqueryload would render on the subsequent load), but on a
transient object, if i supply the session?

even though not recommended, can it make sqla believe my transient
object is detached by setting its state key?

There are reasons i do not want to add this to the session and
disabling autoflush would also cause problems.



On Sep 3, 9:58 am, Michael Bayermike...@zzzcomputing.com  wrote:

On Sep 3, 2010, at 9:36 AM, Kent wrote:


For the case of customerid = '7', that is a simple problem, but when
it is a more complex join condition, we only wanted to define this
condition in one single place in our application (namely, the orm).
That way, if or when that changes, developers don't need to search for
other places in the app that needed to manually duplicate the logic of
the orm join condition.
If I supplied the DBSession to sqla, it would know how to create the
proper Query object for this lazyload.  Can you point me in the right
direction (even if where you point me is not currently part of the
public API)?

Query has the with_parent() method for this use case.






Thanks again,
Kent

Re: [sqlalchemy] Re: Loading attributes for Transient objects

2010-09-07 Thread Kent Bower
Don't want to strangle me, but when the orm (lazy)loads a MANY-TO-ONE 
object, it doesn't go to the database if the related object is already in the session.  
Can I get with_parent() to behave this way, or would I need to 
specifically build up the primary key of the related object and call 
query.get()?
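The identity-map shortcut asked about here can be reproduced directly with query.get() (session.get() in modern SQLAlchemy), which only emits SQL when the target is not already present in the session; the model below is illustrative:

```python
from sqlalchemy import Column, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Zipcode(Base):
    __tablename__ = "zipcode"
    zipcodeid = Column(String, primary_key=True)
    city = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add(Zipcode(zipcodeid="12345", city="Schenectady"))
session.commit()

# First lookup loads the row and places it in the identity map.
z1 = session.get(Zipcode, "12345")
# Second lookup by the same primary key returns the identity-mapped
# object directly; no SQL is emitted.
z2 = session.get(Zipcode, "12345")
assert z1 is z2
```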




On 9/7/2010 10:25 AM, Michael Bayer wrote:


On Sep 7, 2010, at 10:12 AM, Kent Bower wrote:

Mike, in your proof of concept, when __getstate__ detected transient, 
why did you need to make a copy of self.__dict__? self.__dict__.copy()


i was modifying the __dict__ from what would be expected in a 
non-serialized object, so that was to leave the original object being 
serialized unchanged.
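A minimal sketch of that recipe (illustrative model): copying __dict__ before popping the ORM's _sa_instance_state key keeps the live, instrumented object intact, while the pickled payload carries only plain attribute values. The unpickled copy is not a usable ORM object until something like merge() re-associates it.

```python
import pickle

from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Order(Base):
    __tablename__ = "orders"
    orderid = Column(Integer, primary_key=True)
    note = Column(String)

    def __getstate__(self):
        # Copy first so the live, instrumented instance is untouched.
        d = self.__dict__.copy()
        d.pop("_sa_instance_state", None)  # drop ORM bookkeeping
        return d

    def __setstate__(self, d):
        self.__dict__.update(d)

o = Order(orderid=1, note="hello")
copy = pickle.loads(pickle.dumps(o))
assert copy.note == "hello"
assert "_sa_instance_state" in o.__dict__  # original still instrumented
```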






On 9/6/2010 2:35 PM, Michael Bayer wrote:

On Sep 6, 2010, at 2:11 PM, Kent Bower wrote:


Also, I was hoping you would tell me whether this would be a candidate for 
subclassing InstrumentedAttribute?  Would that make more sense, or providing custom 
__getstate__ & __setstate__?

__getstate__ / __setstate__ are pretty much what I like to use for pickle 
stuff, unless some exotic situation makes me have to use __reduce__.   One 
problem with the recipe is that there's no 'deferred' loading of attributes.   
So in that sense playing with InstrumentedAttribute would give you a chance to 
put a callable in there that does what you want.

There is also the possibility that __setstate__ can load up callables into the 
instance_state using state.set_callable().   This is a callable that triggers 
when you access the attribute that is otherwise None.   There's a little bit of 
fanfare required to get that callable to assign to the attribute in the right 
way.   Attached is an example of that.   This is all a little more shaky since 
the state/callable API isn't really public.  It hasn't changed for a while but 
there's no guarantee.








Thanks for your help, hopefully I'll be able to contribute such a recipe.

Kent




Since sqla won't load that for me in the case of transient, I need to load the 
relation manually (unless you feel like enhancing that as well).

its not an enhancement - it was a broken behavior that was specifically 
removed.   The transient object has no session, so therefore no SQL can be 
emitted - there's no context established.
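Later SQLAlchemy versions grew an escape hatch for exactly this situation: Session.enable_relationship_loading() gives a transient object a loading context without adding it to the session (the models below are illustrative, not from the thread):

```python
from sqlalchemy import Column, ForeignKey, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Zipcode(Base):
    __tablename__ = "zipcode"
    zipcodeid = Column(String, primary_key=True)

class Order(Base):
    __tablename__ = "orders"
    orderid = Column(String, primary_key=True)
    zipcodeid = Column(String, ForeignKey("zipcode.zipcodeid"))
    zipcode = relationship("Zipcode")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add(Zipcode(zipcodeid="12345"))
session.commit()

# Transient order, never add()ed; only the FK column is populated.
order = Order(orderid="A1", zipcodeid="12345")
# Give the transient object a context for lazy loading without
# making it pending in the session.
session.enable_relationship_loading(order)
assert order.zipcode.zipcodeid == "12345"
```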




Now I can manually emulate the obj being persistent with your changes for

On Sep 6, 2010, at 10:58 AM, Michael Bayer mike...@zzzcomputing.com wrote:


On Sep 6, 2010, at 9:06 AM, Kent wrote:


with_parent seems to add a join condition.

OK, so I guess you read the docs which is why you thought it joined and why you didn't 
realize it doesn't work for transient.  r20b6ce05f194 changes all that so that 
with_parent() accepts transient objects and will do the look at the 
attributes thing.   The docs are updated as this method does use the lazy loader 
SQL mechanism, not a join.




Is there a way to get at
the query object that would be rendered from a lazy load (or what
subqueryload would render on the subsequent load), but on a
transient object, if i supply the session?

even though not recommended, can it make sqla believe my transient
object is detached by setting its state key?

There are reasons i do not want to add this to the session and
disabling autoflush would also cause problems.



On Sep 3, 9:58 am, Michael Bayer mike...@zzzcomputing.com wrote:

On Sep 3, 2010, at 9:36 AM, Kent wrote:


For the case of customerid = '7', that is a simple problem, but when
it is a more complex join condition, we only wanted to define this
condition in one single place in our application (namely, the orm).
That way, if or when that changes, developers don't need to search for
other places in the app that needed to manually duplicate the logic of
the orm join condition.
If I supplied the DBSession to sqla, it would know how to create the
proper Query object for this lazyload.  Can you point me in the right
direction (even if where you point me is not currently part of the
public API)?

Query has the with_parent() method for this use case.






Thanks again,
Kent

Re: [sqlalchemy] Re: Loading attributes for Transient objects

2010-09-06 Thread Kent Bower
Fantastic, I would like to look into this change. 

Since you asked, consider a use case similar to this: we have a RESTful-ish web 
service that accepts a serialized version of a transfer object, which is 
passed to the server when a database save should take place.  In this case, an 
order with selected relations is serialized and passed.

For a database save, this will be added to the session after it is cast into 
a sqlalchemy object. Nothing special there. 

Now for the use case:  the webservice needs to also be invoked while the user 
is still working on the order. For example, taxes and delivery charges are 
calculated by the server.  Again, the serialized version of the transfer object 
is sent to the server and cast into a sqlalchemy object. In this case, however, 
we have no intention on ever saving the object during this service request. 
Rather, the sqlalchemy object is transient. Still, to calculate taxes as an 
example there are a handful of relations that need to be loaded, such as 
zipcode objects, tax authorities, tax tables, etc. 

The reason for not wanting to disable autoflush is that this same code is 
(appropriately) invoked whether this object is persistent (from merge()) and 
part of the save web service, or transient and part of the calculate web service 
(where the object is going to be thrown away).  In the case of the save, 
it is important for database consistency that autoflush remain enabled. 

In either case, if zipcodeid is the local side of the fk join that loads the zipcode 
relation, you would expect that a reference like:

orderobj.zipcode

would load the zipcode object since zipcodeid is populated, whether orderobj is 
transient or persistent. 

Since sqla won't load that for me in the case of transient, I need to load the 
relation manually (unless you feel like enhancing that as well). 

Now I can manually emulate the obj being persistent with your changes for 

On Sep 6, 2010, at 10:58 AM, Michael Bayer mike...@zzzcomputing.com wrote:

 
 On Sep 6, 2010, at 9:06 AM, Kent wrote:
 
 with_parent seems to add a join condition.  
 
 OK, so I guess you read the docs which is why you thought it joined and why 
 you didn't realize it doesn't work for transient.  r20b6ce05f194 changes all 
 that so that with_parent() accepts transient objects and will do the look at 
 the attributes thing.   The docs are updated as this method does use the 
 lazy loader SQL mechanism, not a join.
 
 
 
 Is there a way to get at
 the query object that would be rendered from a lazy load (or what
 subqueryload would render on the subsequent load), but on a
 transient object, if i supply the session?
 
 even though not recommended, can it make sqla believe my transient
 object is detached by setting its state key?
 
 There are reasons i do not want to add this to the session and
 disabling autoflush would also cause problems.
 
 
 
 On Sep 3, 9:58 am, Michael Bayer mike...@zzzcomputing.com wrote:
 On Sep 3, 2010, at 9:36 AM, Kent wrote:
 
 For the case of customerid = '7', that is a simple problem, but when
 it is a more complex join condition, we only wanted to define this
 condition in one single place in our application (namely, the orm).
 That way, if or when that changes, developers don't need to search for
 other places in the app that needed to manually duplicate the logic of
 the orm join condition.
 
 If I supplied the DBSession to sqla, it would know how to create the
 proper Query object for this lazyload.  Can you point me in the right
 direction (even if where you point me is not currently part of the
 public API)?
 
 Query has the with_parent() method for this use case.  
 
 
 
 
 
 Thanks again,
 Kent
 

Re: [sqlalchemy] Re: Loading attributes for Transient objects

2010-09-06 Thread Kent Bower

On Sep 6, 2010, at 1:04 PM, Michael Bayer mike...@zzzcomputing.com wrote:

 
 On Sep 6, 2010, at 12:01 PM, Kent Bower wrote:
 
 Fantastic, I will like to look into this change. 
 
 Since you asked, consider a use case similar to this: we have a RESTfulish 
 web service that accepts a serialized version of a transfer object which 
 is passed to the server when a database save should take place.  In this 
 case, an order with select relations are serialized and passed.
 
 For a database save, this will be added to the session after it is cast 
 into a sqlalchemy object. Nothing special there. 
 
 Now for the use case:  the webservice needs to also be invoked while the 
 user is still working on the order. For example, taxes and delivery charges 
 are calculated by the server.  Again, the serialized version of the transfer 
 object is sent to the server and cast into a sqlalchemy object. In this 
 case, however, we have no intention on ever saving the object during this 
 service request. Rather, the sqlalchemy object is transient. Still, to 
 calculate taxes as an example there are a handful of relations that need to 
 be loaded, such as zipcode objects, tax authorities, tax tables, etc. 
 
 So if it were me, I'd not be using HTTP in that way, i.e. the big serialized 
 bag of all kinds of stuff.I'd have made it such that a new session key 
 can be established with the web service which uses proper relational storage 
 for the pending state.   In fact in my current project I am doing just that, 
 where I have a base OrderData object that has two subclasses, 
 PendingOrder and Order, each of which is stored in a distinct table.  It's 
 not concrete inheritance either - OrderData is a declarative mixin that 
 uses @classproperty to establish the same relationship() objects on each 
 subclass.   However, I've certainly used HTTP sessions with disk state and 
 such in the past, and while I don't prefer heavy reliance on serialization 
 these days, I know that serialization patterns are very common.
 

Right, we considered a transient table as such. Since you mention it as your 
preference I'll put it back in the back of my mind, but I'm fairly pleased with 
how our framework is functioning, so I'd like to go this route. 


 
 The reason for not wanting to disable autoflush is that this same code is 
 (appropriately) invoked whether this object is persistent (from merge()) and 
 part of the save web service or transient and part of the calculate web 
 service (where the object is going to be thrown away).  In the case of 
 the save, it is important for database consistency that autoflush remain 
 enabled. 
 
 So you have this transient object, and you want to use it to load various 
 information about zipcodes and stuff, but its not database state.  *But*, you 
 *have* populated individual foreign key and maybe primary key attributes on 
 it, which most certainly represent persistent-centric concepts.So there 
 has been, at some point, some awareness of either this object's future 
 primary key, or something has loaded up related many-to-ones and figured out 
 their foreign keys and assigned them.   There's definitely no many-to-many 
 collections at play since those aren't possible without persistence of the 
 transient object's primary key information (unless you're working with 
 totally unconstrained tables, in which case, good luck).
 
 So the persistence information is already there.  If you are setting 
 order.foreign_key_of_something, why would you not instead set 
 order.something, so that order.something is already present in the transient 
 state?  The rule here being, how do i get foo.bar to be 'x'? answer: set 
 foo.bar = 'x' - simple right ?   The ORM would be left to do its normal job 
 of worrying about foreign keys.
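That "work forwards, not in reverse" rule, sketched with illustrative models: assigning the relationship attribute lets the ORM derive the foreign key at flush time, while assigning only the FK column leaves the relationship attribute unloaded on a transient object.

```python
from sqlalchemy import Column, ForeignKey, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Zipcode(Base):
    __tablename__ = "zipcode"
    zipcodeid = Column(String, primary_key=True)

class Order(Base):
    __tablename__ = "orders"
    orderid = Column(String, primary_key=True)
    zipcodeid = Column(String, ForeignKey("zipcode.zipcodeid"))
    zipcode = relationship("Zipcode")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
zc = Zipcode(zipcodeid="12345")

# Forwards: assign the object; the ORM fills in the FK column at flush.
order = Order(orderid="A1", zipcode=zc)
session.add(order)
session.flush()
assert order.zipcodeid == "12345"

# In reverse: only the FK column is populated on a transient object;
# the ORM does not guess that order2.zipcode should be zc.
order2 = Order(orderid="A2", zipcodeid="12345")
assert order2.zipcode is None
```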
 
 But instead, you're working in reverse.  The ORM has an opinion that if you 
 work in reverse like that, it won't block you, but it also isn't going to 
 make the guesses and assumptions that would be required for it to act the 
 same (see the FAQ entry about foo_id 7 for some rantage on this, you've 
 probably already seen it).
 
 I know the answer already to why you're populating 
 order.foreign_key_of_something rather than order.something.  You're trying to 
 reduce the serialization size, and/or the overhead of merging all that 
 serialized data back into a session.  

Bingo. Precisely. 


 So you're trying to rig the ORM into a custom, optimized serialization 
 scheme, a use case that is outside the scope of the very simple, single 
 purpose that relationship() is designed for out of the box, which is to 
 represent a linkage between classes and persist/restore a corresponding 
 linkage between related database rows (since if one side is transient, there 
 is only one database row in play).

Well when I started, I didn't know its designed use cases, but don't undersell 
it, it's pretty close to being able to handle this from what I can tell, at 
least

Re: [sqlalchemy] Re: Loading attributes for Transient objects

2010-09-06 Thread Kent Bower
 Actually, for my case, we are just serializing with json, we aren't 
pickling it...   We send the updated values back to the client as json 
text.  Later, the client will resend the final json serialized order 
when it requests a 'save'.   In other words, I don't need to worry about 
the session or pickling it... the server is stateless.


Would that make a difference to your approach?


On 9/6/2010 2:35 PM, Michael Bayer wrote:

On Sep 6, 2010, at 2:11 PM, Kent Bower wrote:


Also, I was hoping you would tell me whether this would be a candidate for 
subclassing InstrumentedAttribute?  Would that make more sense, or providing custom 
__getstate__ & __setstate__?

__getstate__ / __setstate__ are pretty much what I like to use for pickle 
stuff, unless some exotic situation makes me have to use __reduce__.   One 
problem with the recipe is that there's no 'deferred' loading of attributes.   
So in that sense playing with InstrumentedAttribute would give you a chance to 
put a callable in there that does what you want.

There is also the possibility that __setstate__ can load up callables into the 
instance_state using state.set_callable().   This is a callable that triggers 
when you access the attribute that is otherwise None.   There's a little bit of 
fanfare required to get that callable to assign to the attribute in the right 
way.   Attached is an example of that.   This is all a little more shaky since 
the state/callable API isn't really public.  It hasn't changed for a while but 
there's no guarantee.








Thanks for your help, hopefully I'll be able to contribute such a recipe.

Kent




Since sqla won't load that for me in the case of transient, I need to load the 
relation manually (unless you feel like enhancing that as well).

its not an enhancement - it was a broken behavior that was specifically 
removed.   The transient object has no session, so therefore no SQL can be 
emitted - there's no context established.




Now I can manually emulate the obj being persistent with your changes for

On Sep 6, 2010, at 10:58 AM, Michael Bayer mike...@zzzcomputing.com wrote:


On Sep 6, 2010, at 9:06 AM, Kent wrote:


with_parent seems to add a join condition.

OK, so I guess you read the docs which is why you thought it joined and why you didn't 
realize it doesn't work for transient.  r20b6ce05f194 changes all that so that 
with_parent() accepts transient objects and will do the look at the 
attributes thing.   The docs are updated as this method does use the lazy loader 
SQL mechanism, not a join.




Is there a way to get at
the query object that would be rendered from a lazy load (or what
subqueryload would render on the subsequent load), but on a
transient object, if i supply the session?

even though not recommended, can it make sqla believe my transient
object is detached by setting its state key?

There are reasons i do not want to add this to the session and
disabling autoflush would also cause problems.



On Sep 3, 9:58 am, Michael Bayer mike...@zzzcomputing.com wrote:

On Sep 3, 2010, at 9:36 AM, Kent wrote:


For the case of customerid = '7', that is a simple problem, but when
it is a more complex join condition, we only wanted to define this
condition in one single place in our application (namely, the orm).
That way, if or when that changes, developers don't need to search for
other places in the app that needed to manually duplicate the logic of
the orm join condition.
If I supplied the DBSession to sqla, it would know how to create the
proper Query object for this lazyload.  Can you point me in the right
direction (even if where you point me is not currently part of the
public API)?

Query has the with_parent() method for this use case.






Thanks again,
Kent

Re: [sqlalchemy] conditional joins/relationships

2010-08-20 Thread Kent Bower

 Now, thank you for the quick reply.


I might not be understanding what you need here, but assuming you want 
your API to look like this:


product.sales('2010-08-20')
  [Sale, Sale, ...]



Right, that is the idea.

First of all, is that enough?  If you just want to dynamically access 
an extra-filtered list of Sale objects then you could easily do that 
with either a function or a method on Product, etc.  I.e. it doesn't 
have to be provided by the ORM necessarily.


True, but I feel I am losing functionality associated with the orm to be 
able to reference that like the rest of the relation properties.  Maybe 
a larger reason for this is that, depending on the data, it may be very 
inefficient to bring back all the rows and filter them in Python when 
the database could have easily done so for you.  Still, I appreciate the 
idea, and I'll kick it around a bit...




Or do you really need for this to be an ORM (relationship) property?  
If so then I probably am not the one with your answer because I've not 
done much of that, but perhaps you could use the lazy='dynamic' 
parameter to relationship() ?




Hmm... it looks like maybe this is the type of situation lazy='dynamic' 
is meant for.  Thanks for pointing me that way.
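A sketch of the lazy='dynamic' suggestion for the Product/Sale example (column names follow the thread; the rest is invented): the relationship attribute is a Query, so the date filter runs in the database rather than in Python.

```python
import datetime

from sqlalchemy import Column, Date, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Product(Base):
    __tablename__ = "product"
    productid = Column(Integer, primary_key=True)
    # lazy="dynamic": prod.sales is a Query, not a loaded collection.
    sales = relationship("Sale", lazy="dynamic")

class Sale(Base):
    __tablename__ = "sale"
    saleid = Column(Integer, primary_key=True)
    productid = Column(Integer, ForeignKey("product.productid"))
    startingdate = Column(Date)
    endingdate = Column(Date)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add(Product(productid=1))
session.add_all([
    Sale(saleid=1, productid=1,
         startingdate=datetime.date(2010, 8, 1),
         endingdate=datetime.date(2010, 8, 31)),
    Sale(saleid=2, productid=1,
         startingdate=datetime.date(2010, 9, 1),
         endingdate=datetime.date(2010, 9, 30)),
])
session.commit()

prod = session.get(Product, 1)
day = datetime.date(2010, 8, 20)
# The database, not Python, applies the extra date filter.
on_sale = prod.sales.filter(Sale.startingdate <= day,
                            Sale.endingdate >= day).all()
```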




On Fri, 2010-08-20 at 11:58 -0700, Kent wrote:

I've run into this a variant of this same problem several times now,
so I want to ask if you know of a good way to solve the problem.

Some relation()s are based on extra criteria (besides primary key
joins), but otherwise would be nice to work just like normal relation
properties.

A simplified example:

   Say you have two entities: *Product and Sale*:

Product (productid, regularprice, description)

and

Sale (productid, startingdate, endingdate, saleprice)

I would like to be able to add a relationship() to the Product mapper
for a 'sales' collection, but be able to *pass in the date*, so that I
can populate a Product instance's 'sales' property, like prod.sales,
as a [list of Sale objects joined by primary key with filter of passed
date].

I believe I can get there with contains_eager(), but then the problem
is I need to redefine this query over each place I use it, instead of
being able to tell the *mapper* about it and treat it like a normal
relationship.

Is there an elegant way to accomplish this type of conditional join/
relation?

Thanks in advance,
Kent








Re: [sqlalchemy] SqlAlchemy logging FAQ

2010-08-18 Thread Kent Bower
 Ah. Then the problem is in turbogears (which creates a default .ini 
file with):



#echo shouldn't be used together with the logging module.
sqlalchemy.echo = false
sqlalchemy.echo_pool = false
sqlalchemy.pool_recycle = 3600

...logging sections...



Instead, its default .ini file should leave the flag alone.

I've directed TG group to this thread, thanks for your help.


On 8/18/2010 10:58 AM, Michael Bayer wrote:

On Aug 17, 2010, at 11:45 AM, Kent wrote:


The logging FAQ states: "Therefore, when using Python logging, ensure
all echo flags are set to False at all times, to avoid getting
duplicate log lines."
http://www.sqlalchemy.org/docs/dbengine.html#configuring-logging

Is this no longer correct information?

it is correct



I am using turbogears (which creates the engine with
engine_from_config()) and with the config file like this:

sqlalchemy.echo = false
sqlalchemy.echo_pool = false
sqlalchemy.pool_recycle = 3600

I cannot get engine logging no matter what I put in the logging config
file.  echo is turned off and sqlalchemy won't even hit the
logger.debug() method.

assuming you're on the pylons version of turbogears, you leave the echo flags 
alone.  You configure logging in the [logging] section of your .ini file, using 
loggers and log levels.   I use Pylons and this all works very well.
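For reference, the same effect as the .ini [logging] section can be had directly with the standard logging module, leaving the echo flags alone:

```python
import logging

# Equivalent in spirit to a [logging] section in a Pylons/TurboGears
# .ini file: attach a handler, then raise the sqlalchemy.engine logger
# to INFO (SQL statements) or DEBUG (statements plus result rows),
# while create_engine(..., echo=False) stays untouched.
logging.basicConfig()
logging.getLogger("sqlalchemy.engine").setLevel(logging.INFO)
```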







Re: [sqlalchemy] Oracle 8i supports RETURNING clause (but sqla running 8i doesn't)

2010-08-16 Thread Kent Bower

 I did find a reference for oracle 8.0 that supports returning clause...

I've moved this to a ticket request in trac:
http://www.sqlalchemy.org/trac/ticket/1878

On 8/14/2010 11:14 AM, Kent Bower wrote:
Not a myth, I'm using them (via sqla).  Simple views (just one table) 
oracle figures out writes to all alone.  For more complex views (joins 
of tables), you can use an INSTEAD OF trigger to write to the view 
(just pl/sql trigger where you tell Oracle what you want to happen 
when someone inserts/updates/deletes data in your view)... also using 
them, not a myth.


Anyway, here is documentation from oracle.com, so at least we know 
8.1.7 supported it:
http://download-west.oracle.com/docs/cd/A87860_01/doc/server.817/a85397/state11c.htm#2066261 



(I think they added fancier bulk returns and the like in later 
versions...)


Kent


On 8/14/2010 10:42 AM, Michael Bayer wrote:

On Aug 14, 2010, at 9:34 AM, Kent wrote:


I'm connecting to a legacy database, some tables that I need to map in
sqla have no primary key.  I've created views for such tables that map
the ROWID as the primary key.  SQLAlchemy does a RETURNING clause when
I am connected to a more recent Oracle database, but not when
connected to 8i.

I've pasted the exact same code sqla compiled for newer oracle into an
8i session and it works fine:

{{{
Connected to:
Oracle8i Enterprise Edition Release 8.1.7.4.0 - Production
With the Partitioning option
JServer Release 8.1.7.4.0 - Production

SQL  show release
release 801070400
}}}
{{{
...(set up variables in sqlplus)...
SQL  INSERT INTO artransarchive (customerid, companyid, orderid,
adjinvoice, transactioncodeid, status, paymenttypeid, postdate,
amount, sourcesiteid, artype) VALUES
(:customerid, :companyid, :orderid, :adjinvoice, :transactioncodeid, 
:status, :paymenttypeid, :postdate, :amount, :sourcesiteid, :artype)

RETURNING artransarchive.artransarchiveid INTO :ret_0
  2  ;

1 row created.

SQL  print ret_0

RET_0
-
zbAADAAARwDAAZ

SQL  show release
release 801070400
SQL
}}}

Are there other problems that restricted the use of RETURNING with
oracle 8i, or was it believed to not be supported?  (Note that I don't
believe Oracle 8.0 supports it... I read it was implemented in 8i)
I probably went off of 
this: http://www.lattimore.id.au/2006/04/06/oracle-returning-clause/ 
which says it's as of 10g.   If we can get confirmation somewhere of 
when RETURNING was introduced, we can lower the 
auto-returning-on-insert threshold; though it's not clear that it's 
really a better method to use with Oracle, so perhaps here, since you 
have some special use, we would allow it to be enabled manually on any 
version.


though if you're mapping to a view, these are writeable views?   I 
thought those were only a myth.







Re: [sqlalchemy] Oracle 8i supports RETURNING clause (but sqla running 8i doesn't)

2010-08-14 Thread Kent Bower
Not a myth, I'm using them (via sqla).  Simple views (just one table) 
oracle figures out writes to all alone.  For more complex views (joins 
of tables), you can use an INSTEAD OF trigger to write to the view 
(just pl/sql trigger where you tell Oracle what you want to happen when 
someone inserts/updates/deletes data in your view)... also using them, 
not a myth.


Anyway, here is documentation from oracle.com, so at least we know 8.1.7 
supported it:
http://download-west.oracle.com/docs/cd/A87860_01/doc/server.817/a85397/state11c.htm#2066261 



(I think they added fancier bulk returns and the like in later versions...)

Kent


On 8/14/2010 10:42 AM, Michael Bayer wrote:

On Aug 14, 2010, at 9:34 AM, Kent wrote:

   

I'm connecting to a legacy database, some tables that I need to map in
sqla have no primary key.  I've created views for such tables that map
the ROWID as the primary key.  SQLAlchemy does a RETURNING clause when
I am connected to a more recent Oracle database, but not when
connected to 8i.

I've pasted the exact same code sqla compiled for newer oracle into an
8i session and it works fine:

{{{
Connected to:
Oracle8i Enterprise Edition Release 8.1.7.4.0 - Production
With the Partitioning option
JServer Release 8.1.7.4.0 - Production

SQL  show release
release 801070400
}}}
{{{
...(set up variables in sqlplus)...
SQL  INSERT INTO artransarchive (customerid, companyid, orderid,
adjinvoice, transactioncodeid, status, paymenttypeid, postdate,
amount, sourcesiteid, artype) VALUES
(:customerid, :companyid, :orderid, :adjinvoice, :transactioncodeid, :status, 
:paymenttypeid, :postdate, :amount, :sourcesiteid, :artype)
RETURNING artransarchive.artransarchiveid INTO :ret_0
  2  ;

1 row created.

SQL  print ret_0

RET_0
-
zbAADAAARwDAAZ

SQL  show release
release 801070400
SQL
}}}

Are there other problems that restricted the use of RETURNING with
oracle 8i, or was it believed to not be supported?  (Note that I don't
believe Oracle 8.0 supports it... I read it was implemented in 8i)
 

I probably went off of 
this: http://www.lattimore.id.au/2006/04/06/oracle-returning-clause/ which says it's as of 
10g.   If we can get confirmation somewhere of when RETURNING was introduced, we can 
lower the auto-returning-on-insert threshold; though it's not clear that it's 
really a better method to use with Oracle, so perhaps here, since you have some special 
use, we would allow it to be enabled manually on any version.

though if you're mapping to a view, these are writeable views?   I thought 
those were only a myth.

   





Re: [sqlalchemy] Merge support with ConcreteInheritedProperty

2010-08-02 Thread Kent Bower

Excellent.  The 'pass' ConcreteInheritedProperty.merge() method works fine.

Thanks again.


On 8/1/2010 2:24 PM, Michael Bayer wrote:

On Jul 31, 2010, at 7:41 AM, Kent wrote:

   

When I call merge() on an ArTran object, the merge() method of a
ConcreteInheritedProperty 'artransarchiveid'
that *exists only on ArTranArchive* is being called.
 

The attribute artransarchiveid also exists on your base mapper, since the base mapper is mapped 
to a polymorphic union of all the subclasses.   The ConcreteInheritedProperty is placed on 
subclasses which don't have any way to map this attribute, i.e. all your subclasses that are on tables which 
don't have that column.   This is one of the awkwardnesses of the current system of a concrete map with no 
real superclass.

If the polymorphic union were generated only against columns that were common 
to all subclasses, you wouldn't have such attributes in the end result, but 
then each subclass would invoke a second SELECT to load its remaining 
attributes (I just tried this to confirm it is the case).  If the mapper knew 
that only a subset of columns are common to subclasses, and that another bunch 
of attributes were only for loading, that could make this cleaner still by not 
mapping those extra attributes on the base, but it hasn't been worked out how 
that would function or be configured.

   

(
As an aside, instead of getting a 'NotImplementedError' when
ConcreteInheritedProperty.merge() is called, I am getting TypeError:
merge() takes exactly 6 arguments (8 given)
 

anyway a do-nothing merge() has been added to ConcreteInheritedProperty in 
r3b1895a3b736 which allows merge() to work with a concrete mapping.

   





Re: [sqlalchemy] Polymorphic union of two sibling classes (no real inheritance)

2010-07-29 Thread Kent Bower

No, in fact, there is no ArTranBase table at all.

If I remove concrete inheritance, how do I issue a UNION of the two 
tables and have the objects polymorphically loaded?




On 7/29/2010 4:18 PM, Michael Bayer wrote:

On Jul 29, 2010, at 2:31 PM, Kent wrote:

   

I'm getting a messy error that could be a bug, but is very likely
related to my setup of a set of 2 polymorphic classes I am attempting
to map.

One entity is a transaction and the other is a transaction_archive
record.  The table structure is therefore very similar for both tables
and it seems to fit Concrete Table Inheritance, except there is no
'parent' entity.  Rather, they are sister tables.

What I have mostly works until I get into loading this union as a
relation to another table... then I'm having problems.

I couldn't clearly see the correct way to set up this when there is
no real inheritance, but rather sister entities.

Can you suggest how to correctly map these 2 tables?
 

it looks fine to me except you're asking for eager loading, and if you're 
querying from the ArTranBase you'd need to specify relationship() at that level 
(as well as on each child).  Example at 
http://www.sqlalchemy.org/docs/mappers.html#using-relationships-with-inheritance
 .

OTOH if you are not querying from ArTranBase, remove the usage of concrete 
inheritance altogether.

   


artran_union = polymorphic_union({
'artran': artrans_table,
'archive': artransarchive_table
}, 'type', 'artran_union')

artranbase_mapper = mapper(ArTranBase, artran_union,
with_polymorphic=('*', artran_union),
polymorphic_on=artran_union.c.type,
polymorphic_identity='ignored')


#  ArTran
--- #
mapper(ArTran, artrans_table, inherits=artranbase_mapper,
concrete=True, polymorphic_identity='artran',
properties={'trancode': relation(TranCode,
cascade='refresh-expire,expunge', lazy=False),
'paymenttype': relation(PaymentType,
cascade='refresh-expire,expunge', lazy=False)}
)


#  ArTranArchive
--- #
mapper(ArTranArchive, artransarchive_table,
inherits=artranbase_mapper,
concrete=True, polymorphic_identity='archive',
properties={'trancode': relation(TranCode,
cascade='refresh-expire,expunge', lazy=False),
'paymenttype': relation(PaymentType,
cascade='refresh-expire,expunge', lazy=False)}
)

Thanks in advance.
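For readers reconstructing this setup: polymorphic_union() is just a selectable factory and can be inspected without any mappers or a database. A self-contained sketch in current SQLAlchemy syntax, with simplified two-column tables standing in for the real sister tables:

```python
from sqlalchemy import Table, Column, Integer, Numeric, MetaData, select
from sqlalchemy.orm import polymorphic_union

metadata = MetaData()

# Simplified stand-ins for the two sister tables in this thread
artrans_table = Table(
    "artrans", metadata,
    Column("artranid", Integer, primary_key=True),
    Column("amount", Numeric),
)
artransarchive_table = Table(
    "artransarchive", metadata,
    Column("artranid", Integer, primary_key=True),
    Column("amount", Numeric),
)

# Builds a UNION ALL of the two tables, adding a literal 'type'
# discriminator column to each branch, aliased as "artran_union"
artran_union = polymorphic_union({
    "artran": artrans_table,
    "archive": artransarchive_table,
}, "type", "artran_union")

# Selecting from the union shows the generated discriminated UNION ALL
stmt = select(artran_union)
print(stmt)
```

The resulting selectable is what gets handed to the base mapper as its mapped selectable, with `polymorphic_on` pointing at the generated `type` column.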




Re: [sqlalchemy] Re: Polymorphic union of two sibling classes (no real inheritance)

2010-07-29 Thread Kent Bower

Right.  I understand.  Thanks for pointing that out, you are correct.

My bigger concern was getting the ArTranBase mapper correct.  Apparently 
there is no need in this case to specify with_polymorphic= in the 
mapper.  Did I miss documentation on using 'polymorphic_union' without 
with_polymorphic=?  That seems to be working, I was just looking for 
confirmation that this is a supported use-case.




On 7/29/2010 4:51 PM, Michael Bayer wrote:

What I meant was, if you want to say session.query(ArTranBase), which it 
appears that you do, then you are querying against ArTranBase.

Since it seems like you want the polymorphic_union here, when you query ArTranBase and you want it 
to eagerly load trancode and paymenttype, it would need to have a 
relation() on the ArTranBase mapper so that it knows what to join.




On Jul 29, 2010, at 4:46 PM, Kent wrote:

   

This seems to work, but I didn't find examples of this.  Does this
look correct (assuming there is no parent table in the database and
all I really want is 2 'normal' mappers and a 3rd that performs a
polymorphic_union)?

==

artran_union = polymorphic_union({
'artran': artrans_table,
'archive': artransarchive_table
}, 'type', 'artran_union')

artranbase_mapper = mapper(ArTranBase, artran_union,
polymorphic_on=artran_union.c.type)


#  ArTran
--- #
mapper(ArTran, artrans_table, inherits=artranbase_mapper,
concrete=True, polymorphic_identity='artran',
properties={'trancode': relation(TranCode,
cascade='refresh-expire,expunge', lazy=False),
'paymenttype': relation(PaymentType,
cascade='refresh-expire,expunge', lazy=False)}
)


#  ArTranArchive
--- #
mapper(ArTranArchive, artransarchive_table,
inherits=artranbase_mapper,
concrete=True, polymorphic_identity='archive',
properties={'trancode': relation(TranCode,
cascade='refresh-expire,expunge', lazy=False),
'paymenttype': relation(PaymentType,
cascade='refresh-expire,expunge', lazy=False)}
)



On Jul 29, 4:20 pm, Kent Bowerk...@retailarchitects.com  wrote:
 

No, in fact, there is no ArTranBase table at all.

If I remove concrete inheritance, how do I issue a UNION of the two
tables and have the objects polymorphically loaded?

On 7/29/2010 4:18 PM, Michael Bayer wrote:

   

On Jul 29, 2010, at 2:31 PM, Kent wrote:
 
   

I'm getting a messy error that could be a bug, but is very likely
related to my setup of a set of 2 polymorphic classes I am attempting
to map.
   
   

One entity is a transaction and the other is a transaction_archive
record.  The table structure is therefore very similar for both tables
and it seems to fit Concrete Table Inheritance, except there is no
'parent' entity.  Rather, they are sister tables.
   
   

What I have mostly works until I get into loading this union as a
relation to another table... then I'm having problems.
   
   

I couldn't clearly see the correct way to set up this when there is
no real inheritance, but rather sister entities.
   
   

Can you suggest how to correctly map these 2 tables?
   
   

it looks fine to me except you're asking for eager loading, and if you're 
querying from the ArTranBase you'd need to specify relationship() at that level 
(as well as on each child).  Example 
athttp://www.sqlalchemy.org/docs/mappers.html#using-relationships-with-
 
   

OTOH if you are not querying from ArTranBase, remove the usage of concrete 
inheritance altogether.
 
   


artran_union = polymorphic_union({
 'artran': artrans_table,
 'archive': artransarchive_table
 }, 'type', 'artran_union')
   
   

artranbase_mapper = mapper(ArTranBase, artran_union,
with_polymorphic=('*', artran_union),
 polymorphic_on=artran_union.c.type,
polymorphic_identity='ignored')
   
   

#  ArTran
--- #
mapper(ArTran, artrans_table, inherits=artranbase_mapper,
 concrete=True, polymorphic_identity='artran',
 properties={'trancode': relation(TranCode,
 cascade='refresh-expire,expunge', lazy=False),
 'paymenttype': relation(PaymentType,
 cascade='refresh-expire,expunge', lazy=False)}
 )
   
   

#  ArTranArchive
--- #
mapper(ArTranArchive, artransarchive_table,
inherits=artranbase_mapper,
 concrete=True, polymorphic_identity='archive',
 properties={'trancode': relation(TranCode,
 cascade='refresh-expire,expunge', lazy=False),
 

Re: [sqlalchemy] EXISTS statements with any(), but with a join

2010-06-04 Thread Kent Bower

Nice.  That might come in very useful, thanks.

However, I can't quite get the second approach to work:

exq=DBSession.query(Inventory).join(Location).filter(Location.siteid=='03').correlate(Product).subquery()
DBSession.query(Product).filter(exists(exq)).all()

Traceback (most recent call last):
  File "<console>", line 1, in <module>
  File 
/home/rarch/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.6.1-py2.6-linux-x86_64.egg/sqlalchemy/sql/expression.py, 
line 512, in exists

return _Exists(*args, **kwargs)
  File 
/home/rarch/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.6.1-py2.6-linux-x86_64.egg/sqlalchemy/sql/expression.py, 
line 2806, in __init__

s = select(*args, **kwargs).as_scalar().self_group()
  File 
/home/rarch/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.6.1-py2.6-linux-x86_64.egg/sqlalchemy/sql/expression.py, 
line 237, in select
return Select(columns, whereclause=whereclause, from_obj=from_obj, 
**kwargs)
  File 
/home/rarch/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.6.1-py2.6-linux-x86_64.egg/sqlalchemy/sql/expression.py, 
line 3637, in __init__

raise exc.ArgumentError(columns argument to select() must 
ArgumentError: columns argument to select() must be a Python list or 
other iterable




>>> print exq
SELECT inventory.productid, inventory.siteid, inventory.locationid, 
inventory.receiptdate, inventory.qty, inventory.lastmovedate
FROM inventory JOIN locations ON locations.siteid = inventory.siteid AND 
locations.locationid = inventory.locationid

WHERE locations.siteid = :siteid_1

Any idea what I am doing wrong?


On 6/4/2010 4:37 PM, Conor wrote:

On 06/04/2010 03:13 PM, Kent wrote:

Say I track Inventory with three classes: Product, Inventory, Location

This any() expression yields the following output.

session.query(Product).filter(Product.inventory.any(Location.siteid==u'EAST')).all()

SELECT ...
FROM products
WHERE EXISTS (SELECT 1
FROM inventory, locations
WHERE products.productid = inventory.productid AND locations.siteid = %
(siteid_1)s)

What if I really need the inventory and locations tables to by JOINed.

SELECT ...
FROM products
WHERE EXISTS (SELECT 1
FROM inventory JOIN locations ON sqla_magic_clause
JOIN
WHERE products.productid = inventory.productid AND locations.siteid = %
(siteid_1)s)

Can I get to this with the any() expression?

   


The quickest way is to add another any() clause. Assuming your 
Inventory-Location relation is many-to-one or one-to-one (meaning you 
would use has() instead of any()), you can use this query:


session.query(Product).filter(
 Product.inventory.any(
 Inventory.location.has(Location.siteid==u'EAST'))).all()
   

If you don't like nesting another EXISTS clause in your SQL, you can 
create the inner query manually:


subq = session.query(Inventory)
subq = subq.join(Inventory.location)
subq = subq.filter(Inventory.productid == Product.productid)
subq = subq.filter(Location.siteid == u'EAST')
subq = subq.correlate(Product) # Probably not needed.
subq = subq.subquery()

session.query(Product).filter(exists(subq)).all()
   


-Conor
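For reference, here is one way to express the joined EXISTS directly in Core with current SQLAlchemy syntax, using the Select.exists() method rather than passing a Query subquery object to exists() (which appears to be the source of the "columns argument" error above in 0.6). Table shapes are simplified stand-ins, and the statement is only compiled, not executed:

```python
from sqlalchemy import Table, Column, Integer, String, MetaData, select

metadata = MetaData()

# Simplified stand-ins for the three tables in the question
products = Table(
    "products", metadata,
    Column("productid", Integer, primary_key=True),
)
locations = Table(
    "locations", metadata,
    Column("locationid", Integer, primary_key=True),
    Column("siteid", String(4)),
)
inventory = Table(
    "inventory", metadata,
    Column("productid", Integer),
    Column("locationid", Integer),
)

# Subquery: inventory JOIN locations, tied back to products
subq = (
    select(inventory.c.productid)
    .select_from(
        inventory.join(locations,
                       inventory.c.locationid == locations.c.locationid)
    )
    .where(inventory.c.productid == products.c.productid)
    .where(locations.c.siteid == "EAST")
)

# .exists() wraps the subquery in EXISTS (...); correlation against the
# enclosing SELECT on products happens automatically
stmt = select(products).where(subq.exists())
print(stmt)
```

The compiled SQL keeps the explicit JOIN inside the EXISTS subquery, which is what the original question was after.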




Re: [sqlalchemy] Re: For each begin_nested() call, a corresponding rollback() or commit() must be issued.

2010-05-31 Thread Kent Bower

On 5/30/2010 1:24 PM, Michael Bayer wrote:

On May 28, 2010, at 1:46 PM, Kent Bower wrote:

   

On 5/28/2010 10:08 AM, Michael Bayer wrote:
 

Is the pattern that you want to keep re-issuing a savepoint repeatedly using 
the same name ? Does that have some different usage of resources versus 
issuing/closing distinct savepoints with different names ?

   

As an aside, since oracle apparently has no 'release savepoint', I imagine the answer to 
your second question might be: yes, when using oracle with thousands of potential 
savepoints.  I think I've worked around this for now, so I'm not asking you to 
necessarily do anything about it, but it might be somewhat of a minor enhancement at some 
point.
Thanks again.
 


just so we're on the same page, oracle has no release, so if we are doing this:

begin_nested()
commit()
begin_nested()
commit()
begin_nested()
commit()


the ora conversation is

SAVEPOINT x
SAVEPOINT y
SAVEPOINT z

i.e. cheaper to reuse the same savepoint since we aren't rolling back to y or 
x.

in engine/base.py, seems like we would add logic to the __savepoint_seq 
counter to achieve this.
   
I think that is what I was driving at.  May/should be better with oracle 
(though I admit to not knowing how expensive a savepoint is, certainly 
if you've got thousands outstanding it would have been better to reissue 
the same one under the circumstances that it had already been 'released' 
with commit()).


Although sqla doesn't allow the user to specify the savepoint name, the 
same could be accomplished if support for the following were 
implemented:


Let me ask:

sp_a=begin_nested()
...
...
sp_b=begin_nested()
...
...
sp_c=begin_nested()
...
... #realize I need to go back to sp_a...
sp_a.rollback()

My understanding from the docs, is this is no good currently.  I need to 
first do commit(), commit() and then rollback() or three rollback()s, 
correct?


Again, I'm not requesting an enhancement, but would make the point that 
it would be more *intuitive* for sqla to figure that out for you, so you 
could arbitrarily say sp_a.rollback() and it knows to release or 
rollback all the nested transactions living under sp_a.


Kent








Re: [sqlalchemy] Re: For each begin_nested() call, a corresponding rollback() or commit() must be issued.

2010-05-31 Thread Kent Bower



On 5/31/2010 9:55 AM, Michael Bayer wrote:

On May 31, 2010, at 8:24 AM, Kent Bower wrote:

   

Although sqla doesn't allow the user to specify the savepoint name, the same 
could be accomplished if support for the following were implemented:

Let me ask:

sp_a=begin_nested()
...
...
sp_b=begin_nested()
...
...
sp_c=begin_nested()
...
... #realize I need to go back to sp_a...
sp_a.rollback()

My understanding from the docs, is this is no good currently.  I need to first 
do commit(), commit() and then rollback() or three rollback()s, correct?

Again, I'm not requesting an enhancement, but would make the point that it 
would be more *intuitive* for sqla to figure that out for you, so you could 
arbitrarily say sp_a.rollback() and it knows to release or rollback all the 
nested transactions living under sp_a.
 


that's how the engine-level API works - you get at a Transaction object that 
you can roll back anywhere in the chain (its up to you to know that the other 
Transaction objects in the middle are no longer valid).In the ORM we wanted 
to keep things more simplistic.   I would note that begin() and begin_nested() 
do return the SessionTransaction object though, which does itself include 
commit() and rollback() methods.   You might want to try the above pattern with 
it, as it might accomplish this already ( I didn't write the SessionTransaction 
accounting logic).

   
Ok, good news, that is already taken care of properly, as you 
suspected.  Both the database rollback to the correct savepoint is 
issued and also the session data contains the correct data from before sp_a.


This statement in the docs is what led me to believe this is not 
implemented:
"For each begin_nested() call, a corresponding rollback() or commit() 
must be issued."  (http://www.sqlalchemy.org/docs/session.html)
That is misleading, I guess, maybe there is a better way to state that 
requirement?  (I didn't need to issue a rollback or commit() for each 
begin_nested(), I am allowed to skip back to a previous one, very nice!)


Thanks
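The underlying database behavior here can be seen with nothing but the standard-library sqlite3 module: rolling back to an outer savepoint implicitly discards every savepoint established after it, which is why a commit()/rollback() pair per begin_nested() isn't strictly required. A minimal sketch (SQLite in place of Oracle, but the SAVEPOINT semantics shown are the same):

```python
import sqlite3

# isolation_level=None: the driver does no implicit transaction
# management, so we control BEGIN/SAVEPOINT/COMMIT ourselves
conn = sqlite3.connect(":memory:", isolation_level=None)
cur = conn.cursor()
cur.execute("CREATE TABLE t (x INTEGER)")

cur.execute("BEGIN")
cur.execute("SAVEPOINT sp_a")
cur.execute("INSERT INTO t VALUES (1)")
cur.execute("SAVEPOINT sp_b")
cur.execute("INSERT INTO t VALUES (2)")
cur.execute("SAVEPOINT sp_c")
cur.execute("INSERT INTO t VALUES (3)")

# Rolling back to sp_a discards sp_b and sp_c as well, undoing all
# three inserts made since sp_a was established
cur.execute("ROLLBACK TO SAVEPOINT sp_a")
cur.execute("COMMIT")

count = cur.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 0: all three inserts were undone
```

This mirrors what the Session is doing above when sp_a.rollback() is called with sp_b and sp_c still outstanding.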




Re: [sqlalchemy] Re: For each begin_nested() call, a corresponding rollback() or commit() must be issued.

2010-05-28 Thread Kent Bower



* Is there a way besides session.commit() to free the savepoint
resource?  Is there a way to provide the savepoint name, so I can use
the same name over?  Lastly, if you aren't the expert, where would you
point me, zope group or TG group?
 

you could issue the SAVEPOINT instructions manually if you don't like SQLA's 
abstraction of it, although then you don't get the Session's object management 
behavior around those savepoints (though it can be approximated).

   

Yeah, I've considered it.

I don't really understand what you're trying to achieve, such that SQLA is 
forcing you to create thousands of savepoints when you only need one. Is 
the pattern that you want to keep re-issuing a savepoint repeatedly using the same name ? 
  Does that have some different usage of resources versus issuing/closing distinct 
savepoints with different names ?
   
I'm fine with how SQLA is designed; it isn't really a SQLA issue, I was 
just appealing to you to see if you could think of a workaround.  I 
believe the problem is in the framework tools we are using, whether it 
is Zope or TG.  (I've posted to the zope group now to see if they intended 
to support savepoints and how, etc.)
Since that framework won't allow me to issue the command 
session.commit(), I cannot release the savepoints until the zope 
transaction commits, and by then I'm getting a python max recursion 
problem because there are so many outstanding savepoints for it to release.


So where I thought you could help is if there is another (even 
non-public) way to release the savepoint without calling session.commit()?


Ultimately I understand now that this is not a SQLA issue.

Thanks




Re: [sqlalchemy] Re: For each begin_nested() call, a corresponding rollback() or commit() must be issued.

2010-05-28 Thread Kent Bower


 From a quick reading of the 'transaction' package source, it looks like
you should be able to create savepoints and roll them back something
like this:

savepoint = transaction.savepoint()
try:
# ...
except:
savepoint.rollback()
raise

   


Thanks for the interest in helping.  I had already gone down that road 
and abandoned it when I realized it wasn't playing nicely with 
sqlalchemy session:


>>> a=TranCode()
>>> a.transactioncodeid='PMT'
>>> sp=transaction.savepoint()
>>> DBSession.add(a)
>>> DBSession.flush()
15:53:52,852 INFO  [sqlalchemy.engine.base.Engine.0x...88d0] SAVEPOINT 
sa_savepoint_1

15:53:52,853 INFO  [sqlalchemy.engine.base.Engine.0x...88d0] {}
/home/rarch/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.6.0.2-py2.6-linux-x86_64.egg/sqlalchemy/engine/default.py:500: 
SAWarning: Unicode type received non-unicode bind param value 'PMT'

  param[key.encode(encoding)] = processors[key](compiled_params[key])
15:53:52,855 INFO  [sqlalchemy.engine.base.Engine.0x...88d0] INSERT INTO 
trancodes (transactioncodeid, description, debitorcredit, posttogl, 
userdefined, codetype) VALUES (:transactioncodeid, :description, 
:debitorcredit, :posttogl, :userdefined, :codetype)
15:53:52,855 INFO  [sqlalchemy.engine.base.Engine.0x...88d0] 
{'description': None, 'userdefined': None, 'transactioncodeid': 'PMT', 
'codetype': None, 'debitorcredit': None, 'posttogl': None}

after flush!!
>>> sp.rollback()
15:53:52,859 INFO  [sqlalchemy.engine.base.Engine.0x...88d0] ROLLBACK TO 
SAVEPOINT sa_savepoint_1

15:53:52,860 INFO  [sqlalchemy.engine.base.Engine.0x...88d0] {}
Traceback (most recent call last):
  File "<console>", line 1, in <module>
  File 
/home/rarch/tg2env/lib/python2.6/site-packages/transaction-1.0a1-py2.6.egg/transaction/_transaction.py, 
line 662, in rollback

transaction._saveAndRaiseCommitishError() # reraises!
  File 
/home/rarch/tg2env/lib/python2.6/site-packages/transaction-1.0a1-py2.6.egg/transaction/_transaction.py, 
line 659, in rollback

savepoint.rollback()
  File 
/home/rarch/tg2env/lib/python2.6/site-packages/zope.sqlalchemy-0.4-py2.6.egg/zope/sqlalchemy/datamanager.py, 
line 147, in rollback
self.session.clear() # remove when Session.rollback does an 
attribute_manager.rollback

AttributeError: 'Session' object has no attribute 'clear'


Apparently I need to look into whether zope has a SQLA 0.6.0 compatible release 
yet... it looks to be trying to call session.clear().





Re: [sqlalchemy] Re: For each begin_nested() call, a corresponding rollback() or commit() must be issued.

2010-05-28 Thread Kent Bower

On 5/28/2010 10:08 AM, Michael Bayer wrote:
Is the pattern that you want to keep re-issuing a savepoint repeatedly 
using the same name ? Does that have some different usage of resources 
versus issuing/closing distinct savepoints with different names ?


As an aside, since oracle apparently has no 'release savepoint', I 
imagine the answer to your second question might be: yes, when using 
oracle with thousands of potential savepoints.I think I've worked 
around this for now, so I'm not asking you to necessarily do anything 
about it, but it might be somewhat of a minor enhancement at some point.

Thanks again.




Re: [sqlalchemy] For each begin_nested() call, a corresponding rollback() or commit() must be issued.

2010-05-27 Thread Kent Bower

Thank you, as always.

I failed to recognize I'm using the TurboGears foundation, which uses 
zope  transaction:


Is there a way to still accomplish this?

>>> DBSession.begin_nested()
<sqlalchemy.orm.session.SessionTransaction object at 0xe9d5150>

>>> DBSession.commit()
Traceback (most recent call last):
  File "<console>", line 1, in <module>
  File 
/home/rarch/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.6.0.2-py2.6-linux-x86_64.egg/sqlalchemy/orm/scoping.py, 
line 129, in do

return getattr(self.registry(), name)(*args, **kwargs)
  File 
/home/rarch/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.6.0.2-py2.6-linux-x86_64.egg/sqlalchemy/orm/session.py, 
line 655, in commit

self.transaction.commit()
  File 
/home/rarch/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.6.0.2-py2.6-linux-x86_64.egg/sqlalchemy/orm/session.py, 
line 368, in commit

self._prepare_impl()
  File 
/home/rarch/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.6.0.2-py2.6-linux-x86_64.egg/sqlalchemy/orm/session.py, 
line 344, in _prepare_impl

ext.before_commit(self.session)
  File 
/home/rarch/tg2env/lib/python2.6/site-packages/zope.sqlalchemy-0.4-py2.6.egg/zope/sqlalchemy/datamanager.py, 
line 201, in before_commit
    assert zope_transaction.get().status == 'Committing', "Transaction 
must be committed using the transaction manager"

AssertionError: Transaction must be committed using the transaction manager



On 5/27/2010 6:39 PM, Michael Bayer wrote:

On May 27, 2010, at 5:12 PM, Kent wrote:

   

The docs state For each begin_nested() call, a corresponding
rollback() or commit() must be issued.

In PostgreSql, according to my understanding, if there is ever a
database exception, a rollback must be issued.
This means a main reason to issue a SAVEPOINT is as a hedge against an
error.

As database transactions go, I want this entire thing to be a single
transaction, so now I don't know how to continue...

For example,

DBSession.begin_nested()  #savepoint
DBSession.add(obj)
try:
DBSession.flush()
except IntegrityError:
DBSession.rollback()
else:
# now what?  I do not want to commit, i have much
# more work to do than this which should be part of
# this transaction, but if I don't commit now,
# i need to issue 2 commits later()??

Is releasing the savepoint a choice instead of rolling() or commit()?
 

commit() releases the savepoint, if that's what's going on contextually.   It 
doesn't actually commit the outer transaction if you've last called 
begin_nested().

your block should be like:

session.begin_nested()
try:
 ...
 session.flush()
 session.commit()
except:
session.rollback()


   





Re: [sqlalchemy] Re: sqla 0.6.0 with oracle 8 (ORA-00907: missing right parenthesis)

2010-05-18 Thread Kent Bower

No, this is what was causing "ORA-12704: character set mismatch":

SQL> SELECT CAST('test unicode returns' AS NVARCHAR2(60)) AS anon_1 FROM 
DUAL;

SELECT CAST('test unicode returns' AS NVARCHAR2(60)) AS anon_1 FROM DUAL
*
ERROR at line 1:
ORA-12704: character set mismatch







On 5/18/2010 1:17 PM, Michael Bayer wrote:

Sorry, I meant NVARCHAR2.

SELECT CAST('test unicode returns' AS NVARCHAR2(60)) AS anon_1 FROM DUAL

works ?  or no ?



On May 18, 2010, at 12:17 PM, Kent wrote:

   

Oops, my bad, I didn't notice that your statement changed NVARCHAR2 to
NVARCHAR:

Here is that result:
=
SQL> SELECT CAST('test unicode returns' AS NVARCHAR(60)) AS anon_1
FROM DUAL
  2  ;
SELECT CAST('test unicode returns' AS NVARCHAR(60)) AS anon_1 FROM
DUAL
  *
ERROR at line 1:
ORA-00907: missing right parenthesis
=




On May 18, 12:15 pm, Kentk...@retailarchitects.com  wrote:
 

SELECT CAST('test unicode returns' AS NVARCHAR(60)) AS anon_1 FROM DUAL
 
   

then it seems like either that fails directly, or it fails when the client gets 
a hold of it.   See what happens with that statement.   Not sure that Oracle 8 
has NVARCHAR which might be the issue.
 

Please see the beginning of my previous post, that is the statement
creating the ORA-12704: character set mismatch

So, you intend to change exc.DBAPIError to Exception? (That is
actually exactly what I had just done as a workaround until I heard
back from you)
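As a side note for anyone reproducing this from Python today: the CAST expression in question can be built and compiled against the Oracle dialect without any database connection, using the explicit oracle.NVARCHAR2 type (current-API syntax, not the 0.6 code from this thread):

```python
from sqlalchemy import cast, literal, select
from sqlalchemy.dialects.oracle import NVARCHAR2
from sqlalchemy.dialects.oracle.base import OracleDialect

# Reproduce the CAST(... AS NVARCHAR2(60)) expression from the thread;
# literal() produces a bind parameter in place of the string literal
stmt = select(
    cast(literal("test unicode returns"), NVARCHAR2(60)).label("anon_1")
)

# Compile against the Oracle dialect to inspect the emitted SQL
sql = str(stmt.compile(dialect=OracleDialect()))
print(sql)
```

Whether the emitted statement runs is then purely a server-version question, as the sqlplus sessions above demonstrate.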




Re: [sqlalchemy] Re: is sqlalchemy-migrate the right way to go?

2010-05-17 Thread Kent Bower

Thanks, that looks like it's conceptually what we are hoping for, at least.

On 5/17/2010 3:58 PM, Tamás Bajusz wrote:

On Mon, May 17, 2010 at 9:49 PM, Kentk...@retailarchitects.com  wrote:
   

Ideally, I agree.  Practically speaking, though, we came from a
company where dozens and dozens of developers worked on the system and
it was structured exactly this way (a master file and a series of
incremental upgrade scripts).  It was always getting messed up between
the two sets of schema definitions until eventually we developed a
schema comparison tool and all those problems seemed to vanish.

I'm obviously not saying SQLAlchemy needs to provide this, but just
trying to make a case for its usefulness.  Thanks for your input.
 

I've never tried it, but maybe miruku is what you are looking for:
http://bitbucket.org/gjhiggins/miruku/wiki/Home

   





Re: [sqlalchemy] allow_partial_pks=False goes to database on partial pk

2010-05-11 Thread Kent Bower

http://www.sqlalchemy.org/trac/ticket/1797


On 5/11/2010 9:50 AM, Michael Bayer wrote:

just put up a new ticket, I don't really have time to attend to these right
now.  You're right, we'd probably move the partial pk's check to the ultimate
call that un-expires things.


On May 10, 2010, at 5:21 PM, Kent wrote:

   

See this script, running 0.6.0:

==
from sqlalchemy import *
from sqlalchemy.orm import *

engine = create_engine('oracle://user:passw...@localhost:1521/xe?
use_ansi=False',echo=True)
metadata = MetaData()
Session = sessionmaker(bind=engine)
session = Session()

order_table = Table('orders', metadata,
    Column('orderid', Unicode, primary_key=True)
)

orderdetail_table = Table('orderdetails', metadata,
    Column('orderid', Unicode, ForeignKey('orders.orderid'),
           primary_key=True),
    Column('lineid', Integer, primary_key=True),
    Column('saleprice', Numeric, nullable=False)
)

class Order(object):
pass

class OrderDetail(object):
pass


order_mapper = mapper(Order, order_table,
    properties=dict(orderdetails=relation(OrderDetail,
        cascade='all,delete-orphan',
        single_parent=True,
        lazy=False,
        backref=backref('parentorder',
            cascade='refresh-expire,expunge'))))

orderdetail_mapper = mapper(OrderDetail, orderdetail_table,
    allow_partial_pks=False)


o=Order()
o.orderid = u'SALE000' # not in database

line=OrderDetail()
line.lineid = 1 # not in database

o.orderdetails = [line]
merged=session.merge(o)

merged.orderdetails[0].saleprice  # referencing this, with
allow_partial_pks=False should not go to database
==

Following is the
pasted output, starting with merge():


 

merged=session.merge(o)
   

2010-05-06 09:44:49,648 INFO sqlalchemy.engine.base.Engine.0x...5790
SELECT USER FROM DUAL
2010-05-06 09:44:49,652 INFO sqlalchemy.engine.base.Engine.0x...5790
{}
2010-05-06 09:44:49,656 INFO sqlalchemy.engine.base.Engine.0x...5790
BEGIN
2010-05-06 09:44:49,657 INFO sqlalchemy.engine.base.Engine.0x...5790
SELECT orders.orderid AS orders_orderid, orderdetails_1.orderid AS
orderdetails_1_orderid, orderdetails_1.lineid AS
orderdetails_1_lineid, orderdetails_1.saleprice AS
orderdetails_1_saleprice
FROM orders, orderdetails orderdetails_1
WHERE orders.orderid = :param_1 AND orders.orderid =
orderdetails_1.orderid(+)
2010-05-06 09:44:49,657 INFO sqlalchemy.engine.base.Engine.0x...5790
{'param_1': u'SALE000'}
 

merged.orderdetails[0].saleprice  # referencing this, with 
allow_partial_pks=False should not go to database
   

2010-05-06 09:44:49,664 INFO sqlalchemy.engine.base.Engine.0x...5790
SELECT orderdetails.saleprice AS orderdetails_saleprice
FROM orderdetails
WHERE orderdetails.orderid IS NULL AND orderdetails.lineid = :param_1
2010-05-06 09:44:49,664 INFO sqlalchemy.engine.base.Engine.0x...5790
{'param_1': 1}
 
   



I think this is related to ticket #1789 in that saleprice shouldn't
have been expired on a pending instance anyway.

However, I wanted to report this because it seems to me there is
*also* potentially something broken with allow_partial_pks=False in
that, regardless of the expired status, I would have expected the
query to not be issued with a pk that is partially None.

Thanks in advance.




[sqlalchemy] Re: [Distutils] inability to pass setup.py command line arguments to dependency setups

2010-05-07 Thread Kent Bower
Just because there are configuration problems associated with adding a 
feature like the one I needed is no reason to abandon it when it can 
bring value to the tool if used correctly.  I considered some of those 
exact complications (what if it was already installed, etc.), and with 
my company's project, where I am using this useful tool in a circumstance 
you may overlook (it is perfectly acceptable to have such a feature, 
*despite* the list of complications you mention), such a feature would 
have been very valuable.


Since it is useful in my case, I understand it would be valuable for 
others as well.


(I don't appreciate the aggressive tone of your reply, though, nor do I 
see how my good faith efforts to help others warrant this... how did I 
possibly offend you with my post??)


On the other hand, I appreciate your "correct solution" as a good 
approach and I'll forward this idea to an author of SQLAlchemy for his 
consideration.



On 5/7/2010 1:33 PM, Glyph Lefkowitz wrote:

On May 7, 2010, at 9:09 AM, Kent wrote:

   

Consider the case where you want your setup to install third-party
software, but you want/need to pass an argument to the command line
that runs "python setup.py --argument install" or "python setup.py
--argument bdist_egg".

As far as I could research, this feature is unavailable.
 

And for good reason, I should think.  I really hope that nobody ever adds this 
feature.

If you require SQLAlchemy to be installed --with-cextensions, then what 
happens when your package is installed in an environment that already has SQLAlchemy 
installed *without* that flag?  Does it stomp on the existing installation?  What if the 
user installed one already with that flag and some other flags as well?  What if it's 
system-installed and you're doing a user-install?

Basically, compile-time and install-time options are a configuration nightmare.  They should 
represent only how and where a package is installed, not what features it has. The correct solution 
to your problem would be to get SQLAlchemy to fix its broken deployment setup and split itself into 
2 packages, SQLAlchemyCExtensions and SQLAlchemy, and then have your 
project depend on both, not to try to cram installer options into the dependency language.

For confirmation of this theory, you need look no further than the excruciating 
user-experience of source-based installation systems with 'variant' support, 
like gentoo's Portage and *BSD's Ports, versus the relatively non-excruciating 
experience of packaging systems which express compile-time options as different 
packages like Yum and Apt.

   





[sqlalchemy] Numeric not consistent between Postgres and Oracle in 0.6

2010-05-06 Thread Kent Bower
The following behavior has changed between 0.5.8 and 0.6.0.  I believe this 
is already known, or by design, from reading the docs and another post, but 
I want to point it out.


If you specify a column such as this:
    Column('saleprice', Numeric, nullable=False)
you get a 'numeric' type in PostgreSQL, which supports any type of 
number, integer or decimal.


On the other hand, in Oracle (at least 10g), you get 'NUMBER(38)' which 
is *only* integer.


(If I specify a precision, I don't have this problem, but there may be 
reasons to not want to specify precision.)

So my questions are:

Are we sure we are ok with this inconsistency (it makes it more 
difficult to write software that behaves identically regardless of 
database)?

Is there a replacement type that acts the same regardless of the engine?

Thanks in advance.
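One workaround hinted at in the message itself is giving Numeric an explicit precision and scale, which renders as a decimal-capable type on both backends. A minimal sketch against a recent SQLAlchemy (the table and column names here are illustrative, not from the original schema):

```python
from sqlalchemy import Column, MetaData, Numeric, Table

metadata = MetaData()

# With an explicit precision/scale, Oracle renders NUMBER(12, 2) rather
# than the integer-only NUMBER(38) default, and PostgreSQL renders
# NUMERIC(12, 2), so both backends accept decimal values consistently.
sales = Table(
    "sales", metadata,
    Column("saleprice", Numeric(precision=12, scale=2), nullable=False),
)

print(sales.c.saleprice.type.precision, sales.c.saleprice.type.scale)
```

The trade-off the poster notes still applies: you must commit to a fixed precision up front.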





Re: [sqlalchemy] disable RETURNING for specific primary key columns

2010-05-04 Thread Kent Bower

You think of everything? ;)  Thanks

On 5/4/2010 2:41 PM, Michael Bayer wrote:

On May 4, 2010, at 2:30 PM, Kent wrote:

   

I understand I can disable RETURNING for an engine with
'implicit_returning=False'

Is there a way to do this for certain primary key columns only, but
not disabled engine-wide?

(We've done some flexible view creation so that our app (sqlalchemy)
sees the same database metadata whether we are on a legacy oracle
system or a postgres database.  In some cases the 'tables' are views
that don't support RETURNING, so I'd like to turn it off for those
cases only...)
 

Table() accepts it as a keyword argument as well.
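A sketch of that per-table keyword, assuming a recent SQLAlchemy (the view name is made up for illustration):

```python
from sqlalchemy import Column, Integer, MetaData, Table

metadata = MetaData()

# Hypothetical view-backed "table": RETURNING is disabled for this
# Table only, while other tables on the same engine still use it.
legacy_view = Table(
    "legacy_order_view", metadata,
    Column("orderid", Integer, primary_key=True),
    implicit_returning=False,
)

print(legacy_view.implicit_returning)
```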


   





Re: [sqlalchemy] SQL compilation broken in 0.6 for Oracle use_ansi=False

2010-05-03 Thread Kent Bower
Note that this was ok in 0.5.8, so it must have been something related 
to 0.6...


On 5/3/2010 12:04 PM, Michael Bayer wrote:

You can file a ticket for this but note that not everything is possible with 
use_ansi=False.   that mode of operation is somewhat miraculous that it even 
exists.   this may be a trivial issue but I cant assert that until I've had 
time to study it.


On May 3, 2010, at 8:47 AM, Kent wrote:

   

The following script works as expected (also in 0.5.8) with
use_ansi=True.  However, with use_ansi=False, the SQL is ill-formed:

===
from sqlalchemy import *
from sqlalchemy.orm import *

engine = create_engine('oracle://arc:a...@localhost:1521/xe?
use_ansi=False',echo=True)
metadata = MetaData()
Session = sessionmaker(bind=engine)
session = Session()

orders_table = Table('orders', metadata,
    Column('orderid', Unicode, primary_key=True)
)

orderdetails_table = Table('orderdetails', metadata,
    Column('orderid', Unicode, ForeignKey('orders.orderid'),
           primary_key=True),
    Column('lineid', Integer, primary_key=True),
    Column('saleprice', Numeric, nullable=False),
    Column('qtyordered', Numeric)
)

class Order(object):
pass

class OrderDetail(object):
pass

order_mapper = mapper(Order, orders_table,
    properties=dict(orderdetails=relation(OrderDetail,
        cascade='all,delete-orphan',
        single_parent=True,
        lazy=False,
        backref=backref('parentorder',
            cascade='refresh-expire,expunge'))))

# --- totalsale ---
# note, I needed to add aliases because if you join with these tables
# in the rest of the query, we need it to be un-ambiguous
od_alias = orderdetails_table.alias('od__a')
order_mapper.add_property('totalsale',
    # totalsale is an inline view column
    column_property(
        select([func.sum(od_alias.c.qtyordered *
                         od_alias.c.saleprice)],
               orders_table.c.orderid == od_alias.c.orderid
        ).label('totalsale')))


orderdetail_mapper = mapper(OrderDetail, orderdetails_table)

#metadata.create_all(engine)

o=session.query(Order).all()
===



Expected SQL (use_ansi=True):
---
2010-04-30 21:09:10,359 INFO sqlalchemy.engine.base.Engine.0x...9450
SELECT orders.orderid AS orders_orderid, (SELECT sum(od__a.qtyordered
* od__a.saleprice) AS sum_1
FROM orderdetails od__a
WHERE orders.orderid = od__a.orderid) AS totalsale,
orderdetails_1.orderid AS orderdetails_1_orderid,
orderdetails_1.lineid AS orderdetails_1_lineid,
orderdetails_1.saleprice AS orderdetails_1_saleprice,
orderdetails_1.qtyordered AS orderdetails_1_qtyordered
FROM orders LEFT OUTER JOIN orderdetails orderdetails_1 ON
orders.orderid = orderdetails_1.orderid
2010-04-30 21:09:10,360 INFO sqlalchemy.engine.base.Engine.0x...9450
{}


SQL with use_ansi=False:
--
sqlalchemy.exc.DatabaseError: (DatabaseError) ORA-00936: missing
expression
'SELECT orders.orderid AS orders_orderid, (SELECT
sum(od__a.qtyordered * od__a.saleprice) AS sum_1 \nFROM orderdetails
od__a \nWHERE orders.orderid = od__a.orderid AND ) AS totalsale,
orderdetails_1.orderid AS orderdetails_1_orderid,
orderdetails_1.lineid AS orderdetails_1_lineid,
orderdetails_1.saleprice AS orderdetails_1_saleprice,
orderdetails_1.qtyordered AS orderdetails_1_qtyordered \nFROM orders,
orderdetails orderdetails_1 \nWHERE orders.orderid =
orderdetails_1.orderid(+)' {}


Note the stray "AND )" before "AS totalsale" is causing the "missing
expression" database error.







Re: [sqlalchemy] SQL compilation broken in 0.6 for Oracle use_ansi=False

2010-05-03 Thread Kent Bower

Thank you, sir.


On May 3, 2010, at 7:33 PM, Michael Bayer <mike...@zzzcomputing.com> wrote:


OK, the issue here is entirely un-subtle and it's a little weird that  
none of the tests we have hit upon it.  This is fixed in tip.



On May 3, 2010, at 12:06 PM, Kent Bower wrote:

Note that this was ok in 0.5.8, so it must have been something  
related to 0.6...


On 5/3/2010 12:04 PM, Michael Bayer wrote:
You can file a ticket for this but note that not everything is  
possible with use_ansi=False.   that mode of operation is somewhat  
miraculous that it even exists.   this may be a trivial issue but  
I cant assert that until I've had time to study it.




Re: [sqlalchemy] upgrade to SQLA 0.6

2010-05-01 Thread Kent Bower

May help others:

instead of  col in const.columns

I needed col in list(const.columns)

to avoid: sqlalchemy.exc.ArgumentError: __contains__ requires a string 
argument with the ColumnCollection object




On 4/30/2010 4:16 PM, Michael Bayer wrote:

Kent wrote:
   

I did read 0.6 Migration document.

I was using the contains_column method of ForeignKeyConstraint.
Apparently removed?

AttributeError: 'ForeignKeyConstraint' object has no attribute
'contains_column'

Easy workaround or replacement call?
 

it has a columns collection where you can say col in const.columns.




   




Re: [sqlalchemy] upgrade to SQLA 0.6

2010-05-01 Thread Kent Bower
Wait a second...  ForeignKeyConstraint doesn't but PrimaryKeyConstraint 
does?


(Not trying to be sassy, just wondering why I got the error in the first 
place)




On 5/1/2010 2:11 PM, Michael Bayer wrote:

On May 1, 2010, at 2:04 PM, Kent Bower wrote:

   

May help others:

instead of  col in const.columns

I needed col in list(const.columns)

to avoid: sqlalchemy.exc.ArgumentError: __contains__ requires a string 
argument with the ColumnCollection object
 

don't hate me but const.columns should have a contains_column() method...


   



On 4/30/2010 4:16 PM, Michael Bayer wrote:
 

Kent wrote:

   

I did read 0.6 Migration document.

I was using the contains_column method of ForeignKeyConstraint.
Apparently removed?

AttributeError: 'ForeignKeyConstraint' object has no attribute
'contains_column'

Easy workaround or replacement call?

 

it has a columns collection where you can say col in const.columns.





   




Re: [sqlalchemy] upgrade to SQLA 0.6

2010-05-01 Thread Kent Bower
Oh, I get it, you are saying ColumnCollection has a contains_column() 
method.



On 5/1/2010 4:05 PM, Kent Bower wrote:
Wait a second...  ForeignKeyConstraint doesn't but 
PrimaryKeyConstraint does?


(Not trying to be sassy, just wondering why I got the error in the 
first place)




On 5/1/2010 2:11 PM, Michael Bayer wrote:

On May 1, 2010, at 2:04 PM, Kent Bower wrote:


May help others:

instead of  col in const.columns

I needed col in list(const.columns)

to avoid: sqlalchemy.exc.ArgumentError: __contains__ requires a 
string argument with the ColumnCollection object
dont hate me but const.columns should have a contains_column() 
method...






On 4/30/2010 4:16 PM, Michael Bayer wrote:

Kent wrote:


I did read 0.6 Migration document.

I was using the contains_column method of ForeignKeyConstraint.
Apparently removed?

AttributeError: 'ForeignKeyConstraint' object has no attribute
'contains_column'

Easy workaround or replacement call?

it has a columns collection where you can say col in 
const.columns.










Re: [sqlalchemy] upgrade to SQLA 0.6

2010-05-01 Thread Kent Bower

Yeah, this is a bit awkward.

My code used to be able to say:
=
constraint_list = list(tab.constraints)
for cnsts in constraint_list:
    if cnsts.contains_column(col):
=

And I can't replace that with this:
=
constraint_list = list(tab.constraints)
for cnsts in constraint_list:
    if cnsts.columns.contains_column(col):
=

because cnsts.columns is a ColumnCollection for PrimaryKeyConstraint but 
it is a list for ForeignKeyConstraint.


So, in the end, this works for either:

=
constraint_list = list(tab.constraints)
for cnsts in constraint_list:
    if col in list(cnsts.columns):
=
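The uniform list() workaround above can be verified with a small sketch. This assumes a recent SQLAlchemy (where both constraint types now expose a ColumnCollection); the table names are illustrative:

```python
from sqlalchemy import Column, ForeignKey, Integer, MetaData, Table

metadata = MetaData()
parent = Table("parent", metadata, Column("id", Integer, primary_key=True))
child = Table(
    "child", metadata,
    Column("id", Integer, primary_key=True),
    Column("parent_id", Integer, ForeignKey("parent.id")),
)

def constraints_containing(tab, col):
    # list(constraint.columns) yields a plain list of Column objects for
    # every constraint type, so plain membership testing works uniformly.
    return [c for c in tab.constraints if col in list(c.columns)]

fk_hits = {type(c).__name__
           for c in constraints_containing(child, child.c.parent_id)}
pk_hits = {type(c).__name__
           for c in constraints_containing(child, child.c.id)}
print(fk_hits, pk_hits)
```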




On 5/1/2010 4:06 PM, Kent Bower wrote:
Oh, I get it, you are saying ColumnCollection has contains_column() 
method.



On 5/1/2010 4:05 PM, Kent Bower wrote:
Wait a second...  ForeignKeyConstraint doesn't but 
PrimaryKeyConstraint does?


(Not trying to be sassy, just wondering why I got the error in the 
first place)




On 5/1/2010 2:11 PM, Michael Bayer wrote:

On May 1, 2010, at 2:04 PM, Kent Bower wrote:


May help others:

instead of  col in const.columns

I needed col in list(const.columns)

to avoid: sqlalchemy.exc.ArgumentError: __contains__ requires a 
string argument with the ColumnCollection object
dont hate me but const.columns should have a contains_column() 
method...






On 4/30/2010 4:16 PM, Michael Bayer wrote:

Kent wrote:


I did read 0.6 Migration document.

I was using the contains_column method of ForeignKeyConstraint.
Apparently removed?

AttributeError: 'ForeignKeyConstraint' object has no attribute
'contains_column'

Easy workaround or replacement call?

it has a columns collection where you can say col in 
const.columns.










Re: [sqlalchemy] Re: making a transient copy of instances recursively at merge() time

2010-04-30 Thread Kent Bower
Thanks in advance for the discussion, hope I can continue without you 
feeling I'm taking too much of your time...


You said:

"If you are using merge(), and you're trying to use it to figure out 
what's changed, that implies that you are starting with a business 
object containing the old data and are populating it with the new data."


This isn't quite the case.  Rather, I am handed an object (actually a 
JSON text representation of an object) as it should look *now*.  I parse 
that object and create a transient SQLA object with the exact same 
properties and relations - recursively, very similar to what merge() is 
doing.


Then I hand that transient object to merge(), which nicely loads the 
original from the database (if it existed) or creates it if it didn't, 
and attaches it to the session for me.  What I am missing is what it 
looked like before.  Certainly, I could call session.query.get() before 
merge() so I would know what it looks like, but the main problem with 
that is matching up all the related objects with the post-merged ones, 
especially for list relations.  Plus, merge() is already looking up these 
objects for me, so it seemed smarter to harness what merge is already doing.


I've looked into the examples you provided.  Waiting for before_flush() 
won't work for our use case since we need the old _original object 
available *before* the flush as well as after.  I could, before the 
flush, use attributes.get_history(), but it quickly becomes 
problematic if the programmer needs to look in two completely different 
places, both attributes.get_history() for post-flush changes and also 
_original for pre-flush changes... somewhat defeating the purpose.
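For the per-attribute side, attributes.get_history() does expose the old value once the attribute is loaded. A small sketch against a recent SQLAlchemy declarative API (the Customer model and email values are invented for illustration):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, attributes, declarative_base

Base = declarative_base()

class Customer(Base):
    __tablename__ = "customer"
    id = Column(Integer, primary_key=True)
    email = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
customer = Customer(id=1, email="old@example.com")
session.add(customer)
session.flush()  # persist without expiring the loaded attributes

# Before this change is flushed, history still holds the old value.
customer.email = "new@example.com"
hist = attributes.get_history(customer, "email")
print(list(hist.added), list(hist.deleted))
```

As the message notes, this only covers pending scalar changes; once the flush completes, the history is reset.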


I was considering that a SessionExtension "def before_merge(session, 
merged)" hook would be handy for me... one that is called before the 
properties are updated on "merged".  Inside that hook, I could 
potentially clone "merged" and set it as an "_original" to itself, and 
check whether it was newly created by checking session.new (or a 
new_instance boolean could be passed to the hook).


However, in the end, I would *still* have the problem that, yes, each 
post-merge object has a detached reference to _original, but these 
cloned objects' relations would not be populated correctly among each 
other, so I couldn't call order._original.calc_points() and expect 
the relations to be populated correctly... that's what I'm trying to 
figure out how to accomplish; that would be ideal.  Almost like an 
alternate-universe snapshot in time of the object (now detached), 
before changes, with all relations still populated and where each 
*current* object has a reference to its old version.


Could I look it up in a separate session and then merge it into my main 
session?  Then, after that, merge the transient objects into my main 
session?  Would that get me closer?  Ideally, I wouldn't want it to need 
2 trips to the database.



On 4/29/2010 4:48 PM, Michael Bayer wrote:

Kent Bower wrote:
   

It is helpful to know what SQLA was designed for.

Also, you may be interested to know of our project as we are apparently
stretching SQLA's use case/design.  We are implementing a RESTful web
app (using TurboGears) on an already existent legacy database.  Since
our webservice calls (and TurboGears' in general, I believe) are
completely state-less, I think that precludes my being able to harness
descriptors, custom collections, and/or AttributeListeners, and to
trigger the desired calculations before things change because I'm just
handed the 'new' version of the object.
This is precisely why so much of my attention has been on merge()
because it, for the most part, works that out magically.
 

If you are using merge(), and you're trying to use it to figure out
what's changed, that implies that you are starting with a business
object containing the old data and are populating it with the new
data.   Custom collection classes configured on relationship() and
AttributeListeners on any attribute are both invoked during the merge()
process (but external descriptors notably are not).

merge() is nothing more than a hierarchical attribute-copying procedure
which can be implemented externally.   The only advantages merge() itself
offers are the optimized load=False option which you aren't using here,
support for objects that aren't hashable on identity (a rare use case),
and overall a slightly more inlined approach that removes a small amount
of public method overhead.

You also should be able to even recreate the _original object from
attribute history before a flush occurs.   This is what the versioned
example is doing for scalar attributes.
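The description of merge() as an attribute-copying procedure can be seen in miniature. This sketch uses a recent SQLAlchemy declarative API; the Order model here is invented for illustration, not the poster's schema:

```python
from sqlalchemy import Column, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Order(Base):
    __tablename__ = "orders"
    orderid = Column(String, primary_key=True)
    status = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
session.add(Order(orderid="SALE001", status="open"))
session.commit()

# merge() loads the persistent row with the same primary key and copies
# the transient object's state onto it; the transient object itself
# stays outside the session.
transient = Order(orderid="SALE001", status="closed")
merged = session.merge(transient)
print(merged.status, merged is transient)
```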


   



Re: [sqlalchemy] Re: making a transient copy of instances recursively at merge() time

2010-04-30 Thread Kent Bower



On 4/30/2010 11:17 AM, Michael Bayer wrote:

Kent Bower wrote:
   

Could I look it up in a separate session and then merge it into my main
session?  Then, after that, merge the transient objects into my main
session?  Would that get me closer?  Ideally, I wouldn't want it to need
2 trips to the database.
 

there's this implication here that you are getting these objects back from
the DB and doing some kind of compare operation.  It's true this isn't
much different from doing a get() and then running your compare.  The
only thing that's not there for that is the traversal, but that still
confuses me - if merge() is deep inside of a traversal, and its on
order.orderitems.foo.bar.bat and then you see a difference between the
existing and new bat objects, aren't you then traversing all the way up
to order to set some kind of flag ?   i.e. it seems like no matter what,
your system has some explicit traversal implemented, in this case moving
upwards, but if you were to implement your own merge()-like function, it
would be downwards moving.

   


We are writing software that allows hooks for a system administrator to 
write sqlalchemy-python code, which we compile and save as a BLOB in the 
database.  These are called "rules" and are loaded depending on the type 
of DB entity and executed inline.  This is partly what is motivating 
this.  I want a very easy mechanism for these rules to determine: how 
has this object changed?  What did this look like before these 
changes?  The system administrators can then do their own changes or 
not allow certain transactions to succeed by throwing errors back from 
the webserver to the client.  It is very flexible - and probably too 
powerful, but we will eventually do something about that hopefully.


This means I can't state all the reasons we might be wanting to compare 
the _original to the current.  I also want to allow calls to 
calculated methods such as calc_points() or calc_volume() or whatever.


It also means, I am not traversing back up the chain, because I don't 
know how far up to go.  The same code should kick in if I am saving a 
customer alone versus a customer that is embedded within an order which 
is being saved.  I may want to compare customer.email to 
customer._original.email and do something like unsubscribe the old email 
address.  Therefore it is important, in my mind, that each object 
maintains its own _original reference.  You don't know whether the 
object that was originally merged() was the customer or the order that 
contains the customer, as an example.


I can tell you I've traversed sqlalchemy objects numerous times already 
using the object's mapper and recursion, so I would completely agree in 
seeing the benefit of a traversal algorithm, although now I've done it 
so many times, it isn't too difficult anymore.  One thing I've noticed 
is that the cascade options/load options that live on the mapper aren't 
always universal to our application.  That is to say, if I am querying 
order headers, I want fewer relations eagerly loaded than if I am in an 
order get service call, where I want to pass the client a very deeply 
loaded object.  If I later expire an object, it might be nice to 
optionally follow the same cascading scheme as what originally loaded 
it, instead of what lives on the mapper.
So if I were doing the traversal algorithm, I'd consider two or maybe 
four optional parameters to pass:
eagerloads, lazyloads, (maybe also defers and undefers).  Maybe they are 
renamed as they apply more to cascading than actually loading.  That way 
the cascading of the traversal can be dynamic.


Thanks again.



if the model of how you see this happening is really that you need to
traverse your object deeply and record things in some central point as you
do it, this is best accomplished by using a traversal that is separate
from the internals of merge().

If there were a feature to be added to SQLAlchemy here, it would be a
generic traversal algorithm that anyone can use to create their own
traverse/merge and/or copy functions, since the traversal along an
object graph of mapped attributes is a standard thing (probably
depth-first and breadth-first would be options).There are components
of this traversal (the methods are called cascade_iterator) already
present but they haven't been packaged into a simple public-facing
presentation.Ideally the internal prop.merge() methods would ride on
top of this system too.   Traversals like these I most prefer to present
as iterators with some system of linking attribute-specific callables
(i.e. visitors).
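
Such a traversal can be sketched generically.  This is a hypothetical breadth-first iterator over an object graph; the per-class relationship names are supplied by hand here rather than taken from mappers:

```python
from collections import deque

def walk(obj, relations):
    # Breadth-first iteration over a graph of related objects.
    # `relations` maps a class to the attribute names to follow.
    seen = set()
    queue = deque([obj])
    while queue:
        node = queue.popleft()
        if id(node) in seen:
            continue
        seen.add(id(node))
        yield node
        for attr in relations.get(type(node), ()):
            value = getattr(node, attr, None)
            children = value if isinstance(value, (list, tuple)) else [value]
            queue.extend(c for c in children if c is not None)

class Order(object): pass
class OrderDetail(object): pass
class Product(object): pass

order = Order()
d1, d2 = OrderDetail(), OrderDetail()
d1.product = d2.product = Product()   # shared, visited only once
order.orderdetails = [d1, d2]

rels = {Order: ('orderdetails',), OrderDetail: ('product',)}
names = [type(o).__name__ for o in walk(order, rels)]
print(names)  # ['Order', 'OrderDetail', 'OrderDetail', 'Product']
```

Attribute-specific visitors would then be a dict of callables invoked per yielded node, in the spirit of the cascade_iterator methods mentioned above.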

That said, I don't think implementing your own merge()-like
load-and-traverse is very difficult - which I say only because I don't
have time to create an example for this right now.




   


On 4/29/2010 4:48 PM, Michael Bayer wrote:
 

Kent Bower wrote:

   

It is helpful to know what SQLA was designed for.

Also, you may be interested to know of our project as we

Re: [sqlalchemy] Re: making a transient copy of instances recursively at merge() time

2010-04-29 Thread Kent Bower
I'm exactly trying to avoid adding esoteric hacks - (why I posted in 
the first place), so thanks for the information.  I'll look into the 
examples you've provided; I'm hopeful that the versioned objects have 
intact relations (such that I could say old_object.orderdetails 
and get that version of the 'orderdetails' relation).


On 4/29/2010 3:59 PM, Michael Bayer wrote:

Michael Bayer wrote:
   

Kent wrote:
 

There might be a communication problem, which an example could help
clarify.

I'm constrained by a legacy database that I have no control over
changing.  When an order is changed, I need to calculate the change in
volume (or points) so I can update a table that records this
information.

Here is an example of a method you might want to run on the
original, unchanged object (Order, in this case):
=
def calc_points(self):
 return sum(l.product.points * l.qtyordered for l in
self.orderdetails if l.product.points is not None)
=

Notice that this calculation relies on several relations:
orderdetails, and orderdetails[].product

 From an MVC view point, my argument is that certainly this business
logic *should* reside in the Order class.  That is to say, an Order
should know how to calculate its own volume.

Unfortunately, this is not a matter of two or three *fields* that I
can easily extract from attributes.get_history().  This computation,
again, relies on several relations.

Any further insight or advice?
   
 

also in case this is not apparent, this is exactly the kind of thing I do
with SessionExtensions.   For example if you look at
examples/versioning/history_meta.py in the distribution you'd see exactly
this technique using SessionExtension.before_flush() to create new
version entries for objects being modified or deleted.   Here's another
example from a Pycon tutorial in 2009:
http://bitbucket.org/zzzeek/pycon2009/src/tip/chap5/sessionextension.py -
this one modifies a count attribute on a parent table in response to
changes in a child collection, using after_flush().

we've hopefully implemented enough hooks into mapper and session to allow
any possible change-based operation.  there should be no need to add esoteric
hacks into methods like merge().




   





Re: [sqlalchemy] Re: making a transient copy of instances recursively at merge() time

2010-04-29 Thread Kent Bower

It is helpful to know what SQLA was designed for.

Also, you may be interested to know of our project as we are apparently 
stretching SQLA's use case/design.  We are implementing a RESTful web 
app (using TurboGears) on an already existent legacy database.  Since 
our webservice calls (and TurboGears' in general, I believe) are 
completely state-less, I think that precludes my being able to harness 
descriptors, custom collections, and/or AttributeListeners, and to 
trigger the desired calculations before things change because I'm just 
handed the 'new' version of the object.
This is precisely why so much of my attention has been on merge() 
because it, for the most part, works that out magically.


In this case, though, it would be nice to have merge() package up some 
extra information, namely the object as it looks in the database prior 
to its magic.


Thanks again.


On 4/29/2010 3:47 PM, Michael Bayer wrote:

Kent wrote:
   

There might be a communication problem, which an example could help
clarify.

I'm constrained by a legacy database that I have no control over
changing.  When an order is changed, I need to calculate the change in
volume (or points) so I can update a table that records this
information.

Here is an example of a method you might want to run on the
original, unchanged object (Order, in this case):
=
def calc_points(self):
 return sum(l.product.points * l.qtyordered for l in
self.orderdetails if l.product.points is not None)
=

Notice that this calculation relies on several relations:
orderdetails, and orderdetails[].product

 From an MVC view point, my argument is that certainly this business
logic *should* reside in the Order class.  That is to say, an Order
should know how to calculate its own volume.

Unfortunately, this is not a matter of two or three *fields* that I
can easily extract from attributes.get_history().  This computation,
again, relies on several relations.

Any further insight or advice?
 

the general idea to this type of thing is to listen for events using
descriptors, custom collections, and/or AttributeListeners, and to trigger
the desired calculations before things change.  SQLAlchemy internals are
only intended to implement as much as is needed to persist changes to the
database.

Such as here, if you need to run calc_points() every time the contents of
the orderdetails collection changes, a simple AttributeListener
implementing append/remove can achieve that.  But also, a custom list
class which references the parent and overrides append() and __setitem__()
would do it too, i.e. it's not like any of this would be impossible if you
weren't using an ORM.   The ORM's attribute logic is designed only for the
purposes of persisting changes to a database.   That we expose some of its
details for the purpose of business logic is only intended to be to the
degree that this business logic is explicitly concerned with the details
of the object's persistence.   Things that aren't related to the
mechanisms of persistence aren't within the domain of SQLA and can be
implemented using regular Python techniques.
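
The custom-list idea can be sketched with regular Python alone.  The class and method names below are hypothetical (no ORM involved): a collection that notifies its parent whenever it is mutated:

```python
class NotifyingList(list):
    # List that calls parent.on_change() whenever it is mutated.
    def __init__(self, parent, *args):
        super(NotifyingList, self).__init__(*args)
        self.parent = parent

    def append(self, item):
        super(NotifyingList, self).append(item)
        self.parent.on_change()

    def __setitem__(self, index, item):
        super(NotifyingList, self).__setitem__(index, item)
        self.parent.on_change()

class Order(object):
    def __init__(self):
        self.orderdetails = NotifyingList(self)
        self.recalcs = 0

    def on_change(self):
        # recalculate derived values here, e.g. calc_points()
        self.recalcs += 1

o = Order()
o.orderdetails.append('detail-1')
o.orderdetails.append('detail-2')
o.orderdetails[0] = 'detail-1a'
print(o.recalcs)  # 3
```

A complete version would also override remove(), extend(), etc.; the point is only that change hooks do not require ORM internals.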







   




On Apr 29, 12:57 pm, Michael Bayermike...@zzzcomputing.com  wrote:
 

if you have a method like 'calculate_total_volume()', that is a business
method.  you should not be relying upon the internals of the ORM to
figure
that out for you, and you should have two distinct fields on your object
to represent the two values you need to make that calculation.



Kent wrote:
   

That sounds like it could be very useful for me, thank you for
pointing me there.  That could solve one of the two issues I'm facing
that I listed... what about the other?
 
   

On Apr 29, 11:02 am, Michael Bayermike...@zzzcomputing.com  wrote:
 

Kent wrote:
   

Before saving objects to the database, we have need to inspect the
changes.  I am aware of the attributes.get_history() functionality,
which is helpful to a point.
 
   

attributes.get_history() falls short of what I need in two places:

*** after a session.flush(), it is gone.  There are times we need to
flush, but the transaction has still not been committed, and more
processing is required, so we still need access to the history of
changes for this object in this transaction despite a call to flush().

*** if there are calculation methods I've written that objects have, I
need to be able to call these for the old object.  For example, say
I write a method on Order class such as
def calculate_total_volume(self):  When an order is saved I need to
compare the old volume to the new.  On the merged order, I can call
merged_obj.calculate_total_volume(), but I would need to use the
information in attributes.get_history() to *recreate* the old
Re: [sqlalchemy] Re: use_ansi oracle sysdate vs. current_date

2010-04-02 Thread Kent Bower

Thanks very much.

On 4/2/2010 5:41 PM, Michael Bayer wrote:

Kent wrote:
   

Along the same lines, is there something we can do about nvl()
(oracle) versus coalesce() (ansi)?

They aren't exactly the same, unfortunately (nvl takes exactly 2
arguments, no more), so maybe there is nothing 'official' you can do,
but can you help me work it out for my project?

I assume it is very similar to what you helped me out with
above...something like this:


from sqlalchemy.sql.expression import ColumnElement
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.types import DateTime, Date

class current_date(ColumnElement):
    type = Date()

@compiles(current_date)
def _compiler_dispatch(element, compiler, **kw):
    if compiler.dialect.name == 'oracle':
        if compiler.dialect.use_ansi:
            return "trunc(current_date)"
        else:
            return "trunc(sysdate)"
    else:
        # assume postgres
        return "current_date"



But the main difference is it takes arguments.

Is there a clever way to return the appropriate function, something
like this:

def 
 if compiler.dialect.name == 'oracle':
 return func.nvl
 else:
 # assume postgres
 return func.coalesce
 

I will add this to the docs:


from sqlalchemy import *
from sqlalchemy.ext.compiler import compiles

from sqlalchemy.sql.expression import FunctionElement


class coalesce(FunctionElement):
    name = 'coalesce'

@compiles(coalesce)
def compile(element, compiler, **kw):
    return "coalesce(%s)" % compiler.process(element.clauses)


@compiles(coalesce, 'oracle')
def compile(element, compiler, **kw):
    if len(element.clauses) > 2:
        raise TypeError("coalesce only supports two arguments on Oracle")
    return "nvl(%s)" % compiler.process(element.clauses)

print coalesce(5, 6)

from sqlalchemy.dialects import oracle
print coalesce(5, 6).compile(dialect=oracle.dialect())

   





Re: [sqlalchemy] Re: use_ansi oracle sysdate vs. current_date

2010-04-02 Thread Kent Bower

As an aside, more recent Oracle versions support ANSI coalesce(),
so it is probably more appropriate to only use nvl() for older, non-ANSI 
Oracle versions, but if this is only for illustration then that is not a 
big deal (unless you don't want people writing to you saying coalesce 
does support more than 2 arguments).


In my case, I'd rather rework the nvl() function to work on other 
engines besides Oracle as an alias to coalesce(), seems less ambiguous.
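
That aliasing can be sketched without SQLAlchemy.  The helper below is hypothetical (plain string rendering, not the compiler extension): emit nvl() only for pre-ANSI Oracle, and coalesce() everywhere else:

```python
def render_null_default(dialect, use_ansi, args):
    # Pick the null-default function per dialect: nvl() only on
    # non-ANSI Oracle (and only with two arguments), else coalesce().
    if dialect == 'oracle' and not use_ansi:
        if len(args) != 2:
            raise TypeError("nvl takes exactly two arguments")
        return "nvl(%s)" % ", ".join(args)
    return "coalesce(%s)" % ", ".join(args)

print(render_null_default('oracle', False, ['a', 'b']))      # nvl(a, b)
print(render_null_default('postgres', True, ['a', 'b', 'c']))  # coalesce(a, b, c)
```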


Thanks again.



On 4/2/2010 5:41 PM, Michael Bayer wrote:

Kent wrote:
   

Along the same lines, is there something we can do about nvl()
(oracle) versus coalesce() (ansi)?

They aren't exactly the same, unfortunately (nvl takes exactly 2
arguments, no more), so maybe there is nothing 'official' you can do,
but can you help me work it out for my project?

I assume it is very similar to what you helped me out with
above...something like this:


from sqlalchemy.sql.expression import ColumnElement
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.types import DateTime, Date

class current_date(ColumnElement):
    type = Date()

@compiles(current_date)
def _compiler_dispatch(element, compiler, **kw):
    if compiler.dialect.name == 'oracle':
        if compiler.dialect.use_ansi:
            return "trunc(current_date)"
        else:
            return "trunc(sysdate)"
    else:
        # assume postgres
        return "current_date"



But the main difference is it takes arguments.

Is there a clever way to return the appropriate function, something
like this:

def 
 if compiler.dialect.name == 'oracle':
 return func.nvl
 else:
 # assume postgres
 return func.coalesce
 

I will add this to the docs:


from sqlalchemy import *
from sqlalchemy.ext.compiler import compiles

from sqlalchemy.sql.expression import FunctionElement


class coalesce(FunctionElement):
    name = 'coalesce'

@compiles(coalesce)
def compile(element, compiler, **kw):
    return "coalesce(%s)" % compiler.process(element.clauses)


@compiles(coalesce, 'oracle')
def compile(element, compiler, **kw):
    if len(element.clauses) > 2:
        raise TypeError("coalesce only supports two arguments on Oracle")
    return "nvl(%s)" % compiler.process(element.clauses)

print coalesce(5, 6)

from sqlalchemy.dialects import oracle
print coalesce(5, 6).compile(dialect=oracle.dialect())

   





Re: [sqlalchemy] Re: use_ansi oracle sysdate vs. current_date

2010-04-02 Thread Kent Bower

By the way,
Unless a ClauseList is subscriptable in 0.6, I had problems the way 
it was.


Here is what I did:

@compiles(coalesce, 'oracle')
def compile(element, compiler, **kw):
    sql = "nvl(%s)"
    clauses = map(compiler.process, element.clauses)
    for i in xrange(len(clauses) - 2):
        sql %= "%s, nvl(%%s)" % clauses[i]
    return sql % ", ".join(clauses[-2:])
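
The nesting this produces can be checked without a database.  A standalone sketch of the same string logic (hypothetical helper, operating on already-compiled clause strings):

```python
def nested_nvl(clauses):
    # Render coalesce(a, b, c, d) as Oracle's two-argument nvl(),
    # nested: nvl(a, nvl(b, nvl(c, d))).
    sql = "nvl(%s)"
    for clause in clauses[:-2]:
        sql %= "%s, nvl(%%s)" % clause
    return sql % ", ".join(clauses[-2:])

print(nested_nvl(["a", "b", "c", "d"]))  # nvl(a, nvl(b, nvl(c, d)))
```

For exactly two arguments the loop body never runs and the plain nvl(a, b) form falls out.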



On 4/2/2010 7:08 PM, Ian Kelly wrote:
On Fri, Apr 2, 2010 at 3:41 PM, Michael Bayer mike...@zzzcomputing.com wrote:


Kent wrote:
 Along the same lines, is there something we can do about nvl()
 (oracle) versus coalesce() (ansi)?

 They aren't exactly the same, unfortunately (nvl takes exactly 2
 arguments, no more), so maybe there is nothing 'official' you
can do,
 but can you help me work it out for my project?

 I assume it is very similar to what you helped me out with
 above...something like this:

 
 from sqlalchemy.sql.expression import ColumnElement
 from sqlalchemy.ext.compiler import compiles
 from sqlalchemy.types import DateTime, Date

 class current_date(ColumnElement):
     type = Date()

 @compiles(current_date)
 def _compiler_dispatch(element, compiler, **kw):
     if compiler.dialect.name == 'oracle':
         if compiler.dialect.use_ansi:
             return "trunc(current_date)"
         else:
             return "trunc(sysdate)"
     else:
         # assume postgres
         return "current_date"
 


 But the main difference is it takes arguments.

 Is there a clever way to return the appropriate function, something
 like this:

 def 
 if compiler.dialect.name == 'oracle':
 return func.nvl
 else:
 # assume postgres
 return func.coalesce

I will add this to the docs:


from sqlalchemy import *
from sqlalchemy.ext.compiler import compiles

from sqlalchemy.sql.expression import FunctionElement


class coalesce(FunctionElement):
    name = 'coalesce'

@compiles(coalesce)
def compile(element, compiler, **kw):
    return "coalesce(%s)" % compiler.process(element.clauses)


@compiles(coalesce, 'oracle')
def compile(element, compiler, **kw):
    if len(element.clauses) > 2:
        raise TypeError("coalesce only supports two arguments on Oracle")
    return "nvl(%s)" % compiler.process(element.clauses)

print coalesce(5, 6)

from sqlalchemy.dialects import oracle
print coalesce(5, 6).compile(dialect=oracle.dialect())


Might this work as a more complete solution for Oracle?

@compiles(coalesce, 'oracle')
def compile(element, compiler, **kw):
    sql = "nvl(%s)"
    for i in xrange(len(element.clauses) - 2):
        sql %= "%s, nvl(%%s)" % compiler.process(element.clauses[i:i+1])
    return sql % compiler.process(element.clauses[-2:])

Ian



Re: [sqlalchemy] Re: passing arguments to complicated columns

2010-04-01 Thread Kent Bower

Thanks for the info.
Still wondering, is there a way to tell the ORM (a join criterion or a 
binary expression) to use this function to find the param for the bind?


On Apr 1, 2010, at 6:11 PM, Michael Bayer mike...@zzzcomputing.com  
wrote:



Kent wrote:

I believe (correct me if you know better) that an inline subquery is
quite a bit less efficient than a join.  That was my motivation for
avoiding that because there are so many rows and we've already seen
performance problems with our queries...


with a high quality planner, it should not be.   in any case, you're
looking for an aggregate value (the lowest).  So without fetching many
rows, a subquery is unavoidable, though you have a choice whether  
you'd

like it correlated within the columns query (the normal way), or as an
additional selectable in the FROM clause (avoids correlation).



In the end I'd like the object to have a saleprice attribute  
(either
column_property or, if possible, association_proxy) that is the  
single

saleprice for the given date and store.

When using the association proxy, can I somehow specify the first  
item

in the list?


no, if you truly wanted to retrieve the full list of related items and
display the price of the lowest, you'd use an ordinary method on your
class to do that.




Re: [sqlalchemy] Sequences support for CYCLE and MIN/MAX values

2010-03-26 Thread Kent Bower

Thanks for the info.

Since it is NOCYCLE in Oracle and NO CYCLE in Postgres, I would check 
the engine.dialect.name in the compile method, correct?


if eng.dialect.name == 'oracle':
    sql += " NOCYCLE"
elif eng.dialect.name == 'postgres':
    sql += " NO CYCLE"
else:
    raise Exception("RSequence is only implemented "
                    "for Oracle and PostgreSQL!")


How do I get a hold of the engine from within a Sequence object?


On 3/26/2010 2:26 PM, Michael Bayer wrote:

Kent wrote:
   

Any plans to support MINVALUE, MAXVALUE, CYCLE, NOCYCLE for sequences
(for both postgres and oracle)?

I've implemented a subclass of Sequence myself, but it isn't very
elegant, because I'm not familiar enough with the code to know which
methods to override for create() output.
 

correction:  redefine the compilation for CreateSequence:

from sqlalchemy import *
from sqlalchemy import schema
from sqlalchemy.ext.compiler import compiles


class MySeq(Sequence):
    def __init__(self, *args, **kw):
        self.cycle = kw.pop('cycle', False)
        super(MySeq, self).__init__(*args, **kw)

@compiles(schema.CreateSequence)
def compile(element, compiler, **kw):
    if isinstance(element.element, MySeq):
        return "CREATE SEQUENCE %s %s" % (element.element.name,
            element.element.cycle and "CYCLE" or "")
    else:
        return compiler.visit_create_sequence(element)





   




Re: [sqlalchemy] Session user data

2010-03-23 Thread Kent Bower

Excellent.  Thanks for the example.

Questions:
1. I assume not, but want to ask: is there anything special about the  
attribute '_cache' you've added to the session object?  I could add  
any such attribute?


2. Will this work fine with scoped sessions?  (these objects aren't 
recycled, are they?)
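
The pattern behind question 1 can be sketched without SQLAlchemy.  The Session class below is a hypothetical stand-in, and '_cache' is just an ordinary dict attached at runtime (nothing special about the name):

```python
class Session(object):
    # Stand-in for a SQLAlchemy session; pretend get() hits the DB.
    def get(self, cls, ident):
        return cls(ident)

class SystemParameters(object):
    def __init__(self, ident):
        self.ident = ident

def cached_get(session, cls, ident):
    # get() that keeps a strong reference on the session itself,
    # so repeat lookups skip the database for the session's lifetime.
    if not hasattr(session, '_cache'):
        session._cache = {}   # any attribute name works
    key = (cls, ident)
    if key not in session._cache:
        session._cache[key] = session.get(cls, ident)
    return session._cache[key]

s = Session()
a = cached_get(s, SystemParameters, 1)
b = cached_get(s, SystemParameters, 1)
print(a is b)  # True
```

Because the cache dict lives on the session object, it is torn down together with the session, which is what makes the scoped-session case work.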


On Mar 22, 2010, at 9:48 AM, Michael Bayer mike...@zzzcomputing.com  
wrote:




On Mar 22, 2010, at 9:06 AM, Kent wrote:


I have a need to pass around session-dependent application data, such
as strong references to certain objects that I don't want garbage
collected so they will serve as a cache for the duration of the
session.  For example, if the application performs a get() on a
SystemParameters table, I want a strong reference to that object  
for
the duration of the session so that subsequent get()s do not  
reference

the database.

Further, I don't wish to pass such a variable all over through-out my
session to all function calls, etc.

I was considering adding an attribute to the (scoped) session for  
this

purpose at run time (I wish to avoid modifying the thirdparty
source).  Something that will automatically be torn down when the
(scoped) session is.

Can you recommend an approach or better idea?



subclass Query and set up caching options.  There are examples in 
the 0.6 distro that use Beaker, including one that assigns the cache 
to be local to the Session, but attached is a simpler version from 
my pycon tutorial that doesn't have any dependencies.







Thanks very much again.





query_subclass.py







Re: [sqlalchemy] Refresh() for update

2010-03-19 Thread Kent Bower
By the way, must get() always accept a primary key?  For composite  
keys is there a shortcut to access its ident (in the correct  
order)?  (So I could get the ident tuple and pass it to get() with  
populate_existing())


On Mar 19, 2010, at 1:12 PM, Michael Bayer mike...@zzzcomputing.com  
wrote:




On Mar 19, 2010, at 12:34 PM, Kent wrote:


With query() I can add with_lockmode('for update'). Can I do so on a
session.refresh()? What about a get()?


a get() yes.  a refresh() no, but that's a fine idea so I've  
committed a flag for that in rab5a31b4f3bf.   A substitute is  
query.populate_existing().with_lockmode('for_update').get(object.id).





I see this as a useful approach to a two phase large query, where one
wants to avoid locking a large number of rows.  First you would issue
a non-locking query and then, once you have a candidate object you
could call refresh with a lock, so you know you have the most recent
data and that it is locked.
