Re: [sqlalchemy] SAWarnings when using history_meta.py versioning and Inheritance.

2014-10-17 Thread Mike Bayer

On 10/17/2014 02:52 PM, JPLaverdure wrote:
> Hello,
>
> It seems a number of SAWarnings are being thrown whenever I
> instantiate Versioned objects which make use of inheritance:
>
> SAWarning: Implicitly combining column container_history.changed
> with column barcoded_container_history.changed under attribute
> 'changed'. Please configure one or more attributes for these
> same-named columns explicitly.
>   prop = self._property_from_column(key, prop)
>
> Unfortunately, since these objects are instantiated "auto-magically"
> by the Versioned mixin class, I can't see a way to make these go away
> or address the issue.
> I tried looking into the history_meta.py source and cannot understand
> why this same warning is not being thrown for the "version" attribute.
>
> Does anyone have an idea?

This is a newer warning which refers to the fact that these two columns
are separate but placed under the same attribute.  In some cases people
didn't expect this to happen, hence the warning.   The recipe might be
taking this into account for "version", not sure.  It can be fixed in
the recipe, but the warning should only be raised at mapping time, not
at instantiation time.





>
> Thanks !
>
> JP



Re: [sqlalchemy] Long transactions with SQLite (and of course locks)

2015-03-31 Thread Mike Bayer


On 3/31/15 7:23 PM, Eric Smith wrote:
> Environment: SQLAlchemy 0.9.7, SQLite back-end, autoflush=True, mostly
> using scoped session, multi-platform Python 2.7
>
> I'm working on a multi-threaded app, but with fairly low concurrency
> (user actions and scheduled tasks are on different threads, for
> example). I think I have a fair understanding of SQLite's locking
> model, the implications of having autoflush=True, etc. but I still
> occasionally get the "OperationalError: database is locked".
>
> The challenge with this error is that it is raised on a thread that
> could be doing everything right while some other thread is keeping a
> transaction open for too long. Plus, unless another thread tries to
> write, the "misbehaving" thread goes mostly unnoticed.
>
> So I thought I'd try to write a "long transaction detector" that would
> start a timer when a flush happens, and cancel it if a commit or
> rollback happens. If the timer expires (after a few seconds), a
> warning is logged with the stack trace of the flush. The idea is that
> I can see information about the misbehaving thread rather than the
> innocent victim, and I'll see problems even without actual contention.
>
> While I'm still experimenting to see if this strategy will work, it
> looks like I'm getting some false positives. It seems like sometimes I
> get an after_flush event even when there are no modifications to the
> database.
A flush can proceed when objects are marked as dirty, but in some cases
it turns out the changes on those objects are not net changes; e.g. an
object attribute was set to a new value that matches the old one.
That's the case where you might see a flush that doesn't actually do
anything.
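
As a tiny illustration of that "no net change" case; "Thing" here stands
in for any mapped class and "session" for an existing Session:

    obj = session.query(Thing).first()
    obj.name = obj.name    # the object is now marked dirty...
    session.flush()        # ...but no UPDATE is emitted, since nothing
                           # actually changed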

As far as "database is locked", I'd just increase the timeout (Pysqlite
setting, defaults to 5 seconds), so that the locking acts just like any
other mutex that you'd place into your application.  
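
A sketch of raising that timeout; the value of 30 seconds is arbitrary:

    from sqlalchemy import create_engine

    # "timeout" is the pysqlite connect() argument; it controls how long
    # a connection waits on a locked database before raising
    engine = create_engine('sqlite:///app.db',
                           connect_args={'timeout': 30})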

As far as "SQLite's locking model", note that Pysqlite has extremely
specific behavior here which makes the locking model much more
forgiving.  It only locks the database when the SQL on the current
transaction turns into DML (e.g. INSERT / UPDATE / DELETE).   So it
should not be hard to write the app such that things that actually emit
DML aren't taking up that much time.   Note that when "rollback" or
"commit" happens, the transaction is over, nothing is locked.

If the app truly needs DML on multiple threads lots of the time and
you're getting a lot of contention (even with Pysqlite's forgiving
model), then SQLite isn't appropriate for that level of concurrency.



> This is based on having the engine logging turned on so I can see the
> SQL being emitted.  I'll see something like:
>
> BEGIN (implicit)
> SELECT
> SELECT
> ...
> after_flush event
> SELECT
> SELECT
> ...
> My timer expires
>
> Code is below -- the class is passed as the class_ parameter to
> sessionmaker. 
>
> Am I doing something dumb? Is this a reasonable strategy for my goals?
> Any ideas on the false positive?
>
> Thanks,
> Eric
>
>
>
> import datetime
> import logging
> import threading
> import traceback
>
> import sqlalchemy.orm
> from sqlalchemy import event
>
> logger = logging.getLogger(__name__)
>
> class LongTransactionDetector(sqlalchemy.orm.Session):
>
>     TIMEOUT = 3
>
>     def __init__(self, *args, **kwargs):
>         super(LongTransactionDetector, self).__init__(*args, **kwargs)
>         self._timer = None
>         self._stack = None
>         self._flush_time = None
>         event.listen(self, 'after_flush', self._after_flush)
>         event.listen(self, 'after_commit', self._after_commit)
>         event.listen(self, 'after_rollback', self._after_rollback)
>         event.listen(self, 'after_soft_rollback', self._after_soft_rollback)
>
>     def close(self):
>         # Calling close on a session automatically emits a ROLLBACK, but
>         # we don't seem to get an event for it, so we'll just hook it here.
>         super(LongTransactionDetector, self).close()
>         self._stop_timing()
>
>     def _after_flush(self, session, flush_context):
>         if any([session.new, session.dirty, session.deleted]):
>             self._begin_timing()
>
>     def _after_commit(self, session):
>         self._stop_timing()
>
>     def _after_rollback(self, session):
>         self._stop_timing()
>
>     def _after_soft_rollback(self, session, previous_transaction):
>         self._stop_timing()
>
>     def _begin_timing(self):
>         if self._timer is None:
>             logger.debug('set transaction timer')
>             self._flush_time = datetime.datetime.now()
>             self._stack = traceback.extract_stack()
>             self._thread = threading.current_thread()
>             self._timer = threading.Timer(self.TIMEOUT, self._on_timer)
>             self._timer.start()
>
>     def _stop_timing(self):
>         if self._timer:
>             logger.debug('clear transaction timer')
>             self._timer.cancel()
>             self._timer = None
>             self._stack = None
>
>     def _on_timer(self):
>         trace = ''.join(traceback.format_list(self._stack))
>         logger.warning('long t

Re: [sqlalchemy] Inherited class column override

2015-04-01 Thread Mike Bayer


On 4/1/15 4:55 AM, Pierre B wrote:
> Unfortunately I'm inheriting the relational model from an old
> application. I have dozens of tables using a single junction table for
> associations.
> I can not completely redesign my relational model because it needs to
> be compatible with the old application.
I was asking no such thing.  I only ask that you consider the relational
model when building *new* elements of the application.   If these models
are in fact mapping to an existing schema, I find it surprising that
your existing database schema includes *two* foreign key constraints
present on each of people4l2.id1 and people4l2.id2, constraining each
column to both left1.id/left2.id and right1.id/right2.id.   




> At this point, I think my best option is setting up table inheritance
> at the database level (database is Postgresql) and migrating records
> into children tables. Minimal code refactoring would be involved in
> the old application and it would be possible to use the association
> object pattern.
>
> On Tuesday, March 31, 2015 at 8:05:19 PM UTC+2, Michael Bayer wrote:
>
>
>
> Pierre B wrote:
>
> > I tried using the association object pattern before but can't
> get it to work because I use the same id1 and id2 columns for all
> foreign keys and I'm not able to override them in the sub-classes
> ("conflicts with existing column" error).
> > class MyClass(HasSomeAttribute, db.Model):
> >     __tablename__ = 'people4l2'
> >     id = db.Column(db.Integer, primary_key=True)
> >
> > class MySubClass1(MyClass):
> >     right1_id = db.Column('id2', db.Integer, ForeignKey('right1.id'))
> >     left1_id = db.Column('id1', db.Integer, ForeignKey('left1.id'))
> >
> > class MySubClass2(MyClass):
> >     right2_id = db.Column('id2', db.Integer, ForeignKey('right2.id'))
> >     left2_id = db.Column('id1', db.Integer, ForeignKey('left2.id'))
>
> That’s because you do not have a __tablename__ for these subclasses,
> so when you put a column on the subclass, that is physically a column
> on the ‘people4l2’ table; the names cannot be conflicting. Also, it is
> not possible to have a column named “people4l2.id2” which is in some
> cases a foreign key to “right1.id” and in other cases to “right2.id”.
>
> This probably all seems very complicated if you only think of it
> in terms of
> a Python object model. That’s why it is essential that you design
> your
> database schema in terms of database tables, and how those tables
> will work
> within a purely relational model, without Python being involved,
> first.
>
> For simple cases, the design of the relational model and the
> object model
> are so similar that this explicit step isn’t necessary, but once
> the goals
> become a little bit divergent between relational and object model,
> that’s
> when the relational model has to be developed separately, up
> front. This is
> the essence of how SQLAlchemy works, which becomes apparent the
> moment you
> get into models like these which are typically impossible on most
> other
> ORMs, since most ORMs do not consider design of the relational
> model as
> separate from the object model.
>
> The tradeoff here is basically between “more work with SQLAlchemy”
> vs.
> “not possible at all with other ORMs”  :)
>
> The relational model is the more rigid part of the system here, so
> you have to
> work that part out first; then determine how you want to map the
> Python
> object model on top of the relational model.
>
> > On Tuesday, March 31, 2015 at 4:29:52 PM UTC+2, Michael Bayer
> wrote:
> >
> >
> > Pierre B  wrote:
> >
> > > Here's my use case:
> > > right1 = Right()
> > > right.left = Left()
> > >
> > > right2 = Right2()
> > > right2.left = Left2()
> > >
> > > db.session.add(right) // automatically create the junction
> using MySubClass1 and set the type field to 1
> > > db.session.add(right2) // automatically create the junction
> using MySubClass1 and set the type field to 2
> > > db.session.commit()
> > >
> > > Basically I have a junction table associating a bunch of
> different tables in my model.
> > > I want to abstract that mechanism using relationships and
> polymorphism so that I don't have to deal with that junction table
> while coding.
> > > The relationships I created allow me to not have to deal with
> it while selecting records but I can't get it to set the type
> field while inserting records.
> >
> > OK, you are using the association object pattern. You cannot use
> “secondary”
> > in the way that you are doing here. You need

Re: [sqlalchemy] Long transactions with SQLite (and of course locks)

2015-04-01 Thread Mike Bayer


On 4/1/15 12:08 PM, Eric Smith wrote:
> Thanks Michael -- I appreciate how responsive you are to questions.
>
> Yes, it would have been more precise to talk about pysqlite's locking
> model rather than SQLite's. I'm hesitant to increase the lock timeout
> because I don't think I want any transactions that are that long, and
> would rather root out any that are.

there's a super easy way to achieve that, as far as just being able to
identify them; having the whole app recover gracefully I'm not as sure
about. Just set a timer for the length of time from before_flush()
to the after_commit() event in the Session, and if the timer is > N,
raise an exception.   That will at least identify where these things are
happening.
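
A rough sketch of that, assuming "Session" is the application's
sessionmaker or scoped session and using an arbitrary threshold; note
session.info lives for the life of the Session, so a real version would
also clear the key on rollback:

    import time

    from sqlalchemy import event

    THRESHOLD = 3  # seconds; arbitrary

    @event.listens_for(Session, 'before_flush')
    def start_timing(session, flush_context, instances):
        # record the time of the first flush in the transaction
        session.info.setdefault('first_flush_at', time.time())

    @event.listens_for(Session, 'after_commit')
    def check_timing(session):
        started = session.info.pop('first_flush_at', None)
        if started is not None and time.time() - started > THRESHOLD:
            raise Exception(
                'transaction held its flush open for %.1f seconds' %
                (time.time() - started))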


>
> To address the false positives, I'm looking at a different approach
> this morning. Since the engine log seems to match precisely what is
> going on with the database (the SQL being sent), why not put in a
> logging Filter instead of using events? Then I'd have something like:
>
> def filter(self, record):
>     first_word = record.msg.split(' ')[0]
>     if first_word in ['INSERT', 'UPDATE', 'DELETE']:
>         self._begin_timing()
>     elif first_word in ['COMMIT', 'ROLLBACK']:
>         self._stop_timing()
>     return True
>
> Of course it only works if logging is on, but this is kind of a
> debugging tool anyway.

Again, just use events.  The Core also has statement execute
(before_execute, before_cursor_execute) and transaction events (begin,
commit, rollback) that you can intercept.   
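
As a sketch of the Core-level version, assuming "engine" is an existing
Engine and begin_timing() / stop_timing() are hypothetical helpers along
the lines of the earlier class:

    from sqlalchemy import event

    @event.listens_for(engine, 'before_cursor_execute')
    def on_execute(conn, cursor, statement, parameters, context, executemany):
        # only DML statements take the database lock under pysqlite
        if statement.split(' ', 1)[0].upper() in ('INSERT', 'UPDATE', 'DELETE'):
            begin_timing()

    @event.listens_for(engine, 'commit')
    def on_commit(conn):
        stop_timing()

    @event.listens_for(engine, 'rollback')
    def on_rollback(conn):
        stop_timing()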





>
>
> On Tuesday, March 31, 2015 at 9:38:35 PM UTC-6, Michael Bayer wrote:
>
>
>
> On 3/31/15 7:23 PM, Eric Smith wrote:
>> Environment: SQLAlchemy 0.9.7, SQLite back-end, autoflush=True,
>> mostly using scoped session, multi-platform Python 2.7
>>
>> I'm working on a multi-threaded app, but with fairly low
>> concurrency (user actions and scheduled tasks are on different
>> threads, for example). I think I have a fair understanding of
>> SQLite's locking model, the implications of having
>> autoflush=True, etc. but I still occasionally get the
>> "OperationError database is locked". 
>>
>> The challenge with this error is that it is raised on a thread
>> that could be doing everything right while some other thread is
>> keeping a transaction open for too long. Plus, unless another
>> thread tries to write, the "misbehaving" thread goes mostly
>> unnoticed.
>>
>> So I thought I'd try to write a "long transaction detector" that
>> would start a timer when when a flush happens, and cancel it if a
>> commit or rollback happens. If the timer expires (after a few
>> seconds), a warning is logged with the stack trace of the flush.
>> The idea is that I can see information about the misbehaving
>> thread rather than the innocent victim, and I'll see problems
>> even without actual contention.
>>
>> While I'm still experimenting to see if this strategy will work,
>> it looks like I'm getting some false positives. It seems like
>> sometimes I get an after_flush event even when there are no
>> modifications to the database.
> A flush can proceed when objects are marked as dirty, but in some
> cases it turns out the changes on those objects are not net
> changes; e.g. an object attribute was set to a new value that
> matches the old one.  that's the case where you might see a flush
> that doesn't actually do anything.
>
> As far as database is locked, I'd just increase the timeout
> (Pysqlite setting, defaults to 5 seconds), so that the locking
> acts just like any other mutex that you'd place into your
> application.  
>
> As far as "SQLite's locking model", note that Pysqlite has
> extremely specific behavior here which makes the locking model
> much more forgiving.  It only locks the database when the SQL on
> the current transaction turns into DML (e.g. INSERT / UPDATE /
> DELETE).   So it should not be hard to write the app such that
> things that actually emit DML aren't taking up that much time.  
> Note that when "rollback" or "commit" happens, the transaction is
> over, nothing is locked.
>
> If the app truly needs DML on multiple threads lots of the time
> and you're getting a lot of contention (even with Pysqlite's
> forgiving model), then SQLite isn't appropriate for that level of
> concurrency.
>
>
>
>> This is based on having the engine logging turned on so I can see
>> the SQL being emitted.  I'll see something like:
>>
>> BEGIN (implicit)
>> SELECT
>> SELECT
>> ...
>> after_flush event
>> SELECT
>> SELECT
>> ...
>> My timer expires
>>
>> Code is below -- the class is passed as the class_ parameter to
>> sessionmaker. 
>>
>> Am I doing something dumb? Is this a reasonable strategy for my
>> goals? Any ideas on the f

Re: [sqlalchemy] Inherited class column override

2015-04-01 Thread Mike Bayer


On 4/1/15 10:28 AM, Pierre B wrote:
> Here's a simple visual of the schema

OK, so that's called a polymorphic foreign key.  SQLAlchemy doesn't have
first class support for this concept because it's relationally
incorrect, but there is an example at
http://docs.sqlalchemy.org/en/rel_0_9/_modules/examples/generic_associations/generic_fk.html
which shows one way to produce this effect.The key aspects to this
in reference to your model attempts are that there are no ForeignKey
objects; objects like Column and ForeignKey are schema-level objects,
and if you construct one, that implies it exists in the schema.  That's
why you can't make two Column objects with the same name pointing to the
same table, and this is what I'm referring to when I say that the schema
has to be considered when building out these declarations.

The techniques to make the relationship here involve using the
"primaryjoin" argument to establish how the tables join directly, as
well as the "foreign()" annotation and/or "foreign_keys" argument which
you can see used in the example; that is, how the tables join is
constructed using all ORM constructs and not schema-level constructs.
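
As a hedged sketch of that style of mapping, with invented names, a type
discriminator along the lines of the generic_fk example, and "Base"
assumed to be the declarative base:

    from sqlalchemy import Column, Integer, String, and_
    from sqlalchemy.orm import relationship, foreign

    class Junction(Base):
        __tablename__ = 'people4l2'
        id = Column(Integer, primary_key=True)
        id1 = Column(Integer)   # plain integers; no ForeignKey objects
        id2 = Column(Integer)
        type = Column(String)

    class RightA(Base):
        __tablename__ = 'right1'
        id = Column(Integer, primary_key=True)

        # how the tables join is expressed entirely at the ORM level
        junctions = relationship(
            Junction,
            primaryjoin=and_(foreign(Junction.id2) == id,
                             Junction.type == 'A'))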



>
> 
>
> There are no foreign key constraints in the database schema, id1 and
> id2 are just stored there, a type column is used to retrieve records
> e.g type equals B for an association between RightB and ReftB and
> equals A between RightA and LeftA. That is why I'm trying to set a
> default value for the type column so I don't have to deal with that
> junction table when inserting records.
>
>
>
> On Wednesday, April 1, 2015 at 4:11:03 PM UTC+2, Michael Bayer wrote:
>
>
>
> On 4/1/15 4:55 AM, Pierre B wrote:
>> Unfortunately I'm inheriting the relational model from an old
>> application. I have dozens of tables using a single junction
>> table for associations.
>> I can not completely redesign my relational model because it
>> needs to be compatible with the old application.
> I was asking no such thing.  I only ask that you consider the
> relational model when building *new* elements of the
> application.   If these models are in fact mapping to an existing
> schema, I find it surprising that your existing database schema
> includes *two* foreign key constraints present on each of
> people4l2.id1 and people4l2.id2, constraining each column to both
> left1.id/left2.id and right1.id/right2.id.
>
>
>
>
>> At this point, I think my best option is setting up table
>> inheritance at the database level (database is Postgresql) and
>> migrating records into children tables. Minimal code refactoring
>> would be involved in the old application and it would be possible
>> to use the association object pattern.
>>
>> On Tuesday, March 31, 2015 at 8:05:19 PM UTC+2, Michael Bayer wrote:
>>
>>
>>
>> Pierre B  wrote:
>>
>> > I tried using the association object pattern before but
>> can't get it to work because I use the same id1 and id2
>> columns for all foreign keys and I'm not able to override
>> them in the sub-classes ("conflicts with existing column"
>> error).
>> > class MyClass(HasSomeAttribute, db.Model):
>> >     __tablename__ = 'people4l2'
>> >     id = db.Column(db.Integer, primary_key=True)
>> >
>> > class MySubClass1(MyClass):
>> >     right1_id = db.Column('id2', db.Integer, ForeignKey('right1.id'))
>> >     left1_id = db.Column('id1', db.Integer, ForeignKey('left1.id'))
>> >
>> > class MySubClass2(MyClass):
>> >     right2_id = db.Column('id2', db.Integer, ForeignKey('right2.id'))
>> >     left2_id = db.Column('id1', db.Integer, ForeignKey('left2.id'))
>>
>> That’s because you do not have a __tablename__ for these subclasses,
>> so when you put a column on the subclass, that is physically a column
>> on the ‘people4l2’ table; the names cannot be conflicting. Also, it is
>> not possible to have a column named “people4l2.id2” which is in some
>> cases a foreign key to “right1.id” and in other cases to “right2.id”.
>>
>> This probably all seems very complicated if you only think of
>> it in terms of
>> a Python object model. That’s why it is essential that you
>> design your
>> database schema in terms of database tables, and how those
>> tables will work
>> within a purely relational model, without Python being
>> involved, first.
>>
>> For simple cases, the design of the relational model and the
>> object model
>> 

Re: [sqlalchemy] Query.delete() ignores outerjoin

2015-04-01 Thread Mike Bayer


On 4/1/15 1:21 PM, Simon Beertree wrote:
> Hi all,
>
> I have two classes, Product and Price, that have a many-to-many
> relation through a secondary table (see attached test file). I'm
> trying to delete all Prices that are not related to any Product. I
> would expect the last query in the file to produce something like
>
> DELETE prices FROM prices LEFT OUTER JOIN current_prices ON prices.id
> = current_prices.price_id WHERE current_prices.product_id IS NULL

SQLAlchemy has no support for this syntax.  See issue
https://bitbucket.org/zzzeek/sqlalchemy/issue/959/support-mysql-delete-from-join,
as well as the just-today issue
https://bitbucket.org/zzzeek/sqlalchemy/issue/3349/queryjoin-is-silently-ignored-within.


>
>
> I'm seeing this on 1.0.0b4 and on 0.9.8 with Python 2.7.3 on Debian 7,
> in sqlite as well as on MySQL. What am I doing wrong?
This syntax is a non-standard syntax specific to MySQL.  It will not
work in SQLite (see http://sqlite.org/lang_delete.html - no JOIN there).
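
A portable way to get the same effect is a DELETE with a subquery; a
sketch against the described model, where Price and the current_prices
association Table are assumed names:

    from sqlalchemy import select

    # delete Prices whose id never appears in the association table
    session.query(Price).filter(
        ~Price.id.in_(select([current_prices.c.price_id]))
    ).delete(synchronize_session=False)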



>
> Thanks, Simon
>



Re: [sqlalchemy] Re: regression? in SA 1.0.0b3?

2015-04-01 Thread Mike Bayer


On 4/1/15 4:47 PM, Jonathon Nelson wrote:
> I spoke too soon. Adding the .close() did not change things.

well adding tests for these is hard since I can't really get the right
sequence of events to occur here, but the fix is just to put a list()
around the call which I've done, so try out master and let me know that
that resolves.



>
>



Re: [sqlalchemy] PostgreSQL UNION with ORDER or LIMIT parenthesis patch

2015-04-02 Thread Mike Bayer
haha it's not resolved :)  UNIONs are really tough, as are SQLite's
quirks in this area.

On 4/1/15 11:27 PM, Charles Leifer wrote:
> Funnily enough I ran into this particular question regarding Peewee
> ORM and, googling "Postgresql UNION parentheses", I found this post. I
> ran into the same thing where PG likes the parentheses, but SQLite
> doesn't. I know this is a very old post, but out of curiosity, how did
> you end up resolving this?
>
> On Thursday, July 5, 2012 at 4:35:51 PM UTC-5, Michael Bayer wrote:
>
>
> On Jul 5, 2012, at 4:57 PM, Sergey Shepelev wrote:
>
>> Problem: sa.union_all(*qs).order_by('y')  will generate invalid
>> SQL like this:
>>
>> SELECT ... ORDER BY x LIMIT 10 UNION ALL SELECT ... ORDER BY x
>> LIMIT 10 ORDER BY y
>>
>> Order in inner queries could be required for LIMIT or DISTINCT ON
>> (field).
>>
>> Solution: sqlalchemy should put all inner queries that contain
>> ORDER BY or LIMIT clauses into parenthesis, like this:
>>
>> SELECT ... UNION ALL (SELECT ... ORDER BY x LIMIT 10) ORDER BY y
>>
>>
>> Test:
>> def test_with_order(self):
>>     q = sa.select([2]).order_by('1')
>>     union = sa.union(q, q).limit(3)
>>     db.postgresql.execute(union)
>>     self.assertEqual(str(union),
>>         "(SELECT 2 ORDER BY 1) UNION (SELECT 2 ORDER BY 1) LIMIT 3")
>>
>> Monkey patch at https://gist.github.com/3056292
>> 
>
> This is tricky because the SQL expression language doesn't like to
> make guesses like that, that is, decides about the structure of
> the query based on things inside of select() constructs.  The ORM
> does that kind of thing more acceptably, but it isn't covering
> this case either at the moment.
>
> The above SQL is not accepted on some backends - that is, it fails
> on SQLite at least.   Odd structures like that tend to not work on
> several of the more quirky backends, common culprits are Oracle,
> Firebird, SQL Server, older MySQL versions.  
>
> So to make this "magic" here is to make the select() construct
> behave differently on the PG backend, or maybe just not on the
>  SQLite backend, but this is tricky to decide without knowing the
> true compatibility here and what's going to break for whom if we
> make a change like that - the SQL expression language tends to
> want you to manually apply structural conventions right now (such
> as, if you say select().select(), there's no "alias" generated
> automatically, things like that).
>
> I would have expected alias() to work here, but it doesn't.  so
> it's on deck to work on why that is at some point.
>
> The point is right now you can force the parens using
> self_group(), and I'd be more comfortable if you stuck with that
> for this particular case right now, though I may change my mind
> about handling this case in the future:
>
> q = select([2]).limit(5).order_by('1').self_group()
> union = union(q, q).limit(3)
> print union
>
> i put up http://www.sqlalchemy.org/trac/ticket/2528, where the goal is to
> test the compatibility of this kind of query across the most
> common backends and see what the best approach might be.   I'd
> probably go as far as alias() rending parenthesis, though (that
> is, self_group() will likely remain exactly what you want here).
>
>
>
>
>
>
>
>



Re: [sqlalchemy] Nested transaction rollback does not undo changes to instances

2015-04-02 Thread Mike Bayer


On 4/2/15 10:01 AM, Chris Wilson wrote:
> Dear SQLAlchemy developers,
>
> I think I've found a problem with SQLAlchemy not rolling back changes
> to instances that are committed in an inner nested transaction, when
> the outer nested transaction rolls back.
>
> The manual says:
>
>     When begin_nested() is called, a flush() is unconditionally
>     issued (regardless of the autoflush setting). This is so that
>     when a rollback() occurs, the full state of the session is
>     expired, thus causing all subsequent attribute/instance access
>     to reference the full state of the Session right before
>     begin_nested() was called.
>
>
> So I think that if we make a change to an instance, inside a
> transaction, and then rollback, we should see the old values in any
> instances, as if the transaction never happened. And indeed this
> appears to work for simple cases:
>
> from sqlalchemy import Column, Integer, Boolean
> from sqlalchemy import create_engine
> from sqlalchemy.ext.declarative import declarative_base
> from sqlalchemy.orm import sessionmaker
>
> Base = declarative_base()
> class Test(Base):
>     __tablename__ = 'test'
>     id = Column(Integer, primary_key=True)
>     value = Column(Boolean)
> 
> engine = create_engine('sqlite:///sqlalchemy_example.db')
>
> Base.metadata.create_all(engine)
>
> DBSession = sessionmaker(bind=engine)
> session = DBSession(autocommit=True)
>
> session.begin()
> test_data = Test(value=False)
> session.add(test_data)
> session.commit()
> print test_data.value, "in the beginning"
>
> try:
>     with session.begin():
>         # with session.begin_nested():
>         test_data.value = True
>         print test_data.value, "before rollback"
>         raise ValueError("force a rollback")
> except ValueError as e:
>     print "ValueError caught: {}".format(e)
> print test_data.value, "after rollback"
>
>
> Which, as expected, resets value from True to False during rollback():
>
> False in the beginning
>
> True before rollback
>
> ValueError caught: force a rollback
>
> False after rollback
>
>
> Note: this example doesn't work with my SQLite install because it
> doesn't seem to support savepoints properly, even though it's
> documented to do so. I used postgresql to actually run these tests,
> and created the table as follows:
>
> CREATE TABLE test (id SERIAL NOT NULL PRIMARY KEY, value BOOLEAN);
> GRANT ALL ON TABLE test TO standard;
> GRANT ALL ON TABLE test_id_seq TO standard;
>
>
> It even works if you rollback a nested transaction:
>
> try:
>     with session.begin():
>         try:
>             with session.begin_nested():
>                 test_data.value = True
>                 print test_data.value, "after nested commit, before nested rollback"
>                 raise ValueError("force a rollback")
>         except ValueError as e:
>             print "ValueError caught: {}".format(e)
>
>         print test_data.value, "after nested rollback"
>         assert not test_data.value, "should have been rolled back"
>
>         assert session.transaction is not None
>         raise ValueError("force a rollback")
> except ValueError as e:
>     print "ValueError caught: {}".format(e)
> print test_data.value, "after outer rollback"
>
>
> However, it does NOT work if you roll back a nested transaction that
> has a committed nested transaction inside it (differences highlighted):
>
> try:
>     with session.begin():
>         try:
>             with session.begin_nested():
>                 with session.begin_nested():
>                     test_data.value = True
>                 print test_data.value, "after nested commit, before nested rollback"
>                 raise ValueError("force a rollback")
>         except ValueError as e:
>             print "ValueError caught: {}".format(e)
>
>         print test_data.value, "after nested rollback"
>         # assert not test_data.value, "should have been rolled back;
>         # this assertion fails if enabled"
>
>         assert session.transaction is not None
>         raise ValueError("force a rollback")
> except ValueError as e:

Re: [sqlalchemy] Nested transaction rollback does not undo changes to instances

2015-04-02 Thread Mike Bayer


On 4/2/15 11:54 AM, Mike Bayer wrote:
>
> it's a bug, and I've captured the origin, diagnosis and probable
> solution here:
> https://bitbucket.org/zzzeek/sqlalchemy/issue/3352/nested-begin_nested-blocks-dont-track

this issue is fixed for 0.9.10 and 1.0.0b5, you can test now using
either latest master or the rel_0_9 branch.




Re: [sqlalchemy] 0.9.7: Intermittently find a dead weakref to a listens_for function while checking _stored_in_collection of a registry

2015-04-02 Thread Mike Bayer


On 4/2/15 4:28 PM, Evan James wrote:
> Hi folks,
>
> While running my test suite, I hit an issue with the following stack
> trace:
>
> ERROR at setup of test_track_before_delete
>
> request = <...>, engine = Engine(sqlite:///test.db)
>
> @pytest.fixture
> def database_session(request, engine):
>     connection = engine.connect()
>     trans = connection.begin()
>
>     meta.Session = scoped_session(
>         sessionmaker(autocommit=False, autoflush=True, bind=connection))
>     register_session_listeners(meta.Session)
>
>     meta.Session.begin_nested()
>
> >   @event.listens_for(meta.Session, "after_transaction_end")
>     def restart_savepoint(session, transaction):
>         if transaction.nested and not transaction._parent.nested:
>
> automated_tests/conftest.py:52:
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
>
> ../../../.virtualenvs/ERP/lib/python2.7/site-packages/sqlalchemy/event/api.py:94: in decorate
>     listen(target, identifier, fn, *args, **kw)
> ../../../.virtualenvs/ERP/lib/python2.7/site-packages/sqlalchemy/event/api.py:63: in listen
>     _event_key(target, identifier, fn).listen(*args, **kw)
> ../../../.virtualenvs/ERP/lib/python2.7/site-packages/sqlalchemy/event/registry.py:187: in listen
>     self.dispatch_target.dispatch._listen(self, *args, **kw)
> ../../../.virtualenvs/ERP/lib/python2.7/site-packages/sqlalchemy/event/base.py:184: in _listen
>     event_key.base_listen(propagate=propagate, insert=insert, named=named)
> ../../../.virtualenvs/ERP/lib/python2.7/site-packages/sqlalchemy/event/registry.py:226: in base_listen
>     for_modify(target.dispatch).append(self, propagate)
> ../../../.virtualenvs/ERP/lib/python2.7/site-packages/sqlalchemy/event/attr.py:118: in append
>     registry._stored_in_collection(event_key, self)
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
>
> event_key = <... object at 0x111c6b3d0>, owner = <... object at 0x10eac8590>
>
> def _stored_in_collection(event_key, owner):
>     key = event_key._key
>
>     dispatch_reg = _key_to_collection[key]
>
>     owner_ref = owner.ref
>     listen_ref = weakref.ref(event_key._listen_fn)
>
>     if owner_ref in dispatch_reg:
> >       assert dispatch_reg[owner_ref] == listen_ref
> E       assert <...> == <weakref at 0x111cc2af8; to 'function' at 0x111c7f668 (restart_savepoint)>
>
> ../../../.virtualenvs/ERP/lib/python2.7/site-packages/sqlalchemy/event/registry.py:74: AssertionError
>
>
> While running the @event.listens_for decorator in the setup of a test
> fixture, SQLAlchemy throws an assertion because the ref it has in the
> dispatch registry isn't identical to the ref to the listener function.
>  We're on SQLAlchemy 0.9.7, pytest 2.6.4.  Note that the test suite is
> setting up nested transactions on SQLite; we are using the recommended
> workaround from SQLAlchemy documentation to take control of
> transactional integration from pysqlite.
>
>
> Since there's an assertion in the code in _stored_in_collection(), I
> assume that there's some expectation that the refs might not match; am
> I doing something wrong in my setup which this assertion is meant to
> catch?  I've only seen this error once (while tracking down a
> different nondeterministic error in my own app's code), so I can't
> provide much more information than this, but the portion of test
> fixture code seen above in the stack trace is basically the entire
> reproduction case.  This fixture runs before every test in my suite,
> but I've only seen an error once across a large number of runs, so the
> error is *very* intermittent.  Because of that, I'm not worried too
> much about the error itself, but I thought I should post it here in
> case it's a symptom of something I should be worrying about.
>
this was issue https://bitbucket.org/zzzeek/sqlalchemy/issue/3199 and
has been fixed as of 0.9.8.





>
> Thanks,
>
> Evan James
>


[sqlalchemy] SQLAlchemy 1.0.0b5 Released

2015-04-03 Thread Mike Bayer
SQLAlchemy release 1.0.0b5 is now available.

This release is yet another set of fixes for issues reported
by beta testers.  At this point, 1.0.0 is ready to go and should
be released very soon.

In preparation for 1.0.0, production installations that haven't yet been
tested in the 1.0 series should be making sure that their requirements
files are capped at 0.9.99, to avoid surprise upgrades.

Changelog for 1.0.0b5 is at:

http://www.sqlalchemy.org/changelog/CHANGES_1_0_0b5


SQLAlchemy 1.0.0b5 is available on the Download Page:

http://www.sqlalchemy.org/download.html





Re: [sqlalchemy] generating safe sqlalchemy.sql.text elements?

2015-04-03 Thread Mike Bayer


On 4/3/15 3:10 PM, Jonathan Vanasco wrote:
> I have an include file that generates a handful of timestamp clauses:
>
> def sql_now():
>     return sqlalchemy.sql.text("(CURRENT_TIMESTAMP AT TIME ZONE 'UTC')")
>
> def sql_now_minus_10_minutes():
>     return sqlalchemy.sql.text(
>         "(CURRENT_TIMESTAMP AT TIME ZONE 'UTC' - INTERVAL '10 MINUTES')")
>
> One of them needs to be driven by a configuration value:
>
> def sql_now_minus_interval(interval=lib.constants.RATELIMIT_TIMEOUT_A):
>     return sqlalchemy.sql.text(
>         "(CURRENT_TIMESTAMP AT TIME ZONE 'UTC' - INTERVAL '%s')" % interval)
>
>
> Is there anything I can do to protect myself from accidental sql
> injection ?  This is all first-party code, so I'm not worried about a
> "little bobby tables" scenario, but am concerned with bad text getting
> named in the constant and breaking a query.

is lib.constants.RATELIMIT_TIMEOUT_A a source of untrusted user input? 
If not, then it's OK.

Though in this specific case I would think you could use a bound
parameter just as well (text() supports these via :param and .bindparams()).
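
For instance, a sketch of the bound-parameter version; CAST is used here
because PostgreSQL won't accept a parameter directly after the INTERVAL
keyword:

    import sqlalchemy

    def sql_now_minus_interval(interval):
        return sqlalchemy.sql.text(
            "(CURRENT_TIMESTAMP AT TIME ZONE 'UTC' - "
            "CAST(:ival AS INTERVAL))").bindparams(ival=interval)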




Re: [sqlalchemy] Redshift - approximate count

2015-04-03 Thread Mike Bayer
I think you can get this with select.prefix_with():

http://docs.sqlalchemy.org/en/rel_0_9/core/selectable.html?highlight=prefix#sqlalchemy.sql.expression.Select.prefix_with
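
A sketch, where my_table is assumed to be an existing Table:

    from sqlalchemy import func, select

    stmt = select([func.count()]).select_from(my_table).prefix_with('APPROXIMATE')
    # renders: SELECT APPROXIMATE count(*) AS count_1 FROM my_table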



On 4/3/15 8:22 PM, Kristi Tsukida wrote:
> Redshift supports a modifier "approximate" as in
>
> SELECT APPROXIMATE COUNT(*)
> FROM my_table;
>
> Is there a way to get this with func.count()?
>
> Thanks
> -Kristi
>
> http://docs.aws.amazon.com/redshift/latest/dg/r_COUNT.html



Re: [sqlalchemy] create_engine() creates sqlite database - how to disable / prevent this?

2015-04-04 Thread Mike Bayer


On 4/4/15 4:38 AM, Duke Dougal wrote:
> Is there a way to have create_engine NOT create a sqlite database in
> the event that one does not exist?
I think you need to reword this question.   Right now it says, "Is there
a way to call a function X() and have it NOT do X?"   Doesn't make much
sense.



Re: [sqlalchemy] error in query many to many

2015-04-04 Thread Mike Bayer


On 4/4/15 11:27 AM, biohege...@gmail.com wrote:
> Hi,
>
> I get a strange error when querying a many-to-many relationship
> (Protein - Omim / protein - omim).
> sqlalchemy 0.9.9
>
> o=Omim.query.filter(Omim.acc=="157140").one() # this is OK
> o.proteins # this gives an error
>
> 2015-04-04 17:16:09,984 INFO sqlalchemy.engine.base.Engine SELECT
> protein.id AS protein_id, protein.acc AS protein_acc,
> protein.entry_name AS protein_entry_name, protein.gene_name AS
> protein_gene_name, protein.is_human AS protein_is_human,
> protein.is_membrane AS protein_is_membrane, protein.is_reviewed AS
> protein_is_reviewed, protein.sprot_str AS protein_sprot_str
> FROM protein, omim_assoc
> WHERE %s = omim_assoc.omim_id AND protein.id = omim_assoc.protein_id
> 2015-04-04 17:16:09,985 INFO sqlalchemy.engine.base.Engine (16083L,)
> Traceback (most recent call last):
> [skipped]
>   File
> "/usr/local/lib/python2.7/dist-packages/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/orm/strategies.py",
> line 169, in fetch_col
> dict_[key] = row[col]
> ValueError: int_to_boolean only accepts None, 0 or 1
>
> 
> However, when querying this in mysql client:
> SELECT protein.id AS protein_id FROM protein, omim_assoc WHERE 16083 =
> omim_assoc.omim_id AND protein.id = omim_assoc.protein_id;
> No problem, the protein is found.
>
> Moreover, if I query Omim with a different ac, it works.
>
> 
> I have no idea how to debug...

you are using the Boolean() type in your table metadata for the column
called "acc" and you need to change it to be the type "Integer()";
additionally, in your query, you need to refer to this value as an
integer, 'Omim.acc == 157140', without quotes around the numeric value.



>
> Thanks for your help in advance,
> Tamas



Re: [sqlalchemy] SQLAlchemy delay reconnecting

2015-04-04 Thread Mike Bayer


On 4/4/15 11:42 AM, Henrique Fleury wrote:
> When the database restarts, my services flood it with connection
> attempts. Is there a create_engine() parameter in SQLAlchemy that
> introduces a delay between one connection attempt and the next? I
> need it to wait X amount of time before making a new attempt when a
> connection fails.
I recently had to do this for some benchmarking.   There is no built-in
facility for this as the event system doesn't provide this hook as of
yet (maybe it should); however you can decorate the "creator" after the
fact to produce this effect:

import time

from sqlalchemy import create_engine

engine = create_engine("...")

# wrap the pool's existing creator so that each new DBAPI connection
# is preceded by a short delay
_creator = engine.pool._creator

def creator():
    time.sleep(.5)
    return _creator()

engine.pool._creator = creator








Re: [sqlalchemy] error in query many to many

2015-04-04 Thread Mike Bayer


On 4/4/15 12:14 PM, Mike Bayer wrote:
>
>
> On 4/4/15 11:27 AM, biohege...@gmail.com wrote:
>> Hi,
>>
>> I get a strange error when querying a many-to-many relationship
>> (Protein - Omim / protein - omim).
>> sqlalchemy 0.9.9
>>
>> o=Omim.query.filter(Omim.acc=="157140").one() # this is OK
>> o.proteins # this gives an error
>>
>> 2015-04-04 17:16:09,984 INFO sqlalchemy.engine.base.Engine SELECT
>> protein.id AS protein_id, protein.acc AS protein_acc,
>> protein.entry_name AS protein_entry_name, protein.gene_name AS
>> protein_gene_name, protein.is_human AS protein_is_human,
>> protein.is_membrane AS protein_is_membrane, protein.is_reviewed AS
>> protein_is_reviewed, protein.sprot_str AS protein_sprot_str
>> FROM protein, omim_assoc
>> WHERE %s = omim_assoc.omim_id AND protein.id = omim_assoc.protein_id
>> 2015-04-04 17:16:09,985 INFO sqlalchemy.engine.base.Engine (16083L,)
>> Traceback (most recent call last):
>> [skipped]
>>   File
>> "/usr/local/lib/python2.7/dist-packages/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/orm/strategies.py",
>> line 169, in fetch_col
>> dict_[key] = row[col]
>> ValueError: int_to_boolean only accepts None, 0 or 1
>>
>> 
>> However, when querying this in mysql client:
>> SELECT protein.id AS protein_id FROM protein, omim_assoc WHERE 16083
>> = omim_assoc.omim_id AND protein.id = omim_assoc.protein_id;
>> No problem, the protein is found.
>>
>> Moreover, if I query Omim with a different ac, it works.
>>
>> 
>> I have no idea how to debug...
>
> you are using the Boolean() type in your table metadata for the column
> called "acc" and you need to change it to be the type "Integer()";
> additionally, in your query, you need to refer to this value as an
> integer, 'Omin.acc==157140', without quotes around the numeric value.

correction: you are using the Boolean() type somewhere in your model on
the column that omim_assoc.omim_id refers to; that column needs to be
Integer().
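
As a sketch of the kind of change meant here, with Base and the actual
column names assumed (your model may differ):

    class Omim(Base):
        __tablename__ = 'omim'
        # the column that omim_assoc.omim_id points to must be Integer,
        # not Boolean:
        id = Column(Integer, primary_key=True)
        acc = Column(String)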


>
>
>
>>
>> Thanks for your help in advance,
>> Tamas



Re: [sqlalchemy] create_engine() creates sqlite database - how to disable / prevent this?

2015-04-04 Thread Mike Bayer


On 4/4/15 4:10 PM, Duke Dougal wrote:
> OK I'll try again, apologies for not being clear.
>
> I want to connect to a Sqlite database if it exists, but if it does
> not exist, I want to execute my custom database creation code., which
> looks like this:
then use Python to check for the file:

import os
if not os.path.exists("myfile.db"):
    # do thing
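
Put together with your creation function, a sketch of the full flow
might be:

    import os

    from sqlalchemy import create_engine

    db_path = '/home/ubuntu/archives.db'
    if not os.path.exists(db_path):
        setupArchiveTables(None)  # your custom creation code

    engine = create_engine('sqlite:///' + db_path)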





>
> def setupArchiveTables(db):
>     import sqlite3
>     conn = sqlite3.connect('/home/ubuntu/archives.db')
>     with open('serve_spec_archives/schema.sql') as f:
>         conn.executescript(f.read())
>     conn.commit()
>
> And then after the database has been created, try again to connect to
> it so I can proceed with using it via SQLAlchemy.
>
> Is that possible?
>
> thanks
>
>
>



Re: [sqlalchemy] How to query postgresql JSONB columns?

2015-04-04 Thread Mike Bayer


On 4/4/15 7:22 PM, Daniel Kerkow wrote:
> Hi,
> I am new to SQLAlchemy, doing my first steps with Flask.
> I have the following model using JSONB data type in PostgreSQL.
>
> The JSON data looks like:
>
> {'key1': 'value1', 'key2': 'value2'}
>
> The Docs are relatively sparse regarding this topic.

the cases you ask for are all described at :

http://docs.sqlalchemy.org/en/rel_0_9/dialects/postgresql.html#sqlalchemy.dialects.postgresql.JSON

> How can I query the properties column for containing a specific key or
> key=value combination?

select([table]).where(table.c.col['key'] == 'somevalue')
> How can I update these?

table.update().values({table.c.col['key'] = 'value'})

> Can I get a list of unique keys in all records?

i dunno.  What's the Postgresql query you'd like to emit?



>
> Any help is welcome!
>
> Daniel
>
> class Record(Base):
>
>     """represents single stratigraphic units"""
>
>     # ID of corresponding site:
>     site_id = db.Column(db.Integer, db.ForeignKey('public.sites.id'))
>
>     # depth values:
>     depth = db.Column(db.Numeric, nullable=True)
>     upper_boundary = db.Column(db.Numeric, nullable=True)
>     lower_boundary = db.Column(db.Numeric, nullable=True)
>
>     # stratigraphic properties, represented as key/value store
>     properties = db.Column(JSONB)
>



Re: [sqlalchemy] How to query postgresql JSONB columns?

2015-04-04 Thread Mike Bayer


On 4/4/15 7:29 PM, Mike Bayer wrote:
>
>
> On 4/4/15 7:22 PM, Daniel Kerkow wrote:
>> Hi,
>> I am new to SQLAlchemy, doing my first steps with Flask.
>> I have the following model using JSONB data type in PostgreSQL.
>>
>> The JSON data looks like:
>>
>> {'key1': 'value1', 'key2': 'value2'}
>>
>> The Docs are relatively sparse regarding this topic.
>
> the cases you ask for are all described at :
>
> http://docs.sqlalchemy.org/en/rel_0_9/dialects/postgresql.html#sqlalchemy.dialects.postgresql.JSON
>
>> How can I query the properties column for containing a specific key
>> or key=value combination?
>
> select([table]).where(table.c.col['key'] == 'somevalue')
>> How can I update these?
>
> table.update().values({table.c.col['key'] = 'value'})

sorry, like this:

table.update().values({table.c.col['key']: 'value'})


from sqlalchemy import Table, Column, MetaData

from sqlalchemy.dialects import postgresql

m = MetaData()
t = Table('t', m, Column('x', postgresql.JSONB))

print t.update().values({t.c.x['data']: 'foo'}).compile(
    dialect=postgresql.dialect())

output:

UPDATE t SET x -> %(x_1)s=%(param_1)s




>
>> Can I get a list of unique keys in all records?
>
> i dunno.  What's the Postgresql query you'd like to emit?
>
>
>
>>
>> Any help is welcome!
>>
>> Daniel
>>
>> class Record(Base):
>>
>>     """represents single stratigraphic units"""
>>
>>     # ID of corresponding site:
>>     site_id = db.Column(db.Integer, db.ForeignKey('public.sites.id'))
>>
>>     # depth values:
>>     depth = db.Column(db.Numeric, nullable=True)
>>     upper_boundary = db.Column(db.Numeric, nullable=True)
>>     lower_boundary = db.Column(db.Numeric, nullable=True)
>>
>>     # stratigraphic properties, represented as key/value store
>>     properties = db.Column(JSONB)
>>
>



Re: [sqlalchemy] How to query postgresql JSONB columns?

2015-04-04 Thread Mike Bayer


On 4/4/15 7:47 PM, Daniel Kerkow wrote:
>
> 2015-04-05 1:29 GMT+02:00 Mike Bayer <mike...@zzzcomputing.com>:
>
>
>
> On 4/4/15 7:22 PM, Daniel Kerkow wrote:
>> Hi,
>> I am new to SQLAlchemy, doing my first steps with Flask.
>> I have the following model using JSONB data type in PostgreSQL.
>>
>> The JSON data looks like
>>
>> |
>> {'key1':'value1','key2':'value2'}
>> |
>>
>> The Docs are relatively sparse regarding this topic.
>
> the cases you ask for are all described at:
>
> 
> http://docs.sqlalchemy.org/en/rel_0_9/dialects/postgresql.html#sqlalchemy.dialects.postgresql.JSON
>
>> How can I query the properties column for containing a specific
>> key or key=value combination?
>
> select([table]).where(table.c.col['key'] == 'somevalue')
>
>
> Seems I am doing something wrong here:
>
> select(Record).where(Record.c.properties['key']  == 'value')
>
> TypeError: '_BoundDeclarativeMeta' object is not iterable
>
> Did I miss to import something? Any preparations?


First off, if you are using the select() construct, it accepts a list of
things to SELECT from, so select([Record]).

Secondly, "Record" looks a whole lot like a declarative ORM class due to
the "BoundDeclarativeMeta" line; there's a difference between using
table metadata and an ORM class, in this case that the ORM class doesn't
use the .c. attribute.   If you are using ORM classes you'd likely want
to use session.query() which is generally more appropriate.
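
For example, against the declarative class quoted below, a query along
these lines should work (a sketch, assuming a configured Session named
"session"; the .astext accessor renders Postgresql's ->> operator):

records = session.query(Record).\
    filter(Record.properties['color'].astext == 'black').\
    all()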



 



>  
>
>
>> How can I update these?
>
> update([table]).values({table.c.col['key'] = 'value'})
>
>> Can I get a list of unique keys in all records?
>
> i dunno.  What's the Postgresql query you'd like to emit?
>
>
> In raw psql, the following works:
> select *
> from records
> where properties->>'color' = 'black';
>
>  
>
>
>
>>
>> Any help is welcome!
>>
>> Daniel
>>
>> |
>> class Record(Base):
>>
>>     """represents single stratigraphic units"""
>>
>>     # ID of corresponding site:
>>     site_id = db.Column(db.Integer, db.ForeignKey('public.sites.id'))
>>
>>     # depth values:
>>     depth           = db.Column(db.Numeric, nullable=True)
>>     upper_boundary  = db.Column(db.Numeric, nullable=True)
>>     lower_boundary  = db.Column(db.Numeric, nullable=True)
>>
>>     # stratigraphic properties, represented as key/value store
>>     properties      = db.Column(JSONB)
>> |
>>



Re: [sqlalchemy] Deleting object attributes to avoid merge comparison no longer works in 1.0

2015-04-04 Thread Mike Bayer


On 4/5/15 12:32 AM, Russ wrote:
> I have some buried-in-the-codebase-since-0.7 upsert code that uses
> `merge`.  In order to avoid certain attributes from being used in the
> merge comparison, the attributes were deleted using delattr..
>
> The code looks something like this:
>
> db_obj = sess.query(obj_type).filter_by(**filters).one()
> pk = obj.__mapper__.primary_key[0].name
> setattr(obj, pk, getattr(db_obj, pk))  #convince merge they're comparable
> for attr in merge_ignores:
> #See http://goo.gl/oBbpp for why we delete the attribute to avoid
> comparison
> delattr(obj, attr)
> obj = sess.merge(obj)
>
> I've recently been trying 1.0.0b4, and now 1.0.0b5 and this code no
> longer works.
>
> Specifically, the code throws a KeyError on the delattr line. Here's
> the clipped traceback:
>
> File
> "/home/russ/code/bitbucket/sqlalchemy/lib/sqlalchemy/orm/attributes.py",
> line 227, in __delete__
>   self.impl.delete(instance_state(instance), instance_dict(instance))
> File
> "/home/russ/code/bitbucket/sqlalchemy/lib/sqlalchemy/orm/attributes.py",
> line 738, in delete
>   del dict_[self.key]
>
> KeyError: 'date_added_log'
>
> The attribute *definitely* exists at the time (albeit with a value of
> `None`, but it's there).  I doubt it matters, but obj is an instance
> of a declarative_base'd class.
>
> I see that __slots__ has been introduced to
> `ScalarObjectAttributeImpl`, which is where this is tanking.  __slots__
> seems like a likely candidate here, but I have to dig further.
>
> For now I've reverted back to 0.9.9 so the code works again.
>
> Is there some less sketchy way (that works in 1.0!) to force merge to
> not look at certain fields?
I absolutely need a succinct test case here.   __slots__ does not refer
to your object's state, only the state of internal structures with fixed
attributes.    Looking at it briefly, it seems as simple as there's an
attribute you expected to be loaded that didn't actually load.    For
scalar attrs, only if perhaps the attr were marked as "deferred" might
it not be loaded in quite as many cases as it was in 0.9, as a deferred
attr won't load unless explicitly undeferred now.
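
In the meantime, a defensive workaround is to guard the delattr so it
only fires for attributes that are actually present in the object's
current state (a sketch, not a fix for the underlying issue):

from sqlalchemy import inspect

for attr in merge_ignores:
    # skip attributes that aren't actually loaded into this object's dict
    if attr in inspect(obj).dict:
        delattr(obj, attr)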




>
> Russ
>



Re: [sqlalchemy] Re: error in query many to many

2015-04-04 Thread Mike Bayer


On 4/5/15 12:54 AM, biohege...@gmail.com wrote:
> Thanks for your input! It helped me to find a bug in my code.
> I inserted 0 for false, 1 and 2 for true.
>
> However, inserting a "bad" value for a boolean using SQLAlchemy should
> also result in an exception, not just selecting a boolean with a
> "bad" value.

The Boolean type does this by using a CHECK constraint.     Your
database was probably built by some means other than SQLAlchemy's
schema system, so the CHECK constraint is missing.   If you'd like a
Python-side check you can use a TypeDecorator around Boolean to verify
the values coming in.
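
A minimal sketch of such a TypeDecorator (the class name StrictBoolean
is made up here):

from sqlalchemy.types import Boolean, TypeDecorator

class StrictBoolean(TypeDecorator):
    # rejects anything that isn't a plain boolean before it
    # reaches the DBAPI driver
    impl = Boolean

    def process_bind_param(self, value, dialect):
        if value not in (None, True, False):
            raise ValueError("not a boolean value: %r" % (value,))
        return value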





>
> Thanks again,
> Tamas
>
> On Saturday, April 4, 2015 at 5:27:28 PM UTC+2, biohe...@gmail.com wrote:
>
> Hi,
>
> I get a strange error when querying a many-to-many relationship
> (Protein - Omim / protein - omim).
> sqlalchemy 0.9.9
>
> o=Omim.query.filter(Omim.acc=="157140").one() # this is OK
> o.proteins # this gives an error
>
> 2015-04-04 17:16:09,984 INFO sqlalchemy.engine.base.Engine SELECT
> protein.id  AS protein_id, protein.acc AS
> protein_acc, protein.entry_name AS protein_entry_name,
> protein.gene_name AS protein_gene_name, protein.is_human AS
> protein_is_human, protein.is_membrane AS protein_is_membrane,
> protein.is_reviewed AS protein_is_reviewed, protein.sprot_str AS
> protein_sprot_str
> FROM protein, omim_assoc
> WHERE %s = omim_assoc.omim_id AND protein.id  =
> omim_assoc.protein_id
> 2015-04-04 17:16:09,985 INFO sqlalchemy.engine.base.Engine (16083L,)
> Traceback (most recent call last):
> [skipped]
>   File
> 
> "/usr/local/lib/python2.7/dist-packages/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/orm/strategies.py",
> line 169, in fetch_col
> dict_[key] = row[col]
> ValueError: int_to_boolean only accepts None, 0 or 1
>
> 
> However, when querying this in mysql client:
> SELECT protein.id  AS protein_id FROM protein,
> omim_assoc WHERE 16083 = omim_assoc.omim_id AND protein.id
>  = omim_assoc.protein_id;
> No problem, the protein is found.
>
> Moreover, if I query Omim with a different ac, it works.
>
> 
> I have no idea how to debug...
>
> Thanks for your help in advance,
> Tamas
>



Re: [sqlalchemy] Eager loading of filtered children

2015-04-05 Thread Mike Bayer


On 4/5/15 8:46 AM, Yegor Roganov wrote:
> Given the User-Addresses schema as in the docs, I need to select all
> users along with only those email addresses which end with "@gmail.com".
> The question is how do I group these filtered emails by user?
> Apparently I can use `contains_eager` to write something like this:
>
>
> session.query(User).outerjoin(User.addresses).filter(Address.email_address.like('%@gmail.com'))\
> .options(contains_eager(User.addresses))
> But having the filtered email addresses on the `addresses` attribute
> is rather confusing.

I agree, this is not the typical case in which you'd use it.
>
> I'd like to either:
>   1. iterate a query results in a clear way:
>  for user, gmail_addrs in session.query(???)
sure, just say session.query(User, Address).outerjoin(...) and that's
what you get.   But this is a product result; if User "A" has three
addresses, you get User "A" back three times, once for each address.
This is just plain SQL behavior.


>   2. get the filtered children attached to a different field (smth
> like `contains_eager(User.addresses, attach_to='_gmail_addrs')`)

right, a "nested" result but not attached to the User.addresses
attribute.   There was a database that did this, and I even wrote a
SQLAlchemy ORM driver for it, but unfortunately it's called FoundationDB
and it doesn't seem to exist any more :).   

There isn't a specific option on contains_eager() like that because the
mechanics of mapped attributes on mapped classes are not really
"lightweight" enough for that operation to be appropriate in that way;
it could be possible that something like contains_eager() loading could
apply things to simple instance-only collections that aren't
relationships, but this isn't something built in right now.

However, if you want to group the User, Address objects, use plain
Python.   In this case itertools.groupby will give you what you need as
a one-liner, assuming the User rows for a given identifier are
contiguous (which is always the case for simple joins, or an order_by()
guarantees it):

import itertools

q = session.query(User, Address).\
    outerjoin(User.addresses).\
    filter(Address.email_address.like('%@gmail.com')).\
    order_by(User.id)

for user, collection in itertools.groupby(q, key=lambda row: row[0]):
    addresses = [row[1] for row in collection if row[1] is not None]












>
> Thanks.



Re: [sqlalchemy] Python 3 and mysql

2015-04-05 Thread Mike Bayer


On 4/5/15 1:49 PM, Thomas Tanner wrote:
> Hello,
>
> which Python 3 compatible MySQL driver is recommended for SA?
> I couldn't find a single package which both installs cleanly with pip
> and passes the SA test suite. I've tried mysqlclient, mysqlconnector,
> cymysql, oursql on OSX.

Right now the driver that has the quickest release / support cycle is
pymysql.   It should be passing 100% of the test suite as we have it
passing on CI, then again we should also have 100% coverage for
mysqlconnector as well, though it might have more skips. 
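
To select it, install pymysql and name it as the driver portion of the
URL (credentials and database name below are placeholders):

from sqlalchemy import create_engine

engine = create_engine(
    "mysql+pymysql://user:password@localhost/mydb?charset=utf8")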


>
> cheers,
>



Re: [sqlalchemy] SQLite Multithreading issue

2015-04-06 Thread Mike Bayer



On 4/6/15 7:12 PM, arash afshar wrote:
I am using sqlalchemy version 0.9.9 with the SQLite backend, which 
stores data in a file on disk.
Moreover, I am using the ORM for all my queries and handle sessions 
using scoped_session to work with multiple threads.
The application is a desktop application which should be platform 
independent.


The problem is that I receive an error which I cannot debug or trace 
back to its origin!


On some operating systems (Windows 7 and Ubuntu 14.04) I get a 
"single" error message:

  'no handlers could be found for logger "sqlalchemy.pool.nullpool"'


that's a warning, not an error.  the pool wants to say that it had 
trouble shutting down a connection.


while in another OS (CentOS), I get an error that says a session 
created in one thread cannot be used in another thread 
That's not an error message that SQLAlchemy generates, that's more of a 
SQLite issue.  The exact text and stack trace would help.


These error messages are shown only once but they do "not" hinder 
the normal operation of my python program.
that's a hint that this trace is occurring during Python garbage 
collection, which is why it just warns.




I have gone over the code many times and I am certain that I am not 
passing the session to another thread. Any ideas why this happens?
A stack trace would show more but it sounds like SQLite connections, 
cursors, or transactions are being left open and then collected by 
Python's garbage collector asynchronously (e.g. in a different thread), 
generating warnings like these.




Here is the workflow of the application:
1) I have a main thread which calls the scoped_session and stores its 
result in a module level variable called "ScopedSession"

2) The main thread creates two listener threads.
3) Each of these two listener threads, spawn multiple worker threads 
as needed.


4) In any of the above threads (main, listener, and workers) whenever 
I need to get a session, I call ScopedSession()
5) At the end of worker threads, I manually call 
ScopedSession().commit() or ScopedSession().rollback() followed by 
ScopedSession().close()
The worker threads need to ensure that they use a Session which they 
have also created, and that they close it in that same thread, since the 
Pysqlite module here forces you to use a given connection in only 
one thread.
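
One way to enforce that is to wrap the whole create / commit-or-rollback
/ close cycle in a single helper that each worker invokes in its own
thread (a sketch, reusing the module-level ScopedSession registry
described above):

from contextlib import contextmanager

@contextmanager
def thread_local_session():
    # the session is created, used, and discarded entirely
    # within the calling thread
    session = ScopedSession()
    try:
        yield session
        session.commit()
    except Exception:
        session.rollback()
        raise
    finally:
        ScopedSession.remove()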







Re: [sqlalchemy] Test test/engine/test_pool.py::QueuePoolTest::()::test_queuepool_close fails

2015-04-07 Thread Mike Bayer
it's a test that occasionally fails in some environments no matter what, 
as it's checking reference counts on objects that are expected to be 
garbage collected without any cycles.  A failure of this test is not 
strictly a "bug"; it just means that connections or cursors were not 
collected immediately.   If it is persistent and only occurs with pg8000 
then there might be some cursor cleanup issue going on.



On 4/7/15 3:22 PM, Tony Locke wrote:

Hi, I found the following SQLAlchemy test fails:

FAIL test/engine/test_pool.py::QueuePoolTest::()::test_queuepool_close

with the stack trace:

__ QueuePoolTest.test_queuepool_close 
___

Traceback (most recent call last):
  File "/home/tlocke/sqlalchemy/test/engine/test_pool.py", line 805, 
in test_queuepool_close

self._do_testqueuepool(useclose=True)
  File "/home/tlocke/sqlalchemy/test/engine/test_pool.py", line 854, 
in _do_testqueuepool

assert not pool._refs
AssertionError: assert not set([<object at 0x7fc03cf1f4a8>, <object at 0x7fc03cf1f710>, <object at 0x7fc03cf1f5c0>])
 +  where set([<object at 0x7fc03cf1f4a8>, <object at 0x7fc03cf1f710>, <object at 0x7fc03cf1f5c0>]) = pool._refs


The versions I'm using are:

platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0

I ran it against the tip of the master branch 
72329433e78e57c8589e4797df523bb598190b64


and the command I ran was a straightforward:

py.test test/engine/test_pool.py::QueuePoolTest

I'm sure I'm doing something really obviously wrong, but I'm not sure 
what, so I'd be grateful if anyone can help. Btw, it was the only test 
that failed when I ran the entire suite.


Thanks,

Tony.




Re: [sqlalchemy] Reverse delete orphan?

2015-04-07 Thread Mike Bayer



On 4/7/15 4:43 PM, Jonathan Gordon wrote:
How can I cascade delete from a many to one only when deleting the 
many means this one has no other children?


Imagine the following mapping:

class Container(Base):
    __tablename__ = 'container'
    pk = sa.Column(sa.Integer, primary_key=True)
    entries = relationship('Entry', backref='container', lazy='dynamic',
                           cascade="all, delete, delete-orphan")


class Entry(Base):
    __tablename__ = 'entry'
    pk = sa.Column(sa.Integer, primary_key=True)
    container_pk = sa.Column(sa.Integer, sa.ForeignKey('container.pk'))
    tag_pk = sa.Column(sa.Integer, sa.ForeignKey('tag.pk'))

class Tag(Base):
    __tablename__ = 'tag'
    pk = sa.Column(sa.Integer, primary_key=True)
    description = sa.Column(sa.String, nullable=False)
    entries = relationship("Entry", backref='tag')

A Container has zero or more Entry objects. An Entry may be grouped 
with other Entry objects with a Tag. A Tag is never shared with Entry 
instances from more than one Container


If I delete a Container instance, all of its Entry instances get 
deleted via cascade. What I'd also like to happen is to cascade the 
delete to all of the Tags as well.
While there is a limited form of "many-to-one cascade" supported, it 
requires that Tag is only linked to a single Entry, which I'm gathering 
is not the case here.


Typically events are used to intercept when a flush occurs so that 
additional steps can be taken to delete the extra objects.


One event approach would be the before_flush() event, you search through 
session.deleted for all Container objects; within Container, locate all 
Tag entries with that Container as the ultimate parent and mark them all 
as deleted as well.   With the before_flush() approach, you can use the 
Session and your objects in the usual way.


Another event approach would be to use the before_delete event on the 
Container mapping.When the Container row is marked for deletion, 
perform a Core level DELETE on the given connection that deletes all Tag 
rows from the "tag" table.With the before_delete() approach, the 
event occurs deep within the flush process, so it's not safe to 
manipulate the state of the Session; instead, you emit SQL directly on 
the given connection.


Still a third approach is to run a simple query at the end of the flush, 
where you search for Tag objects that have no Entry objects associated, 
and delete them.   I came up with that for this stackoverflow answer: 
http://stackoverflow.com/questions/9234082/setting-delete-orphan-on-sqlalchemy-relationship-causes-assertionerror-this-att/9264556#9264556. 
You can probably just adapt that one.
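
A sketch of the first approach, in terms of the mapping above (untested;
it assumes iterating Container.entries is acceptable inside the event):

from sqlalchemy import event
from sqlalchemy.orm import Session

@event.listens_for(Session, "before_flush")
def cascade_container_tags(session, flush_context, instances):
    # when a Container is marked deleted, mark every Tag reachable
    # through its entries as deleted as well
    for obj in session.deleted:
        if isinstance(obj, Container):
            for entry in obj.entries:
                if entry.tag is not None:
                    session.delete(entry.tag)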




Most of the documentation I've seen for cascading deletes talks about 
deleting parents cascading to their children. In this case, it seems 
like I want to delete the parent (Tag) if I'm the last child (Entry). 
I'd like to avoid putting the Container pk on the Tag since it's 
already on the Entry and that seems like it's not properly normalized.


My example seems close to the "Many-to-many orphan deletion" given in 
sqlalchemy-utils, except it's many-to-one.


http://sqlalchemy-utils.readthedocs.org/en/latest/listeners.html#many-to-many-orphan-deletion

Any ideas?












Re: [sqlalchemy] Test test/engine/test_pool.py::QueuePoolTest::()::test_queuepool_close fails

2015-04-07 Thread Mike Bayer


On 4/7/15 4:33 PM, Mike Bayer wrote:
it's a test that occasionally fails in some environments no matter what, 
as it's checking reference counts on objects that are expected to be 
garbage collected without any cycles.  A failure of this test is not 
strictly a "bug"; it just means that connections or cursors were not 
collected immediately.   If it is persistent and only occurs with 
pg8000 then there might be some cursor cleanup issue going on.



On 4/7/15 3:22 PM, Tony Locke wrote:

Hi, I found the following SQLAlchemy test fails:

FAIL test/engine/test_pool.py::QueuePoolTest::()::test_queuepool_close

with the stack trace:

__ QueuePoolTest.test_queuepool_close 
___

Traceback (most recent call last):
  File "/home/tlocke/sqlalchemy/test/engine/test_pool.py", line 805, 
in test_queuepool_close

self._do_testqueuepool(useclose=True)
  File "/home/tlocke/sqlalchemy/test/engine/test_pool.py", line 854, 
in _do_testqueuepool

assert not pool._refs
AssertionError: assert not set([<object at 0x7fc03cf1f4a8>, <object at 0x7fc03cf1f710>, <object at 0x7fc03cf1f5c0>])
 +  where set([<object at 0x7fc03cf1f4a8>, <object at 0x7fc03cf1f710>, <object at 0x7fc03cf1f5c0>]) = pool._refs


The versions I'm using are:

platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0

I ran it against the tip of the master branch 
72329433e78e57c8589e4797df523bb598190b64


and the command I ran was a straightforward:

py.test test/engine/test_pool.py::QueuePoolTest

I'm sure I'm doing something really obviously wrong, but I'm not sure 
what, so I'd be grateful if anyone can help. Btw, it was the only 
test that failed when I ran the entire suite.


Thanks,

Tony.






Re: [sqlalchemy] Test test/engine/test_pool.py::QueuePoolTest::()::test_queuepool_close fails

2015-04-07 Thread Mike Bayer



On 4/7/15 4:33 PM, Mike Bayer wrote:
it's a test that occasionally fails in some environments no matter what, 
as it's checking reference counts on objects that are expected to be 
garbage collected without any cycles.  A failure of this test is not 
strictly a "bug"; it just means that connections or cursors were not 
collected immediately.   If it is persistent and only occurs with 
pg8000 then there might be some cursor cleanup issue going on.


44a9820b4e02f65b3884fa2c016efce9663e4910 has a fix which adds explicit 
cleanup to a few connection objects that are checked out and not closed, 
which was leading the pool._refs collection to not be empty when that 
particular test starts.  This is backported to 0.9 as well.









On 4/7/15 3:22 PM, Tony Locke wrote:

Hi, I found the following SQLAlchemy test fails:

FAIL test/engine/test_pool.py::QueuePoolTest::()::test_queuepool_close

with the stack trace:

__ QueuePoolTest.test_queuepool_close 
___

Traceback (most recent call last):
  File "/home/tlocke/sqlalchemy/test/engine/test_pool.py", line 805, 
in test_queuepool_close

self._do_testqueuepool(useclose=True)
  File "/home/tlocke/sqlalchemy/test/engine/test_pool.py", line 854, 
in _do_testqueuepool

assert not pool._refs
AssertionError: assert not set([<object at 0x7fc03cf1f4a8>, <object at 0x7fc03cf1f710>, <object at 0x7fc03cf1f5c0>])
 +  where set([<object at 0x7fc03cf1f4a8>, <object at 0x7fc03cf1f710>, <object at 0x7fc03cf1f5c0>]) = pool._refs


The versions I'm using are:

platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.7.0

I ran it against the tip of the master branch 
72329433e78e57c8589e4797df523bb598190b64


and the command I ran was a straightforward:

py.test test/engine/test_pool.py::QueuePoolTest

I'm sure I'm doing something really obviously wrong, but I'm not sure 
what, so I'd be grateful if anyone can help. Btw, it was the only 
test that failed when I ran the entire suite.


Thanks,

Tony.






Re: [sqlalchemy] Any recommended approach for creating a new dialect?

2015-04-07 Thread Mike Bayer



On 4/7/15 1:59 PM, Ralph Heinkel wrote:

Hello dialect experts,

what would be the best approach for creating a SqlAlchemy dialect for 
a new database system?
Are there any recipes available for this area, or is the way to go to 
read code of existing dialects and derive my own dialect from those?


I had a first glance at some built-in dialects, and also some in 
external packages ... it is not always obvious to me why certain 
classes and methods have been implemented.
The obvious thing is to create a subclass of 
sqlalchemy.enginedefault.DefaultDialect, but how would I know which 
methods and class attributes to override/implement, except for going 
through the trial and error approach?
And then there are other classes which are implemented in some 
dialects, like compiler.DDLCompiler, compiler.GenericTypeCompiler, and 
so on ... where and how would I start best?


Any help would be very much appreciated.

Start with the README for new dialects:

https://bitbucket.org/zzzeek/sqlalchemy/src/44a9820b4e02f65b3884fa2c016efce9663e4910/README.dialects.rst?at=master

that will show the guidelines for writing new dialects.

Then to see some examples of that layout, take a look at some of the 3rd 
party dialects listed at:


http://docs.sqlalchemy.org/en/latest/dialects/index.html#production-ready

Also the "sqlalchemy-access" dialect is basically something of a "demo" 
for the layout, which I basically put there after extracting it from 
SQLAlchemy main where it had been for many years.   It might not be 100% 
up to date, but mostly follows that guideline and even passed tests at 
one point, that's at https://bitbucket.org/zzzeek/sqlalchemy-access.


The key thing you'll be doing is running the suite tests, which will be 
part of the test suite within your own dialect.  So yes, you start with 
a fairly plain subclass of DefaultDialect, then you probably want to get 
a "hello world" kind of program going where you just see if 
"create_engine" and then "engine.execute("select * from table")" work at 
all, and then the suite tests should test a lot more.
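
The starting point is roughly this shape (every name below is a
placeholder for your database and its DBAPI module):

from sqlalchemy.engine import default

class MyDBDialect(default.DefaultDialect):
    name = "mydb"        # the name used in URLs, e.g. mydb://...
    driver = "mydbapi"

    @classmethod
    def dbapi(cls):
        # import and return the DBAPI module for this database
        import mydbapi
        return mydbapi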








Ciao ciao

Ralph




Re: [sqlalchemy] Specifying "itersize" when "streaming_results=True" for psycopg2 driver ?

2015-04-08 Thread Mike Bayer



On 4/8/15 6:43 AM, Dorian Hoxha wrote:
I searched documentation, mailing list, issue-list, code (on github), 
google, and couldn't find a way to set the "itersize", the number 
of rows to get on each batch when streaming results with psycopg2.


we don't have public API for that attribute at the moment, so unless you 
want to work with the cursor directly as in 
http://docs.sqlalchemy.org/en/rel_0_9/core/connections.html#working-with-raw-dbapi-connections, 
this feature would be an execution_option supported and documented by 
the psycopg2 dialect.  Feel free to open a feature request and/or work 
up a pull request.
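
For reference, the raw-DBAPI route looks roughly like this (a sketch,
assuming an Engine named "engine"; "itersize" is psycopg2's own attribute
on server-side, i.e. named, cursors, and "handle" is a placeholder for
your row processing):

raw_conn = engine.raw_connection()
try:
    cursor = raw_conn.cursor(name="stream")  # a server-side cursor
    cursor.itersize = 1000  # rows fetched per round trip while iterating
    cursor.execute("SELECT * FROM big_table")
    for row in cursor:
        handle(row)
finally:
    raw_conn.close()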





Re: [sqlalchemy] Eagerly loading backrefs

2015-04-08 Thread Mike Bayer



On 4/8/15 3:22 AM, Sebastian Eckweiler wrote:

Hi there -

I'm having trouble working with backrefs of detached objects.
I'm basically working with an extended version of the code below:

|
class Config(Base):
    __tablename__ = 'config'

    ID = Column('ID', Integer, primary_key=True)
    name = Column('name', String)
    last_modified = Column('last_modified', DateTime,
                           default=now, onupdate=now)

    params = relationship('ConfigParam', backref='config', lazy=False)

class ConfigParam(Base):

    __tablename__ = 'config_params'

    ID = Column('ID', Integer, primary_key=True)
    ConfigID = Column('ConfigID', Integer, ForeignKey('config.ID'),
                      nullable=False)

    key = Column('key', String)
    value = Column('value', Float)
|

Now when I load a Config instance I'd have assumed that 
Config.params[0].config should be populated - since the eager query 
contains all necessary information.
A load that can be resolved using the session's identity map instead of 
a SELECT is still a load nonetheless.  You would need a lazy setting on 
the backref as well.   Since these are many-to-ones and the lazy load 
pulls from the identity map, I suggest "immediate".


Using these settings at the mapping level does mean however that if you 
load a ConfigParam object by itself, it will, for a clean session, 
immediately issue a second SELECT statement for its Config object.
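
Concretely, the relationship in the mapping above would look something
like this (a sketch, using the backref() function in place of the plain
string form):

from sqlalchemy.orm import relationship, backref

params = relationship(
    'ConfigParam',
    lazy=False,
    backref=backref('config', lazy='immediate'),
)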






Testing this with a sqlite engine using the following code:

|
s = Session()
c = Config(name='my_config')
c.params += [ConfigParam(key='a', value=1),
             ConfigParam(key='b', value=2)]
s.add(c)
s.commit()
s.close()
del c

# reload from new session:
logging.info('\n\n')
logging.info('Starting with new session:')
s = Session()
c = s.query(Config).filter(Config.name == 'my_config').one()
s.close()
for p in c.params:
    logging.info('Params: %s = %s' % (p.key, p.value))
for p in c.params:
    logging.info('backref: config = %s' % p.config)
|

however produces (omitting the create_all) the following output:

INFO:sqlalchemy.engine.base.Engine:()
INFO:sqlalchemy.engine.base.Engine:COMMIT
INFO:sqlalchemy.engine.base.Engine:BEGIN (implicit)
INFO:sqlalchemy.engine.base.Engine:INSERT INTO config (name, 
last_modified) VALUES (?, ?)
INFO:sqlalchemy.engine.base.Engine:('my_config', '2015-04-08 
09:04:16.111000')
INFO:sqlalchemy.engine.base.Engine:INSERT INTO config_params 
("ConfigID", "key", value) VALUES (?, ?, ?)

INFO:sqlalchemy.engine.base.Engine:(1, 'a', 1.0)
INFO:sqlalchemy.engine.base.Engine:INSERT INTO config_params 
("ConfigID", "key", value) VALUES (?, ?, ?)

INFO:sqlalchemy.engine.base.Engine:(1, 'b', 2.0)
INFO:sqlalchemy.engine.base.Engine:COMMIT
INFO:root:


INFO:root:Starting with new session:
INFO:sqlalchemy.engine.base.Engine:BEGIN (implicit)
INFO:sqlalchemy.engine.base.Engine:SELECT config."ID" AS "config_ID", 
config.name AS config_name, config.last_modified AS 
config_last_modified, config_params_1."ID" AS "config_params_1_ID", 
config_params_1."ConfigID" AS "config_params_1_ConfigID", 
config_params_1."key" AS config_params_1_key, config_params_1.value AS 
config_params_1_value
FROM config LEFT OUTER JOIN config_params AS config_params_1 ON 
config."ID" = config_params_1."ConfigID"

WHERE config.name = ?
INFO:sqlalchemy.engine.base.Engine:('my_config',)
INFO:sqlalchemy.engine.base.Engine:ROLLBACK
INFO:root:Params:a = 1.0
INFO:root:Params:b = 2.0
Traceback (most recent call last):
  File "D:/Assess/Kiln/assess/scratch/sqla_example.py", line 77, in 


logging.info('backref: config = %s' % p.config)
  File 
"D:\Anaconda\envs\assess\lib\site-packages\sqlalchemy\orm\attributes.py", 
line 239, in __get__

return self.impl.get(instance_state(instance), dict_)
  File 
"D:\Anaconda\envs\assess\lib\site-packages\sqlalchemy\orm\attributes.py", 
line 591, in get

value = self.callable_(state, passive)
  File 
"D:\Anaconda\envs\assess\lib\site-packages\sqlalchemy\orm\strategies.py", 
line 507, in _load_for_state

(orm_util.state_str(state), self.key)
sqlalchemy.orm.exc.DetachedInstanceError: Parent instance <ConfigParam at 
0x307d898> is not bound to a Session; lazy load operation of 
attribute 'config' cannot proceed


I'm somewhat new to ORM's -
naively I had assumed the backref should/could be loaded at the same 
time the parent is eagerly loaded, since all required information is 
retrieved in the above query.

Is there anything I'm missing here?

ps: This is on sqlalchemy 0.9.8 - if that should matter.





Re: [sqlalchemy] Unable to make referential integrity work

2015-04-10 Thread Mike Bayer



On 4/10/15 4:10 PM, Shola Smith wrote:
I am unable to make reference between two tables work using 
flask-sqlalchemy. Please find attached the file.


I get an error "expected string or buffer". Please, could someone tell 
me what I am doing wrong? And the best way to go about it. Thanks
the script works fine for me, though I'm running it without the flask 
dependency.   Can you post the full stack trace for the error 
message?   Also, what OS is this, as I see you are using case-sensitive 
identifier names for columns and MySQL has a problem with that depending 
on the platform.









Re: [sqlalchemy] Constraint naming conventions and MySQL's 64 character name limit

2015-04-10 Thread Mike Bayer



On 4/10/15 4:55 PM, Giovanni Torres wrote:



CREATE TABLE foobar (
 id INTEGER NOT NULL AUTO_INCREMENT,
 CONSTRAINT pk_foobar PRIMARY KEY (id)
)

need a test case illustrating the failure.
The main problem with this is that I can’t see a way to modify primary keys 
with alembic in a way that works with MySQL and Postgres.

MySQL lets you create a table with a proper primary key name, but then you 
can’t use the name. See 
http://dev.mysql.com/doc/refman/5.5/en/create-table.html, specifically:

"A PRIMARY KEY is a unique index where all key columns must be defined as NOT 
NULL. If they are not explicitly declared as NOT NULL, MySQL declares them so 
implicitly (and silently). A table can have only one PRIMARY KEY. The name of a 
PRIMARY KEY is always PRIMARY, which thus cannot be used as the name for any other 
kind of index.”

Therefore, if I have a table as follows, which I then try to modify the primary 
key:

t = Table('foobar', metadata,
 Column('id', Integer, primary_key=True),
 Column('foo', Integer)
 )

An Alembic script would look like this *for MySQL*:

def upgrade():
    op.drop_constraint('pk_foobar', 'foobar', type_='primary')
    op.create_primary_key(None, 'foobar', ['id', 'foo'])

However, it doesn’t work. I get: (1075, 'Incorrect table definition; there can 
be only one auto column and it must be defined as a key') 'ALTER TABLE foobar 
DROP PRIMARY KEY ' ()

alembic --sql upgrade 675e5c38a0b:head shows:

-- Running upgrade 675e5c38a0b -> c37885f5cff

ALTER TABLE foobar DROP PRIMARY KEY;

ALTER TABLE foobar ADD PRIMARY KEY (id, foo);

UPDATE alembic_version SET version_num='c37885f5cff' WHERE 
alembic_version.version_num = '675e5c38a0b';

Then, I proceed to do it as follows:

def upgrade():
    op.execute('ALTER TABLE foobar DROP PRIMARY KEY, ADD PRIMARY KEY (id, foo)')

Which works OK; however, it doesn't work in Postgres, where it's invalid SQL. To 
make it work in Postgres I would do:
well, like so many openstack scripts I see, you'd need to make this 
conditional on MySQL for now:


if op.get_bind().name == "mysql":
    op.execute(" ... ")
else:
    op.drop_constraint()
    op.create_primary_key(...)

within Alembic we'd probably need a new op directive that does an 
in-place alter of a PK in a backend-agnostic way.






def upgrade():
    op.drop_constraint('pk_foobar', 'foobar')
    op.create_primary_key(None, 'foobar', ['id', 'foo'])

Which works very well, alembic --sql upgrade 675e5c38a0b:head shows:

-- Running upgrade 675e5c38a0b -> c37885f5cff

ALTER TABLE foobar DROP CONSTRAINT pk_foobar;

ALTER TABLE foobar ADD CONSTRAINT pk_foobar PRIMARY KEY (id, foo);

UPDATE alembic_version SET version_num='c37885f5cff' WHERE 
alembic_version.version_num = '675e5c38a0b';

COMMIT;

To conclude, I don't know how to make this work with both MySQL and Postgres.


o I also seem to be hitting a bug in sqlalchemy similar to this one: 
https://bitbucket.org/zzzeek/sqlalchemy/issue/3067/naming-convention-exception-for-boolean.
 I get this error: InvalidRequestError: Naming convention including 
%(constraint_name)s token requires that constraint is explicitly named.

Several approaches to this.

The first is that you specify constraint_name for your Boolean type, using the 
"name" parameter.

Second, forego the use of a CHECK constraint with the Boolean type by using 
create_constraint=False.

Third, instead of using %(constraint_name)s in your CHECK constraint, you use 
%(column_0_name)s.   Barbican will have to upgrade to SQLAlchemy 1.0 for this, 
but the good news is that SQLA 1.0 will be released before the Vancouver summit 
and you can invite me to a Barbican session there in order to start selling 
this.
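
For the third approach, the convention is keyed on the column name rather
than on a constraint name, roughly like this (a sketch; only the "ck"
entry matters for the Boolean case):

from sqlalchemy import MetaData

metadata = MetaData(naming_convention={
    "pk": "pk_%(table_name)s",
    "ck": "ck_%(table_name)s_%(column_0_name)s",
})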

Thanks for the offer! However, this is a bit over my head at the moment. I’m 
just trying to get my first commit in. But, hopefully the code review generates 
some discussion and I could try to push it there. I also know one of the core 
members, which might help.


Fourth, essentially emulate 1.0's behavior by removing "ck_" from the naming convention 
and using a straight "after_parent_attach" event to set up the name; this is how naming 
conventions were done before the feature was added.  This is illustrated here: 
https://bitbucket.org/zzzeek/sqlalchemy/wiki/UsageRecipes/NamingConventions . This is also a 
potential approach to the foreign key issue as well.

Thanks, this seems to me the way to go. I could fix the constraint_name issue 
and the 64 character limit in one go.


I'm zzzeek on #openstack-dev if you want to chat further!

Thanks for all the help!


All this is compounded by the fact that we're also trying to support Postgres 
(which works very well by the way), so we can't only cater to MySQL's nuisances.

The bug I'm trying to fix is in a future OpenStack component: 
https://bugs.launchpad.net/barbican/+bug/1415869

Any suggestion or shared experiences about how to deal with any of these issues 
is welcome!

--
Giovanni



Re: [sqlalchemy] Unable to make referential integrity work

2015-04-10 Thread Mike Bayer
line 602, in _emit_insert_statements

execute(statement, params)
  File 
"C:\code-environs\iflask\lib\site-packages\sqlalchemy\engine\base.py", 
line 729, in execute

return meth(self, multiparams, params)
  File 
"C:\code-environs\iflask\lib\site-packages\sqlalchemy\sql\elements.py", line 
322, in _execute_on_connection

return connection._execute_clauseelement(self, multiparams, params)
  File 
"C:\code-environs\iflask\lib\site-packages\sqlalchemy\engine\base.py", 
line 826, in _execute_clauseelement

compiled_sql, distilled_params
  File 
"C:\code-environs\iflask\lib\site-packages\sqlalchemy\engine\base.py", 
line 958, in _execute_context

context)
  File 
"C:\code-environs\iflask\lib\site-packages\sqlalchemy\engine\base.py", 
line 1162, in _handle_dbapi_exception

util.reraise(*exc_info)
  File 
"C:\code-environs\iflask\lib\site-packages\sqlalchemy\engine\base.py", 
line 951, in _execute_context

context)
  File 
"C:\code-environs\iflask\lib\site-packages\sqlalchemy\engine\default.py", 
line 436, in do_execute

cursor.execute(statement, parameters)
  File 
"C:\code-environs\iflask\lib\site-packages\pymysql-0.6.6-py2.7.egg\pymysql\cursors.py", 
line 132, in execute

query = query % self._escape_args(args, conn)
  File 
"C:\code-environs\iflask\lib\site-packages\pymysql-0.6.6-py2.7.egg\pymysql\cursors.py", 
line 98, in _escape_args

return tuple(conn.escape(arg) for arg in args)
  File 
"C:\code-environs\iflask\lib\site-packages\pymysql-0.6.6-py2.7.egg\pymysql\cursors.py", 
line 98, in <genexpr>

return tuple(conn.escape(arg) for arg in args)
  File 
"C:\code-environs\iflask\lib\site-packages\pymysql-0.6.6-py2.7.egg\pymysql\connections.py", 
line 729, in escape

return escape_item(obj, self.charset, mapping=mapping)
  File 
"C:\code-environs\iflask\lib\site-packages\pymysql-0.6.6-py2.7.egg\pymysql\converters.py", 
line 33, in escape_item

val = encoder(val, mapping)
  File 
"C:\code-environs\iflask\lib\site-packages\pymysql-0.6.6-py2.7.egg\pymysql\converters.py", 
line 74, in escape_unicode

    return escape_str(value, mapping)
  File 
"C:\code-environs\iflask\lib\site-packages\pymysql-0.6.6-py2.7.egg\pymysql\converters.py", 
line 71, in escape_str

return "'%s'" % escape_string(value, mapping)
  File 
"C:\code-environs\iflask\lib\site-packages\pymysql-0.6.6-py2.7.egg\pymysql\converters.py", 
line 68, in escape_string

lambda match: ESCAPE_MAP.get(match.group(0)), value),))
TypeError: expected string or buffer


On Fri, Apr 10, 2015 at 9:27 PM, Mike Bayer <mike...@zzzcomputing.com> wrote:




On 4/10/15 4:10 PM, Shola Smith wrote:

I am unable to make reference between two tables work using
flask-sqlalchemy. Please find attached the file.

I get an error "expected string or buffer". Please, could someone
tell me what I am doing wrong? And the best way to go about it.
Thanks

the script works fine to me, though I'm running it without the
flask dependency.   Can you post the full stack trace for the
error message?Also what OS is this, as I see you are using
case-sensitive identifier names for columns and MySQL has a
problem with that based on platform.











Re: [sqlalchemy] Nested bundles, labels and "Ambiguous column name"

2015-04-09 Thread Mike Bayer



On 4/9/15 1:50 PM, Jacob Magnusson wrote:


I have this case with a bundle that looks something like this:

|
primate_bundle = Bundle(
    'primate',
    Primate.name,
    Bundle('wooden_tool', *[
        WoodenTool.id,
        WoodenTool.name,
        WoodenToolCategory.name.label('category'),
    ]),
    Bundle('solid_tool', *[
        SolidTool.id,
        SolidTool.name,
        SolidToolCategory.name.label('category'),
    ]),
)
|


Then I query it like this:

|
session.query(primate_bundle) \
    .select_from(Primate) \
    .join(WoodenTool, Primate.main_tool) \
    .join(WoodenToolCategory, WoodenTool.category_id == WoodenToolCategory.id) \
    .join(SolidTool, Primate.secondary_tool) \
    .join(SolidToolCategory, SolidTool.category_id == SolidToolCategory.id) \
    .all()
|


However, since the label for category name is the same within both 
sub-bundles it will throw |Ambiguous column name| (because the 
compiled SQL labels will be exactly the same). Adding |.with_labels()| 
doesn’t fix it. Full traceback can be seen by running the included 
examples. Commenting out one of the |.label()| lines in the example 
makes it runnable. Do you guys have a clean solution to support this 
use case? I really like this feature of creating your own custom made 
results so it would be a shame to not be able to do this.


Yeah, OK I see why that is; I'll try to take a look at this later 
today.  The Bundle thing is obviously new and you're the first person 
I'm seeing actually use it.    You might need to work around it for now :/







Tested on SQLAlchemy 1.0.0b5 and 0.9.9. Python 3.

Thank you so much for any potential help you can give me on this. I’ve 
followed the source code for |Bundle| but I can’t think of a clean way 
to do this…






Re: [sqlalchemy] listening for events on 'mapper'

2015-04-08 Thread Mike Bayer



On 4/8/15 5:14 PM, Tim Tisdall wrote:

I have some code I'm trying to figure out...  Here it is:

from sqlalchemy import event
from colanderalchemy import setup_schema
from sqlalchemy.orm import mapper

event.listen(mapper, 'mapper_configured', setup_schema)


It appears to call `setup_schema` on every ORM class that derives from 
the `declarative_base()`.  All the documentation in SQLAlchemy shows 
event.listen being called on specific classes or mappers and I'm 
wondering if this usage is a documented feature.


yes, mapper events can be passed the "mapper" function itself, which is 
recognized as one way to assign an event to all mappers.
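
In decorator form that looks like this (a sketch; the handler body just
logs each class as its mapper is configured):

from sqlalchemy import event
from sqlalchemy.orm import mapper

@event.listens_for(mapper, 'mapper_configured')
def on_mapper_configured(mapper_obj, class_):
    # fires once for every mapped class, across all declarative bases
    print("configured mapper for %s" % class_.__name__)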



I found a mention in the 0.7 docs, but nowhere else: 
 http://docs.sqlalchemy.org/en/rel_0_7/orm/events.html#sqlalchemy.orm.events.MapperEvents
that would qualify as "documented" :)   The formatting is cleaner in 
http://docs.sqlalchemy.org/en/latest/orm/events.html#sqlalchemy.orm.events.MapperEvents.









Re: [sqlalchemy] updating a unicode column

2015-04-08 Thread Mike Bayer



On 4/8/15 9:38 PM, arash afshar wrote:

I am calling commit, right after updating those two values.
Any suggestion on what else to check?


what is the exact column type given to this column, are there any 
event.listen() things going on, and also what does a repr() of the 
object's __dict__ look like?


in particular, Python has a really sneaky issue where if you happen to 
have an errant comma:


x = "foo",


you get:

("foo", )






On Wednesday, April 8, 2015 at 7:33:49 PM UTC-6, Michael Bayer wrote:



On 4/8/15 9:16 PM, arash afshar wrote:

When I try to update a row using ORM in the following way, it
shows the following error:
InterfaceError: (InterfaceError) Error binding parameter 0 -
probably unsupported type. u'UPDATE table_name SET unicode_col=?,
int_col=? WHERE table_name.id  = ?'
((u'abcd',), 1, 2)


I suspect the problem is that the value corresponding to
unicode_col is (u'abcd',) as opposed to a simple u'abcd'.

that is the problem.



Here is how I update the database:
1) I first query it and obtain an object representing the row
that I need   -->  file_row
2) I perform some checks and change the values  -->
file_row.int_col = 1  followed by  file_row.unicode_col = u"abcd"
3) I call commit on the session



looks fine, so, I'd imagine that somehow that unicode_col =
u'abcd' is not what it seems.  Either that's not what's there or
something is interfering with it after the fact.





I am pretty sure the problem is not the original type of value of
the unicode_col since the following succeeds without error:

query(TableName).filter(TableName.id == file_row.id
).update({'unicode_col': u"abcd", 'int_col': 1})


Any idea how to fix the error?
Thanks




Re: [sqlalchemy] updating a unicode column

2015-04-08 Thread Mike Bayer



On 4/8/15 9:16 PM, arash afshar wrote:
When I try to update a row using ORM in the following way, it shows 
the following error:
InterfaceError: (InterfaceError) Error binding parameter 0 - probably 
unsupported type. u'UPDATE table_name SET unicode_col=?, int_col=? 
WHERE table_name.id = ?' ((u'abcd',), 1, 2)



I suspect the problem is that the value corresponding to unicode_col 
is (u'abcd',) as opposed to a simple u'abcd'.

that is the problem.



Here is how I update the database:
1) I first query it and obtain an object representing the row that I 
need   -->  file_row
2) I perform some checks and change the values  -->  file_row.int_col = 1 
followed by  file_row.unicode_col = u"abcd"

3) I call commit on the session



looks fine, so, I'd imagine that somehow that unicode_col = u'abcd' is 
not what it seems.  Either that's not what's there or something is 
interfering with it after the fact.





I am pretty sure the problem is not the original type of value of the 
unicode_col since the following succeeds without error:


query(TableName).filter(TableName.id == 
file_row.id).update({'unicode_col': u"abcd", 'int_col': 1})



Any idea how to fix the error?
Thanks


Re: [sqlalchemy] use of between() with Column.op()

2015-04-10 Thread Mike Bayer



On 4/10/15 10:23 PM, Horcle wrote:
I really dig use of the column operator for constructing queries, but 
have been unsuccessful with using this when the argument is "between." 
I read somewhere that Column.op(var) when var = "in_" does not work, 
so I would assume that this is true with "between."



between is a ternary operator, that is, it is "x BETWEEN a AND b".  op() isn't 
built with that in mind but you can always chain, such as:


>>> print column('x').op('BETWEEN')(and_(column('a'), column('b')))
x BETWEEN a AND b


depends on what you want to do really (why not just use between() ?)
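
For comparison, a minimal sketch of the built-in operator, which renders 
the same ternary expression directly:

from sqlalchemy import column

# ColumnOperators.between() takes the two boundary expressions
print column('x').between(column('a'), column('b'))
# x BETWEEN a AND b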




Am I doing something wrong, or is this an expected behavior? If so, is 
there a more general way to deal with evaluating all passed column 
operators so that I don't have to have separate conditions for the 
between and in operators?


Thanks in advance!

Greg--

--
Greg M. Silverman
Senior Developer Analyst
Cardiovascular Informatics 
University of Minnesota




Re: [sqlalchemy] GeoAlchemy2 mutation tracking on Geometry Column?

2015-04-13 Thread Mike Bayer




On 4/9/15 1:38 PM, joe meiring wrote:
Is there some way to implement mutation tracking on a geoalchemy2 
Geometry (POLYGON) Column? Can I just wrap it in a MutableDict?


I haven't worked with geoalchemy in many years; I'd advise just giving it 
a try and/or digging into geoalchemy's source to see if that's feasible.







Re: [sqlalchemy] Decimals generated as strings in query

2015-04-13 Thread Mike Bayer



On 4/13/15 11:50 AM, Gabriel Becedillas wrote:

Dear all,
I have a table that has 2 numeric columns, and I'm writing a query 
that performs some arithmetic on the filter clause between those 
columns and a Decimal. The problem that I'm facing is that I don't get 
any results at all. After a while I realized that the SQL statement 
getting generated is dumping Decimals as strings, and when strings are 
involved in a numeric expression they get converted to floats. So, my 
query is not returning anything at all due to float representation 
limitations.


I tried casting my decimals using sqlalchemy.cast(..., 
sqlalchemy.Numeric(precision=16, scale=8)) but that didn't work 
because the filter expression failed. Can anyone help me in getting a 
cast over a query parameter to work in a filter expression ?


this is driver stuff.  If you change the query to see what you're getting:

query = session.query(Balance.available_balance + amount, 
Balance.full_balance)


you can see there's some floating point noise in there, not to mention 
we're even getting the value back as a floating point:


Col (u'anon_1', u'balance_full_balance')
2015-04-13 13:10:39,798 DEBUG sqlalchemy.engine.base.Engine Row 
(3.0000000000000004e-08, Decimal('3E-8'))


I'm getting the same result with MySQL-Python, PyMySQL, and 
Mysql-connector.   The issue is definitely in the drivers however, the 
code below produces no result for all three drivers:


conn = engine.raw_connection()
cursor = conn.cursor()
cursor.execute(
"SELECT balance.available_balance + %s AS anon_1, "
"balance.full_balance AS balance_full_balance FROM balance "
"WHERE balance.available_balance + %s <= balance.full_balance",
(amount, amount))

print cursor.fetchall()

If I embed the number 000.1 as is into the query without using a 
parameter, then it works, as it does on the command line.


Looking into PyMySQL since that's the driver I'm most familiar with, if 
we send a Decimal it is doing this:


SELECT balance.available_balance + 1E-8 AS anon_1, balance.full_balance 
AS balance_full_balance FROM balance WHERE balance.available_balance + 
1E-8 <= balance.full_balance


What is interesting is that these values as passed in an INSERT *are* 
working correctly.Which means really, this is a MySQL bug; I can 
prove it at the command line.


First, we illustrate that scientific notation *is* accepted directly by 
MySQL:


mysql> insert into balance (full_balance, available_balance) values 
(3E-8, 2E-8);

Query OK, 1 row affected (0.00 sec)


values go in just fine (note I increased the scale in the table here, 
hence the two trailing 0's):


mysql> select * from balance;
+----+--------------+-------------------+
| id | full_balance | available_balance |
+----+--------------+-------------------+
|  2 | 0.0000000300 |      0.0000000200 |
+----+--------------+-------------------+
1 row in set (0.00 sec)

but in the WHERE clause, *it fails*:

mysql> select * from balance where available_balance + 1E-8 <= full_balance;
Empty set (0.00 sec)

writing out the whole value, *it succeeds*:

mysql> select * from balance where available_balance + 0.00000001 <= 
full_balance;

+----+--------------+-------------------+
| id | full_balance | available_balance |
+----+--------------+-------------------+
|  2 | 0.0000000300 |      0.0000000200 |
+----+--------------+-------------------+
1 row in set (0.00 sec)

we can see that *MySQL itself is doing floating point*, so that's really 
the bug here:


mysql> select available_balance + 1E-8 from balance;
+----------------------------+
| available_balance + 1E-8   |
+----------------------------+
|       0.000000030000000004 |
+----------------------------+
1 row in set (0.00 sec)

We can in fact make it work with a CAST.  However!  crazytown time. Even 
though NUMERIC and DECIMAL are equivalent in MySQL, cast will *not* 
accept NUMERIC (note SQLAlchemy only warns on these and only as of 1.0 I 
think):


mysql> select available_balance + CAST(1E-8 AS NUMERIC) from balance;
ERROR 1064 (42000): You have an error in your SQL syntax; check the 
manual that corresponds to your MySQL server version for the right 
syntax to use near 'NUMERIC) from balance' at line 1


But it *will* accept DECIMAL:

mysql> select available_balance + CAST(1E-8 AS DECIMAL) from balance;
+--------------------------------------------+
| available_balance + CAST(1E-8 AS DECIMAL)  |
+--------------------------------------------+
|                               0.0000000200 |
+--------------------------------------------+
1 row in set (0.00 sec)

So there's our answer:

from sqlalchemy import cast, DECIMAL

amount = decimal.Decimal('0.00000001')
query = session.query(Balance.available_balance + cast(amount, 
DECIMAL()), Balance.full_balance)

query = query.filter(
Balance.available_balance + cast(amount, DECIMAL()) <= 
Balance.full_balance

)


SELECT balance.available_balance + CAST(%s AS DECIMAL)

Re: [sqlalchemy] Decimals generated as strings in query

2015-04-13 Thread Mike Bayer



On 4/13/15 2:25 PM, Gabriel Becedillas wrote:

Dear Michael,
Thanks a lot for your reply.
In trying to narrow the problem as much as possible, I missed 
something important in my example. I'm actually doing an UPDATE, not a 
SELECT. When I wrote 'I tried casting my decimals using 
sqlalchemy.cast(..., sqlalchemy.Numeric(precision=16, scale=8)) but 
that didn't work because the filter expression failed' I meant that it 
didn't work in an update scenario. In a select scenario it works ok. 
This is what I should have written in my bug_test.py:


amount = decimal.Decimal('0.00000001')
query = session.query(Balance)
query = query.filter(
Balance.available_balance + sqlalchemy.cast(amount,
sqlalchemy.Numeric(precision=16, scale=8)) <= Balance.full_balance
)

values = {}
values[Balance.available_balance] = Balance.available_balance + amount
row_count = query.update(values)
print row_count, "rows updated"


and the error I get is 'sqlalchemy.exc.InvalidRequestError: Could not 
evaluate current criteria in Python. Specify 'fetch' or False for the 
synchronize_session parameter.'. This is not even getting to MySQL.

for query.update() you usually want to send synchronize_session=False.

also the cast() needs to be Decimal(), not Numeric().  Will not work 
with Numeric().
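
Putting the two corrections together, a minimal sketch assuming the 
Balance mapping and the session from earlier in the thread:

import decimal
from sqlalchemy import cast, DECIMAL

amount = decimal.Decimal('0.00000001')

query = session.query(Balance).filter(
    Balance.available_balance + cast(amount, DECIMAL(16, 8))
    <= Balance.full_balance
)
# bulk UPDATE; skip the in-Python session synchronization step
row_count = query.update(
    {Balance.available_balance:
        Balance.available_balance + cast(amount, DECIMAL(16, 8))},
    synchronize_session=False,
)
print row_count, "rows updated"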






Thanks a lot

On Monday, April 13, 2015 at 2:46:14 PM UTC-3, Michael Bayer wrote:



On 4/13/15 11:50 AM, Gabriel Becedillas wrote:

Dear all,
I have a table that has 2 numeric columns, and I'm writing a
query that performs some arithmetic on the filter clause between
those columns and a Decimal. The problem that I'm facing is that
I don't get any results at all. After a while I realized that the
SQL statement getting generated is dumping Decimals as strings,
and when strings are involved in a numeric expression they get
converted to floats. So, my query is not returning anything at
all due to float representation limitations.

I tried casting my decimals using sqlalchemy.cast(...,
sqlalchemy.Numeric(precision=16, scale=8)) but that didn't work
because the filter expression failed. Can anyone help me in
getting a cast over a query parameter to work in a filter
expression ?


this is driver stuff.  If you change the query to see what you're
getting:

query = session.query(Balance.available_balance + amount,
Balance.full_balance)

you can see there's some floating point noise in there, not to
mention we're even getting the value back as a floating point:

Col (u'anon_1', u'balance_full_balance')
2015-04-13 13:10:39,798 DEBUG sqlalchemy.engine.base.Engine Row
(3.0000000000000004e-08, Decimal('3E-8'))

I'm getting the same result with MySQL-Python, PyMySQL, and
Mysql-connector.   The issue is definitely in the drivers however,
the code below produces no result for all three drivers:

conn = engine.raw_connection()
cursor = conn.cursor()
cursor.execute(
"SELECT balance.available_balance + %s AS anon_1, "
"balance.full_balance AS balance_full_balance FROM balance "
"WHERE balance.available_balance + %s <=
balance.full_balance",
(amount, amount))

print cursor.fetchall()

If I embed the number 000.1 as is into the query without using
a parameter, then it works, as it does on the command line.

Looking into PyMySQL since that's the driver I'm most familiar
with, if we send a Decimal it is doing this:

SELECT balance.available_balance + 1E-8 AS anon_1,
balance.full_balance AS balance_full_balance FROM balance WHERE
balance.available_balance + 1E-8 <= balance.full_balance

What is interesting is that these values as passed in an INSERT
*are* working correctly.Which means really, this is a MySQL
bug; I can prove it at the command line.

First, we illustrate that scientific notation *is* accepted
directly by MySQL:

mysql> insert into balance (full_balance, available_balance)
values (3E-8, 2E-8);
Query OK, 1 row affected (0.00 sec)


values go in just fine (note I increased the scale in the table
here, hence the two trailing 0's):

mysql> select * from balance;
+----+--------------+-------------------+
| id | full_balance | available_balance |
+----+--------------+-------------------+
|  2 | 0.0000000300 |      0.0000000200 |
+----+--------------+-------------------+
1 row in set (0.00 sec)

but in the WHERE clause, *it fails*:

mysql> select * from balance where available_balance + 1E-8 <=
full_balance;
Empty set (0.00 sec)

writing out the whole value, *it succeeds*:

mysql> select * from balance where available_balance + 0.00000001
<= full_balance;
+----+--------------+-------------------+
| id | full_balance | available_balance |
+----+--------------+-------------------+
| 

Re: [sqlalchemy] polymorphic inheritance and unique constraints

2015-04-13 Thread Mike Bayer



On 4/13/15 4:59 PM, Richard Gerd Kuesters | Pollux Automation wrote:
well, this didn't work with upstream 1.0 - sorry, I was in another 
project and couldn't test it myself.



you're not doing the same thing this user was doing in any case...



Traceback (most recent call last):
  File "database_test.py", line 46, in 
from plx.db.core import *
  File "../src/plx/db/core.py", line 901, in 
UniqueConstraint(ContainerInstance.batch_id, 
ContainerAggregation.container_descriptor_id,)
  File 
"/home/richard/.pyenv/versions/vpak-pollux-2.7.9/lib/python2.7/site-packages/sqlalchemy/sql/schema.py", 
line 2464, in __init__
ColumnCollectionMixin.__init__(self, *columns, 
_autoattach=_autoattach)
  File 
"/home/richard/.pyenv/versions/vpak-pollux-2.7.9/lib/python2.7/site-packages/sqlalchemy/sql/schema.py", 
line 2393, in __init__

self._check_attach()
  File 
"/home/richard/.pyenv/versions/vpak-pollux-2.7.9/lib/python2.7/site-packages/sqlalchemy/sql/schema.py", 
line 2429, in _check_attach

table.description)
sqlalchemy.exc.ArgumentError: Column(s) 
'container_aggregation.fk_container_descriptor_id' are not part of 
table 'container_instance'.


I got sqlalchemy from git, today.

>>> sqlalchemy.__version__
'1.0.0'

container_aggregation is a subclass of container_instance. I'm not 
using concrete inheritance here, may this be the problem?


anything else, it's Python 2.7.9 + Linux + PostgreSQL 9.4.1.


cheers,
richard.

On 03/24/2015 08:49 PM, Michael Bayer wrote:

are these two separate constraints?  I just looked and it seems like they are 
distinct.

I just added a fix to 1.0 because someone was hacking around something similar 
to this.

The easiest way to get these for the moment is just to create the 
UniqueConstraint outside of the class definition.

class Foo(Base):
 # …

class Bar(Foo):
# …

UniqueConstraint(Bar.x, Foo.y)

that way all the columns are set up, should just work.



Richard Gerd Kuesters | Pollux  wrote:


well, understanding better the docs for column conflicts, can i use a 
declared_attr in a unique constraint? if yes, my problem is solved :)


On 03/24/2015 10:33 AM, Michael Bayer wrote:

Richard Gerd Kuesters | Pollux wrote:



hi all!

i'm dealing with a little problem here. i have a parent table and its two 
inheritances. there is a value that both children have and must be unique along 
either types. is there a way to move this column to the parent and use a 
constraint in the child? my implementation is postgres 9.4+ with psycopg2 only.

if this is single table inheritance then the constraint would most ideally
be placed on the parent class.

if you’re trying to make this “magic” such that you can semantically keep
the unique constraints on the child classes, you’d need to build out a
conditional approach within @declared_attr. IMO I think this is an idealized
edge case that in the real world doesn’t matter much - just do what works
(put the col / constraint on the base).

the approach is described at

http://docs.sqlalchemy.org/en/rel_0_9/orm/extensions/declarative/inheritance.html#resolving-column-conflicts .
You’d need to make this work for both the column and the constraint.




as a simple example (i'm just creating this example to simplify things), this 
works:

class MyParent(Base):

 foo_id = Column(Integer, Sequence('foo_id_seq'), primary_key=True)
 foo_name = Column(Unicode(64), nullable=False)
 foo_type = Column(Integer, nullable=False)

 __mapper_args__ = {
 "polymorphic_on": foo_type,
 "polymorphic_identity": 0
 }


class MyChild1(MyParent):

 foo_id = Column(Integer, ForeignKey(MyParent.foo_id), primary_key=True)
 bar_id = Column(Integer, ForeignKey(AnotherEntity.bar_id), nullable=False)
 child1_specific_name = Column(Unicode(5), nullable=False)
 child1_baz_stuff = Column(Boolean, default=False)

 __mapper_args__ = {
 "polymorphic_identity": 1
 }

 __table_args__ = (
 UniqueConstraint(bar_id, child1_specific_name,),  # works, bar_id is 
in MyChild1
 )


class MyChild2(MyParent):

 foo_id = Column(Integer, ForeignKey(MyParent.foo_id), primary_key=True)
 bar_id = Column(Integer, ForeignKey(AnotherEntity.bar_id), nullable=False)
 child2_specific_code = Column(UUID, nullable=False)
 child2_baz_stuff = Column(Float, nullable=False)
 
 __mapper_args__ = {

 "polymorphic_identity": 2
 }

 __table_args__ = (
 UniqueConstraint(bar_id, child2_specific_code,),  # works, bar_id is 
in MyChild2
 )


but i would like to do this, if possible:

class MyParent(Base):

 foo_id = Column(Integer, Sequence('foo_id_seq'), primary_key=True)
 foo_name = Column(Unicode(64), nullable=False)
 foo_type = Column(Integer, nullable=False)
 bar_id = Column(Integer, ForeignKey(AnotherEntity.bar_id), nullable=False) 
 # since both children use bar_id, why not have it on the parent?


Re: [sqlalchemy] "evaluate" strategy for a bulk delete seems to mishandle cases where column and attribute names differ

2015-04-14 Thread Mike Bayer



On 4/14/15 6:38 AM, Steven Winfield wrote:

Hi,

I think I've found a bug triggered by bulk deletes that use the 
(default) "evaluate" strategy.


a bug is created at 
https://bitbucket.org/zzzeek/sqlalchemy/issue/3365/evaluator-cant-locate-orm-entity-when 
and for now you need to compare using the columns, not the relationship, 
e.g. Child._id_parent == parent.id, if you want to use evaluate there.
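
In terms of the example quoted below, that workaround is a one-line 
change to the delete (same mapping and session):

# compare on the FK column attribute rather than the relationship,
# so the "evaluate" strategy can resolve it against Child objects
session.query(Child).filter(Child._id_parent == p.id).delete("evaluate")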





For example:

from sqlalchemy import Column, Integer, Text, ForeignKey, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker, relationship

Base = declarative_base()

class Parent(Base):
__tablename__ = "parent"
id = Column(Integer, primary_key=True)

class Child(Base):
__tablename__ = "child"
_id_parent = Column("id_parent", Integer, ForeignKey(Parent.id), 
primary_key=True)

name = Column(Text, primary_key=True)
parent = relationship(Parent)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
session = Session()
session.bind.echo = True

# Make a parent
p = Parent(id=1)
session.add(p)
session.commit()

# Add a child
c = Child(name="foo", parent=p)
session.add(c)
session.commit()
# c is still in the session

session.query(Child).filter(Child.parent == p).delete("evaluate")

...give the following traceback:

File user!winfis!test_bed.py, line 34, in <module> : 
session.query(Child).filter(Child.parent == p).delete("evaluate") 

File 
R:\sw\external\20150407-0\python27\lib\site-packages\sqlalchemy-0.9.7-py2.7-win32.egg\sqlalchemy\orm\query.py, 
line 2670, in delete : delete_op.exec_() 

File 
R:\sw\external\20150407-0\python27\lib\site-packages\sqlalchemy-0.9.7-py2.7-win32.egg\sqlalchemy\orm\persistence.py, 
line 896, in exec_ : self._do_pre_synchronize() 

File 
R:\sw\external\20150407-0\python27\lib\site-packages\sqlalchemy-0.9.7-py2.7-win32.egg\sqlalchemy\orm\persistence.py, 
line 958, in _do_pre_synchronize : eval_condition(obj)] 

File 
R:\sw\external\20150407-0\python27\lib\site-packages\sqlalchemy-0.9.7-py2.7-win32.egg\sqlalchemy\orm\evaluator.py, 
line 116, in evaluate : right_val = eval_right(obj) 

File 
R:\sw\external\20150407-0\python27\lib\site-packages\sqlalchemy-0.9.7-py2.7-win32.egg\sqlalchemy\orm\evaluator.py, 
line 72, in <lambda> : return lambda obj: get_corresponding_attr(obj) 


AttributeError: 'Child' object has no attribute 'id_parent'


...because the attribute lookup on Child is attempted using the column 
name, rather than the attribute name. The actual delete action works 
fine when I switch the strategy to False or "fetch", or if there are 
no Child objects in the session (so no evaluation is invoked).



As you can see, this is v0.9.7, but I've not seen anything relevant in 
the changelog since then.


Cheers,
Steve.


Re: [sqlalchemy] SQLAlchemy's merge doesn't fill foreign keys with ids of new objects in relationship

2015-04-14 Thread Mike Bayer



On 4/14/15 3:55 AM, Юрий Пайков wrote:


My question is when I have in a session a newly created object(doesn't 
have primary key yet, but will obtain it upon flush) and I merge to 
that session another object referring to the first one by relationship 
(/b/ in the example) SQLAlchemy doesn't populate latter object with 
the primary key from the former. Instead it just generate next value 
from the sequence. Why is that the case?




that's not what I see happening here.   I see very simply that the B.id_ 
column is a SERIAL so is linked to a sequence, however you are inserting 
a row with a hardcoded "1" for a primary key; so the second B object, 
which relies on the sequence, fails due to an identity conflict.


So let's repair the test case first, and that first B.id we'll set to 
"10" so that it doesn't conflict.


Now we get the error you probably intended to send:

SELECT "Rel".id_a AS "Rel_id_a", "Rel".id_b AS "Rel_id_b", 
"Rel".rel_data AS "Rel_rel_data"

FROM "Rel"
WHERE "Rel".id_a = %(param_1)s AND "Rel".id_b = %(param_2)s
2015-04-14 11:05:11,850 INFO sqlalchemy.engine.base.Engine {'param_1': 
800, 'param_2': symbol('NEVER_SET')}


where this is, the merge() is proceeding to attempt to locate the object 
by primary key but the PK is not filled in.  This is the expected 
behavior.   The primary key of an object is never auto-populated until 
it is flushed.   So here, if you are passing in a transient object, you 
need to set the PK yourself:


second_b = B(data='f')
session.add(second_b)
session.flush()
x = session.merge(Rel(id_a=800, rel_data="second", id_b=second_b.id_))





from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine("postgresql+psycopg2://psql_admin:psql_admin@localhost/fm")

from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()

from sqlalchemy import Integer, ForeignKey, VARCHAR, TEXT, Boolean, DateTime
from sqlalchemy.orm import relationship
from sqlalchemy.sql.schema import Column


class B(Base):
    __tablename__ = 'B'
    id_ = Column(Integer, primary_key=True)
    data = Column(VARCHAR(30))


class Rel(Base):
    __tablename__ = 'Rel'
    id_a = Column(Integer, primary_key=True)
    id_b = Column(Integer, ForeignKey('B.id_'), primary_key=True)
    b = relationship(B)
    rel_data = Column(VARCHAR(30))


Session = sessionmaker(bind=engine)
session = Session()
Base.metadata.create_all(engine, checkfirst=True)

first_b = B(id_=1, data='ololo')
session.add(first_b)
session.commit()

session.add(Rel(id_a=800, id_b=1, rel_data='first relation data'))
second_b = B(data='f')
session.add(second_b)
x = session.merge(Rel(id_a=800, rel_data="second", b=second_b))
session.commit()


Here I have an error

IntegrityError: (raised as a result of Query-invoked autoflush;
consider using a session.no_autoflush block if this flush is
occuring prematurely) (IntegrityError) duplicate key value
violates unique constraint "B_pkey" DETAIL: Key (id_)=(1) already
exists. 'INSERT INTO "B" (data) VALUES (%(data)s) RETURNING
"B".id_' {'data': 'f'}



Re: [sqlalchemy] Not able to filter column by datetime

2015-04-15 Thread Mike Bayer



On 4/15/15 6:52 AM, Juan Antonio Ibáñez wrote:

Hello,

   I have dozens of queries filtering DateTime columns, but I have got 
one that doesn't work and I don't know why. I have:


---
q = DBSession.query(func.sum(RecargaCredito.importe), Local.nombre)\
.join((Local, RecargaCredito.locales_id == Local.id))

fl_desde = (datetime.now() - timedelta(days=7)).replace(hour=0, 
minute=0, second=0)

fl_hasta = datetime.now().replace(hour=23, minute=59, second=59)

q = q.filter(RecargaCredito.fechayhora >= fl_desde).\
filter(RecargaCredito.fechayhora <= fl_hasta)

q = q.group_by(RecargaCredito.locales_id)

f = q.all()
---


RecargaCredito looks:


---
class RecargaCredito(DeclarativeBase):
    __tablename__ = 'recargas_credito'
    __table_args__ = {'mysql_engine':'InnoDB'}

    id = Column(Integer, primary_key=True)
    fechayhora = DateTime
    importe = Column(DECIMAL(9,2))
    locales_id = Column(Integer, ForeignKey('locales.id'))
    local = relation('Local', remote_side='Local.id')
---

and I get error 'TypeError: can't compare datetime.datetime to 
VisitableType'


you seem to be missing "Column()" for fechayhora:

fechayhora  = Column(DateTime)






I've checked fl_desde and fl_hasta are python datetime objects

Do you know what that error says?

Regards




Re: [sqlalchemy] polymorphic inheritance and unique constraints

2015-04-15 Thread Mike Bayer



On 4/15/15 1:07 PM, Richard Gerd Kuesters wrote:


ok, now i have an issue. i don't know why, but sqlalchemy seems to 
issue the create table command of inherited postgresql tables before 
the base one in "metadata.create_all()". commenting the inherited 
table, issuing create all, 
what do your table defs look like?   The tables are created in order of 
FK dependency, and up until 1.0 there was no ordering beyond that.  in 
1.0 they will be by table name if there are no FK dependencies.


if you're using this with INHERITS types of setups then you should 
establish which table is dependent on which using add_is_dependent_on():


http://docs.sqlalchemy.org/en/rel_0_9/core/metadata.html?highlight=add_is_dependent_on#sqlalchemy.schema.Table.add_is_dependent_on
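
As a minimal sketch using the class names from this thread (assuming 
declarative models, so the Table objects hang off __table__):

# declare the ordering explicitly so create_all() emits the
# inherited-from table first
ContainerAggregation.__table__.add_is_dependent_on(
    ContainerInstance.__table__)
Base.metadata.create_all(engine)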





then uncomment the table and issuing create all again seems to work, 
but ... it's a heck of a workaround (if i think in terms of code).


i even tried to use serializable isolation level, but no result. 
importing models in the desired order doesn't affect the behavior 
either. well, i'm out of options :)


a little help?


best regards,
richard.

On 04/15/2015 11:48 AM, Richard Gerd Kuesters wrote:


nevermind. i'm again victim of rtfm :)

http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html#postgresql-table-options

great work on this, btw. it'll simplify my life *A LOT* :)


best regards,
richard.

On 04/15/2015 10:10 AM, Richard Gerd Kuesters wrote:

hello Mike!

so ... ok, based on this link 
<http://docs.sqlalchemy.org/en/latest/faq/ormconfiguration.html#i-m-getting-a-warning-or-error-about-implicitly-combining-column-x-under-attribute-y> 
(yeah yeah, well, rtfm for me), I was able to make it work like this:




class ContainerInstance(CoreMixin, TimestampMixin):

    container_instance_id = CoreMixin.column_id()
    parent_id = CoreMixin.column_fk(container_instance_id, nullable=False)
    batch_id = CoreMixin.column_fk(Batch.id_, nullable=False)
    container_instance_type = Column(
        EnumDictForInt(ContainerInstanceEnum), nullable=False, index=True)

    __mapper_args__ = {
        "polymorphic_on": container_instance_type,
        "polymorphic_identity": ContainerInstanceEnum.NONE
    }


class ContainerAggregation(ContainerInstance):

    container_instance_id = CoreMixin.column_fk(
        ContainerInstance.id_, primary_key=True)
    container_descriptor_id = CoreMixin.column_fk(
        ContainerDescriptor.id_, nullable=False)
    # "tada!"
    batch_id = column_property(
        Column(BigInteger), ContainerInstance.batch_id)

    __mapper_args__ = {
        "polymorphic_identity": ContainerInstanceEnum.AGGREGATION
    }


UniqueConstraint(ContainerAggregation.container_descriptor_id,
                 ContainerAggregation.batch_id)




which brings me the question: I'm targeting *only* postgresql, so I 
have no need to pursue an agnostic approach in terms of inheritance. 
i do know that postgres inheritance system was discussed a lot in 
here, but, in my case - where i do want to have a constraint between 
parent and children - isn't it better to use postgres inheritance 
instead of duplicating the value to another table?


well, i did notice the sqlalchemy example of postgres inheritance 
<https://bitbucket.org/zzzeek/sqlalchemy/wiki/UsageRecipes/PostgreSQLInheritance>, 
which uses written ddl and triggers. the problem is that i have 
extra fields in the inheritance table, which I think is not a 
very good approach to postgres inheritance, but, either way, from 
your experience, what would be your tip?


ps: i found this link 
<http://postgresql.nabble.com/Constraint-to-ensure-value-does-NOT-exist-in-another-table-td4493651.html> 
interesting and may be another solution to this, since i already 
have a table descriptor (the polymorphic_on expression). of course, 
the approach does require an extra table, but with events I can 
easily make it work in sqlalchemy.



cheers,
richard.


On 04/14/2015 08:40 AM, Richard Gerd Kuesters | Pollux Automation wrote:

here, a better illustration with my actual code:

http://pastebin.com/RxS8Lzft


best regards,
richard.

On 04/13/2015 06:30 PM, Mike Bayer wrote:



On 4/13/15 4:59 PM, Richard Gerd Kuesters | Pollux Automation wrote:
well, this didn't work with upstream 1.0 - sorry, I was in 
another project and couldn't test it myself.



you're not doing the same thing this user was doing in any case...



Traceback (most recent call last):
  File "database_test.py", line 46, in 
from plx.db.core import *
  File "../src/plx/db/core.py", line 901, in 
UniqueConstraint(ContainerInstance.batch_id, 
ContainerAggregation.container_descriptor_id,)
  File 
"/home/richard/.pyenv/versions/vpak-pollux-2.7.9/lib/python2.7/site-packages/sqlalchemy/sql/schema.py", 
li

Re: [sqlalchemy] polymorphic inheritance and unique constraints

2015-04-15 Thread Mike Bayer



On 4/15/15 1:59 PM, Richard Gerd Kuesters wrote:
oops, i forgot to comment out the fk to the parent table and now it 
doesn't work: "sqlalchemy.exc.NoForeignKeysError: Can't find any 
foreign key relationships between 'container_instance' and 
'container_aggregation'."


well, it doesn't need it if it's inherited (both db and software 
level), right?

correct, you'd use a "concrete" setup here from a SQLA perspective.
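
A rough sketch of what a declarative "concrete" setup looks like, with 
the columns pared way down from the real models in this thread; 
ConcreteBase generates the polymorphic "pjoin" UNION automatically:

from sqlalchemy import Column, Integer
from sqlalchemy.ext.declarative import ConcreteBase, declarative_base

Base = declarative_base()


class ContainerInstance(ConcreteBase, Base):
    __tablename__ = 'container_instance'
    id = Column(Integer, primary_key=True)

    __mapper_args__ = {
        'polymorphic_identity': 'instance',
        'concrete': True,
    }


class ContainerAggregation(ContainerInstance):
    __tablename__ = 'container_aggregation'
    id = Column(Integer, primary_key=True)

    __mapper_args__ = {
        'polymorphic_identity': 'aggregation',
        'concrete': True,
    }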







On 04/15/2015 02:55 PM, Richard Gerd Kuesters wrote:

the table definitions are listed here: http://pastebin.com/RxS8Lzft

i'm using polymorphic associations, but with inheritance (INHERITS) 
there's no need to do it (imho), so the fk column to the parent table 
(which is also the pk) can be overriden.


using "add_is_dependent_on" did the trick. i didn't know of such a 
feature ... thanks for bring it on :) although, is there a way to use 
it in declarative, intead of: 
MyModel.__table__.add_is_dependent_on(MyParentModel.__table__) ?



cheers,
richard.

On 04/15/2015 02:44 PM, Mike Bayer wrote:



On 4/15/15 1:07 PM, Richard Gerd Kuesters wrote:


ok, now i have an issue. i don't know why, but sqlalchemy seems to 
issue the create table command of inherited postgresql tables 
before the base one in "metadata.create_all()". commenting the 
inherited table, issuing create all, 
what do your table defs look like?   The tables are created in order 
of FK dependency, and up until 1.0 there was no ordering beyond 
that.  in 1.0 they will be by table name if there are no FK 
dependencies.


if you're using this with INHERITS types of setups then you should 
establish which table is dependent on which using add_is_dependent_on():


http://docs.sqlalchemy.org/en/rel_0_9/core/metadata.html?highlight=add_is_dependent_on#sqlalchemy.schema.Table.add_is_dependent_on





then uncomment the table and issuing create all again seems to 
work, but ... it's a heck of a workaround (if i think in terms of 
code).


i even tried to use serializable isolation level, but no result. 
importing models in the desired order doesn't affect the behavior 
either. well, i'm out of options :)


a little help?


best regards,
richard.

On 04/15/2015 11:48 AM, Richard Gerd Kuesters wrote:


nevermind. i'm again victim of rtfm :)

http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html#postgresql-table-options

great work on this, btw. it'll simplify my life *A LOT* :)


best regards,
richard.

On 04/15/2015 10:10 AM, Richard Gerd Kuesters wrote:

hello Mike!

so ... ok, based on this link 
<http://docs.sqlalchemy.org/en/latest/faq/ormconfiguration.html#i-m-getting-a-warning-or-error-about-implicitly-combining-column-x-under-attribute-y> 
(yeah yeah, well, rtfm for me), I was able to make it work like this:




class ContainerInstance(CoreMixin, TimestampMixin):

    container_instance_id = CoreMixin.column_id()
    parent_id = CoreMixin.column_fk(container_instance_id, nullable=False)
    batch_id = CoreMixin.column_fk(Batch.id_, nullable=False)
    container_instance_type = Column(
        EnumDictForInt(ContainerInstanceEnum), nullable=False, index=True)

    __mapper_args__ = {
        "polymorphic_on": container_instance_type,
        "polymorphic_identity": ContainerInstanceEnum.NONE
    }


class ContainerAggregation(ContainerInstance):

    container_instance_id = CoreMixin.column_fk(
        ContainerInstance.id_, primary_key=True)
    container_descriptor_id = CoreMixin.column_fk(
        ContainerDescriptor.id_, nullable=False)
    # "tada!"
    batch_id = column_property(
        Column(BigInteger), ContainerInstance.batch_id)

    __mapper_args__ = {
        "polymorphic_identity": ContainerInstanceEnum.AGGREGATION
    }


UniqueConstraint(ContainerAggregation.container_descriptor_id, 
                 ContainerAggregation.batch_id)




which brings me the question: I'm targeting *only* postgresql, so 
I have no need to pursue an agnostic approach in terms of 
inheritance. i do know that postgres inheritance system was 
discussed a lot in here, but, in my case - where i do want to 
have a constraint between parent and children - isn't it better 
to use postgres inheritance instead of duplicating the value to 
another table?


well, i did notice the sqlalchemy example of postgres inheritance 
<https://bitbucket.org/zzzeek/sqlalchemy/wiki/UsageRecipes/PostgreSQLInheritance>, 
which uses written ddl and triggers. the problem is that i have 
extra fields in the inheritance table, which I think is not a 
very good approach to postgres inheritance, but, either way, from 
your experience, what would be your tip?


ps: i found this link 
<http://postgresql.nabble.com/Constraint-to-ensure-value-does-NOT-exist-in-another-table-td4493651.html> 
interest

Re: [sqlalchemy] polymorphic inheritance and unique constraints

2015-04-15 Thread Mike Bayer



On 4/15/15 2:44 PM, Richard Gerd Kuesters wrote:
well, i've almost given up ... i'm using concrete now, but it seems 
that something isn't right.


the error:

sqlalchemy.exc.ArgumentError: When configuring property
'updated_by' on Mapper|ContainerInstance|pjoin, column
'container_instance.fk_updated_by' is not represented in the
mapper's table. Use the `column_property()` function to force this
column to be mapped as a read-only attribute.

now, what makes me a little hopeless:

1. i have a base object (a simple object) that has some attributes 
that i want in ALL of my tables (created_at, updated_at, created_by, 
updated_by), all of which are @declared_attr;
2. my base object is a declarative_base which uses the object above 
described as the "cls" parameter;
3. then, i inherit AbstractConcreteBase and my declarative object to 
the parent class, having all FKs in it as @declared_attr too;

4. from bla import *, exception.

ps: using ConcreteBase, the error is: "AttributeError: type object 
'ContainerInstance' has no attribute '__mapper__'"


The pattern you're doing is not what Postgresql INHERITS is really 
intended for.   PG's feature is intended for transparent sharding of 
data to different tablespaces, not to simulate OR-mapped class 
hierarchies.  The keyword is mis-named in this regard.  Concrete 
inheritance is in all cases a tough road to travel because it's difficult 
to relate things to a whole set of tables which each act as "the table" 
for a class.









On 04/15/2015 03:13 PM, Richard Gerd Kuesters wrote:

oh, right, concrete! abstract concrete can also do the trick?


On 04/15/2015 03:10 PM, Mike Bayer wrote:



On 4/15/15 1:59 PM, Richard Gerd Kuesters wrote:
oops, i forgot to comment out the fk to the parent table and now it 
doesn't work: "sqlalchemy.exc.NoForeignKeysError: Can't find any 
foreign key relationships between 'container_instance' and 
'container_aggregation'."


well, it doesn't need it if it's inherited (both db and software 
level), right?

correct, you'd use a "concrete" setup here from a SQLA perspective.




Re: [sqlalchemy] [SQLAlchemy 0.8.2 - NULL value support for version_id_col attribute in mapper()]

2015-04-15 Thread Mike Bayer



On 4/15/15 10:38 PM, Khoa Nguyen Minh wrote:

Hi everyone,

Could you please tell me whether SQLAlchemy 0.8.2 supports NULL values 
when using version_id_col? Currently, the corresponding SQL emitted when 
updating/deleting is:


*where [version_column] = NULL*
That always raises a StaleDataError exception. Is there a way we can 
emit custom SQL so that when the original value is NULL then the 
emitted SQL will be:


*where [version_column] is NULL*


NULL can't be supported because the value used for version_id is 
embedded into a fixed UPDATE statement using a bound parameter.  There 
is no opportunity to produce an alternate UPDATE statement that doesn't 
use the parameter and encodes "IS NULL". The same statement can be 
passed to the DBAPI with a list of parameter sets.





and in normal case, the emitted SQL will be:

*where [version_column] = [value]*

Also, I think that if SQLAlchemy provides a custom 
version_id_generator, then why doesn't it support a *custom 
version_id_col WHERE clause when updating/editing*?
versioning schemes typically only vary in the kinds of values they 
persist, e.g. integer counters, timestamps, GUIDs, etc.   It's not 
really necessary for a versioning scheme to support NULL; a NULL version 
doesn't make any sense because it means that a record has an unknown 
version.
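
For reference, a minimal sketch of the conventional integer-counter 
scheme, which never produces a NULL version (the model here is 
hypothetical):

from sqlalchemy import Column, Integer
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Widget(Base):
    __tablename__ = 'widget'
    id = Column(Integer, primary_key=True)
    version_id = Column(Integer, nullable=False)

    # the ORM increments version_id on each UPDATE and includes
    # it in the WHERE clause to detect stale rows
    __mapper_args__ = {"version_id_col": version_id}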





Really appreciate your helps,

Khoa Minh Nguyen


[sqlalchemy] SQLAlchemy 1.0.0 Released

2015-04-16 Thread Mike Bayer

Hello list !

It is my great pleasure to announce that SQLAlchemy release 1.0.0 is now 
available.


Release 1.0.0 marks the tenth major SQLAlchemy series and occurs within 
the tenth year of the SQLAlchemy project overall; development started in 
2005 and the release of 0.1.0 was in February of 2006.


Calls for SQLAlchemy to go "1.0" started early on, as early as version 
0.3 (!).   However, the magnitude of the task taken on by SQLAlchemy was 
much broader than that; the development philosophy taken by the project 
is one of slowly building out a wide-reaching base of functionality, 
integrating many rounds of refactoring and rethinks over a long period 
of time and building new paradigms and features on top only as the 
foundation matures enough to support them.


Users of 1.0 have the benefit of ten years of production deployments, 
total rethinks of core APIs in early versions, a vast number of API 
additions and refinements over later versions, at least a dozen major 
internal rearchitectures, and as always a relentless focus on improving 
performance.


The SQLAlchemy project could not be what it is today without the 
unbelievable support, input, and sheer love of the user community - from 
the vast amounts of knowledge and improvements gained from tens of 
thousands of mailing list messages, to the improvements hammered out 
through over three thousand bug reports, to the amazing developers all 
around the world who have presented talks and tutorials on SQLAlchemy 
and of course the audiences who continue to attend them, to the bloggers 
and book authors supporting our community, to the tweeters sending 
gratitude our way, to our many hundreds of contributors of patches and 
pull requests, as well as financial contributors who have consistently 
supported SQLAlchemy's hosting costs, as well as more than a few 
burritos ;).


In particular, SQLAlchemy's success was made possible by its original 
developer team, and I would like to express to all of them my very deep 
gratitude for their tremendous efforts towards contributing code and 
wisdom to the project, as well as support of my work from very early on:


* Jason Kirtland
* Gaëtan de Menten
* Diana Clarke
* Michael Trier
* Philip Jenvey
* Ants Aasma
* Paul Johnston
* Jonathan Ellis

I'd also like to thank Simon King and Jonathan Vanasco for their ongoing 
contributions towards the mailing list, Alex Grönholm, creator of the 
excellent sqlacodegen [1] project, for his energetic and ubiquitous 
support of thousands of IRC users, and Sanjiv Singh, early developer of 
GeoAlchemy [2] for the awesome set of drink coasters I use every day :).


Release 1.0.0 features an array of usability enhancements, new features, 
bug fixes, and considerable performance enhancements. After five short 
beta releases, it is anticipated that the impact of upgrading from 0.9 
or even 0.8 to 1.0.0 should be minimal; however in all cases, users are 
highly encouraged to carefully read through the behavioral enhancements 
and changes documented in the 1.0 migration notes, at "What's new in 
1.0?" at http://www.sqlalchemy.org/docs/10/changelog/migration_10.html.


Changelog for 1.0.0 is at: http://www.sqlalchemy.org/changelog/CHANGES_1_0_0

SQLAlchemy 1.0.0 is available on the download page at 
http://www.sqlalchemy.org/download.html



[1] https://pypi.python.org/pypi/sqlacodegen
[2] http://geoalchemy.org/




Re: [sqlalchemy] The use of SQLAlchemy for a long term project

2015-04-18 Thread Mike Bayer



On 4/17/15 6:58 PM, Van Klaveren, Brian N. wrote:

Hi,

I'm investigating the use and dependency on SQLAlchemy for a long-term 
astronomy project. Given Version 1.0 just came out, I've got a few questions 
about it.

1. It seems SQLAlchemy generally EOLs versions after about two releases/years. 
Is this an official policy? Is this to continue with version 1.0 as well? Or is 
it possible 1.0 might be a something of a long-term release.
2. While well documented and typically minimal, SQLAlchemy does have occasional 
API and behavioral changes to be aware of between versions. Is the 1.0 API more 
likely to be stable on the time frame of ~4 years?

Put another way, would you expect that it should be easier to migrate from 
version 1.0 to the 1.4 (or whatever the current version is) of SQLAlchemy in 
five years than it would be to migrate from 0.6 to 1.0 today.

I know these questions are often hard to answer with any certainty, but these 
sorts of projects typically outlive the software they are built on and are 
often underfunded as far as software maintenance goes, so we try to plan 
accordingly.

(Of course, some people just give up and throw everything in VMs behind 
firewalls)
Well the vast majority of bugs that are fixed, like 99% of them, impact 
only new development; that is, they only have a positive impact on someone 
who is writing new code, using new features of their database backend, 
or otherwise attempting to do something new; they typically only serve 
to raise risk and decrease stability of code that is not under active 
development and is stabilized on older versions of software.


These kinds of issues mean that some way of structuring tables, mapped 
classes, core SQL or DDL objects, ORM queries, or calls to a Session 
produce some unexpected result, but virtually always, this unexpected 
result is consistent and predictable.   An application that is sitting 
on 0.5 or 0.6 and is running perfectly fine, because it hasn't hit any 
of these issues, or quite often because it has and is working around 
them (or even relying upon their behavior) would not benefit at all from 
these kinds of fixes being backported, but would instead have a greater 
chance of hitting a regression or a change in assumptions if lots of 
bugfixes were being backported from two or three major versions forward.


So it's not like we don't backport issues three or four years back 
because it's too much trouble, it's because these backports wouldn't 
benefit anyone and they would only serve to wreak havoc with old and 
less maintained applications when some small new feature or improvement 
in behavioral consistency breaks some assumption made by that application.


As far as issues that are more appropriate for backporting, which would 
be security fixes and stability enhancements, we almost never have 
issues like that; the issues we have regarding stability, like memory 
leaks and race conditions, again typically occur in conjunction with a 
user application doing something strange and unexpected (e.g. new 
development), and as far as security issues the only issue we ever had 
like that even resembled a security issue was issue 2116 involving 
limit/offset integers not being escaped, which was backported from 0.7 
to 0.6.  Users who actually needed enterprise-level longevity who 
happened to be using for example the Red Hat package could see the 
backport for this issue backported all the way to their 0.5 and 0.3 
packages.  But presence of security/memory leak/stability issues in 
modern versions is extremely rare, and we generally only see new issues 
involving memory or stability as a result of new features (e.g. 
regressions).


There's also the class of issues that involve performance 
enhancements.   Some of these features would arguably be appropriate to 
backport more than several major versions, but again they are often the 
result of significant internal refactorings and definitely would raise 
risk for an older application not undergoing active development.   An 
older application that wants to take advantage of newer performance 
features would be better off going through the upgrade process than 
risking running on top of a library that is a hybrid of very old code 
and backported newer approaches, which will see a lot less real-world 
testing.


So short answer, the EOL you see of those old versions is generally a 
good thing as those old versions are running in old applications that 
aren't seeing lots of new development and would see a mostly negative 
effect and little to no benefit from the code continuing to change.   
SQLAlchemy is a development library so generally an application that's 
been put into production against a certain version has been well tested 
and tuned against the behaviors of that specific version.


As far as API and behavioral changes, as far as API we are really 
conservative about actually changing APIs such that an older approach 
won't work anymore.   That happened a lo

Re: [sqlalchemy] Generating Correlated Subqueries

2015-04-18 Thread Mike Bayer



On 4/18/15 7:13 PM, Michael Wilson wrote:

I have the following tables:

things_table = Table('thing', self.metadata,
Column('id', Integer, primary_key=True),
…
)

comments_table = Table('comments', self.metadata,
Column('id', Integer, primary_key=True),  # Unique id for this comment
Column('type', Integer),  # Type of comment 
(feedback, etc)

…
)

(And the corresponding mapping).

I’m trying to construct a query like this:

clauseList  = []
clauseList.append(Look.creation >= start_date_rounded)
clauseList.append(Look.creation <= end_date)
clauseList.append(Look.like_count > 0)

clauseList.append(Comment.creation >= start_date_rounded)
clauseList.append(Comment.creation <= end_date)
clauseList.append(Comment.type == CommentTypeLike)
clauseList.append(Comment.target_id == Look.id)
condition = and_(*clauseList)

looks = session.query(Look, Comment,
func.count(Comment.type)).\
group_by(Look.id).\
order_by(func.count(Comment.type).desc()).\
filter(condition).\
offset(0).\
limit(count).\
all()

This fails with:

FROM comments, things WHERE comments.target_id = things.id AND 
comments.type = :type_1' returned no FROM clauses due to 
auto-correlation; specify correlate() to control correlation 
manually.

The "comments_table" and "things_table" declarations aren't visible to 
the function generating the query, but even if I make them visible and 
specify:

correlate(things, comments).\

it still fails.
How can I make this work?


by "work" we'd need to know what SQL you are going for.   The 
query(Look, Comment, func.count(Comment.type)) seems very odd because if 
you are using aggregates in your query, SQL dictates (unless you're 
using MySQL's cheater mode) that all the other columns that aren't 
aggregates need to be in the GROUP BY.   Also I don't see any subqueries 
here so nothing that would refer to correlation or produce that message, 
don't see what CommentTypeLike is, etc.






Re: [sqlalchemy] Generating Correlated Subqueries

2015-04-19 Thread Mike Bayer


On 4/19/15 9:36 AM, ThereMichael wrote:


Sorry, sometimes you get so deep into something you forget
everyone else isn't familiar with the problem.


As an example, here's what I'm looking for:

select things.id, count(comments.type) from things things, comments 
comments
where things.creation >= "2015-04-19" and things.creation < 
"2015-04-26" and comments.target_id = things.id

and comments.type = 5
and comments.creation >= "2015-04-19" and comments.creation < "2015-04-26"
group by things.id
order by count(comments.type) desc;


OK here is a demonstration of that SQL.  I had to include mappings to 
match, so figure out what's different in your code vs. this example to 
see where it might be going wrong.


from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base
import datetime

Base = declarative_base()


class Thing(Base):
    __tablename__ = 'things'
    id = Column(Integer, primary_key=True)
    creation = Column(DateTime)


class Comment(Base):
    __tablename__ = 'comments'
    id = Column(Integer, primary_key=True)
    type = Column(String)
    target_id = Column(ForeignKey('things.id'))
    creation = Column(DateTime)


s = Session()
q = s.query(Thing.id, func.count(Comment.type)).\
    filter(Thing.creation >= datetime.date(2015, 4, 19)).\
    filter(Thing.creation < datetime.date(2015, 4, 26)).\
    filter(Comment.target_id == Thing.id).\
    filter(Comment.creation >= datetime.date(2015, 4, 19)).\
    filter(Comment.creation < datetime.date(2015, 4, 26)).\
    group_by(Thing.id).\
    order_by(func.count(Comment.type).desc())

print q

e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)
q.with_session(Session(e)).all()








This gives the "desired" result of:

+--+--+
| id   | count(comments.type) |
+--+--+
| 2181 |   30 |
| 2182 |   28 |
| 2183 |   26 |
| 2184 |   24 |
| 2185 |   22 |
| 2186 |   20 |
| 2187 |   18 |
| 2188 |   16 |
| 2189 |   14 |
| 2190 |   12 |
| 2191 |   10 |
| 2192 |8 |
| 2193 |6 |
| 2194 |4 |
| 2195 |2 |
+--+--+
15 rows in set (0.00 sec)




Re: [sqlalchemy] Generating Correlated Subqueries

2015-04-19 Thread Mike Bayer



On 4/19/15 9:56 AM, Mike Bayer wrote:


On 4/19/15 9:36 AM, ThereMichael wrote:


Sorry, sometimes you get so deep into something you forget
everyone else isn't familiar with the problem.


As an example, here's what I'm looking for:

select things.id, count(comments.type) from things things, comments 
comments
where things.creation >= "2015-04-19" and things.creation < 
"2015-04-26" and comments.target_id = things.id

and comments.type = 5
and comments.creation >= "2015-04-19" and comments.creation < 
"2015-04-26"

group by things.id
order by count(comments.type) desc;



s = Session()
q = s.query(Thing.id, func.count(Comment.type)).\
    filter(Thing.creation >= datetime.date(2015, 4, 19)).\
    filter(Thing.creation < datetime.date(2015, 4, 26)).\
    filter(Comment.target_id == Thing.id).\
    filter(Comment.creation >= datetime.date(2015, 4, 19)).\
    filter(Comment.creation < datetime.date(2015, 4, 26)).\
    group_by(Thing.id).\
    order_by(func.count(Comment.type).desc())

one more filter for the Comment.type:

q = s.query(Thing.id, func.count(Comment.type)).\
    filter(Thing.creation >= datetime.date(2015, 4, 19)).\
    filter(Thing.creation < datetime.date(2015, 4, 26)).\
    filter(Comment.target_id == Thing.id).\
    filter(Comment.creation >= datetime.date(2015, 4, 19)).\
    filter(Comment.creation < datetime.date(2015, 4, 26)).\
    filter(Comment.type == 5).\
    group_by(Thing.id).\
    order_by(func.count(Comment.type).desc())





Re: [sqlalchemy] Recovering from StaleDataErrors

2015-04-19 Thread Mike Bayer



On 4/19/15 5:58 PM, George Reilly wrote:

The aggregated centralized log looks like

first interleaved request, 254c1046-0da701ca, at 19:09:03, which 
switches moods from `['Grumpy', 'Tired', 'Disgruntled']` to `['Elated']`
Apr 15 19:09:03 [3843:MainThread] INFO  [254c1046-0da701ca] 
REST request: PUT http:///avatar/ received...


Second interleaved request, 711bc322-df5afba6, at 19:09:04, also 
with moods=`['Elated']`
Apr 15 19:09:04 [4572:MainThread] INFO  [711bc322-df5afba6] 
REST request: PUT http:///avatar/ received...


Second request fails
Apr 15 19:09:05 [4572:MainThread] WARNI [711bc322-df5afba6] 
REST request: PUT http:///avatar/ failed: 400 Bad 
Request (GENERAL): 'DELETE statement on table 'avatar_moods' expected 
to delete 3 row(s); Only 0 were matched.' (DELETE statement on table 
'avatar_moods' expected to delete 3 row(s); Only 0 were matched.)


First request succeeds
Apr 15 19:09:05 [3843:MainThread] INFO  [254c1046-0da701ca] 
REST response: [1570ms] PUT http:///avatar/ 
responded: 200 OK


you saw what's called a phantom read:

http://en.wikipedia.org/wiki/Isolation_%28database_systems%29#Phantom_reads

Transaction #2 in PostgreSQL now had visibility to those updated rows 
in the DB, so it returned "0 rows deleted" to SQLAlchemy. This 
confounded SA's expectations and it blew up. I think.

sure, the ORM tries where possible to catch scenarios where what's in 
the database does not match the mutations being sent in. Because of 
course, to continue with incorrect assumptions instead of stopping 
immediately is a good way to really whack things up.





Finally, my question: What, if anything, could we have done to 
mitigate this on the server?
Well, while it is not usually used, serializable isolation will solve 
the problem of phantom reads. You will lose some concurrency, because 
serializable still has to lock things. If you wanted to go this route, 
you could have just the methods in question use this isolation level 
individually, so that they will not have this activity against each 
other, without the rest of the application methods being impacted. 
The Core has a lot of isolation-level directives, but recently I also 
added the ability for a Session to set an isolation level per 
individual transaction; see 
http://docs.sqlalchemy.org/en/rel_1_0/orm/session_transaction.html#setting-isolation-for-individual-transactions 
for that.
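
A rough sketch of that per-transaction approach (the engine URL and 
the surrounding setup here are placeholders, not from your 
application):

from sqlalchemy import create_engine
from sqlalchemy.orm import Session

engine = create_engine("postgresql://scott:tiger@localhost/test")  # placeholder
session = Session(bind=engine)

# must be called up front, before any queries in this transaction
session.connection(
    execution_options={"isolation_level": "SERIALIZABLE"})

# ... read and rewrite the avatar_moods rows here ...
session.commit()  # later transactions revert to the engine-wide default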


Another way to go is to have a method wrapped in a "retry" decorator of 
some kind, which anticipates known error conditions such as a potential 
race condition, and just retries the entire method all over again, 
starting from re-loading the current data and trying the operation again.
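
A rough sketch of such a decorator; the name, the retry policy, and 
the assumption that the decorated function takes the session as its 
first argument are all invented for illustration:

import functools
import time

from sqlalchemy.orm.exc import StaleDataError

def retry_on_stale(retries=3, delay=0.1):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(session, *args, **kwargs):
            for attempt in range(retries):
                try:
                    result = fn(session, *args, **kwargs)
                    session.commit()
                    return result
                except StaleDataError:
                    # throw away the failed state; the next pass
                    # re-loads current rows and tries again
                    session.rollback()
                    if attempt == retries - 1:
                        raise
                    time.sleep(delay)
        return wrapper
    return decorator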


A more specialized form of "retry" is to keep the main transaction going 
and use a savepoint within the critical section, where you can again 
retry the operation.  Savepoints are available in the ORM via 
begin_nested(): 
http://docs.sqlalchemy.org/en/rel_1_0/orm/session_transaction.html#using-savepoint
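
A sketch of the savepoint version, assuming a hypothetical 
update_avatar_moods() function as the critical section:

from sqlalchemy.orm.exc import StaleDataError

for attempt in range(3):
    session.begin_nested()  # emits SAVEPOINT
    try:
        update_avatar_moods(session, avatar, moods)  # hypothetical
        session.commit()    # releases just the savepoint
        break
    except StaleDataError:
        # rolls back to the savepoint; the outer transaction survives
        session.rollback()
        if attempt == 2:
            raise
session.commit()  # commits the outer transaction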


Yet another way is to emit a SELECT for these rows using FOR UPDATE, 
which will load the rows and lock them at the same time. You'd need to 
emit a query that works directly against the avatar_moods table, so 
that these rows are selected and locked against writes by other 
transactions. The various FOR UPDATE options are available at the ORM 
level using the with_for_update() method: 
http://docs.sqlalchemy.org/en/rel_1_0/orm/query.html#sqlalchemy.orm.query.Query.with_for_update. 
PG supports a few options here; you can see an overview at 
http://www.postgresql.org/docs/9.0/static/sql-select.html#SQL-FOR-UPDATE-SHARE. 
SQLAlchemy supports FOR UPDATE and FOR SHARE, with or without the 
NOWAIT and OF options of Postgresql, here.
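
A sketch, assuming a hypothetical AvatarMood class mapped to the 
avatar_moods table:

moods = session.query(AvatarMood).\
    filter(AvatarMood.avatar_id == avatar.id).\
    with_for_update(nowait=True).\
    all()
# these rows are now locked until this transaction ends; a concurrent
# transaction trying to UPDATE / DELETE / lock them fails immediately
# (due to NOWAIT) instead of producing a phantom read later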






Looking at dependency.py for 0.9.9, it looks like SQLAlchemy considers 
this to be an unrecoverable error.

you'd probably find that Postgresql itself would forbid the 
transaction from continuing after this error in any case.





PS Congratulations on releasing SQLAlchemy 1.0! I've been a happy user 
since 2008.


that's great!



Re: [sqlalchemy] Generating Correlated Subqueries

2015-04-19 Thread Mike Bayer



On 4/19/15 10:16 AM, ThereMichael wrote:

Ok, that worked perfectly!

If I want /all/ of the "Thing" object, I change it to this:

q = s.query(Thing, func.count(Comment.type)).\
    filter(Thing.creation >= datetime.date(2015, 4, 19)).\
    filter(Thing.creation < datetime.date(2015, 4, 26)).\
    filter(Comment.target_id == Thing.id).\
    filter(Comment.creation >= datetime.date(2015, 4, 19)).\
    filter(Comment.creation < datetime.date(2015, 4, 26)).\
    filter(Comment.type == 5).\
    group_by(Thing.id).\
    order_by(func.count(Comment.type).desc())

I get the original problem. Is there a way to accomplish that without 
specifying the columns by hand?


That's a restriction of SQL (unless you are using MySQL with its 
legacy settings). The bad way is just to group_by(Thing), which will 
group by all of its columns; this is a poor performer and not 
considered very "correct" in SQL practice. The "better" way is to, as 
perhaps you were suggesting originally, use a subquery (though not a 
correlated one here); the form we'd be looking for is explained in 
terms of SQL in an old article I still like to link to: 
http://weblogs.sqlteam.com/jeffs/archive/2005/12/14/8546.aspx. I'm 
enough of a fan of this form that it is part of the ORM tutorial in 
this example: 
http://docs.sqlalchemy.org/en/rel_1_0/orm/tutorial.html#using-subqueries, 
so you'd be looking to emulate the form seen there.
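
Adapting that form to the Thing/Comment mappings from earlier in this 
thread might look like the following (date-range filters omitted for 
brevity):

subq = s.query(
        Comment.target_id,
        func.count(Comment.type).label("comment_count")).\
    filter(Comment.type == 5).\
    group_by(Comment.target_id).\
    subquery()

q = s.query(Thing, subq.c.comment_count).\
    join(subq, Thing.id == subq.c.target_id).\
    order_by(subq.c.comment_count.desc())

The aggregate is computed per target_id inside the subquery, so the 
outer query is free to select the full Thing entity without growing 
the GROUP BY.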





Re: [sqlalchemy] Non backwards-compatible changes in 1.0? Lots of suddenly failing tests here.

2015-04-20 Thread Mike Bayer



On 4/20/15 8:09 AM, Guido Winkelmann wrote:

Hi,

Have there been any non-backwards-compatible changes in SQLAlchemy 1.0 
compared to 0.9.9?
Most behavioral changes are listed out at 
http://docs.sqlalchemy.org/en/rel_1_0/changelog/migration_10.html; 
I've urged everyone to please read through this document. None of the 
behavioral changes are "backwards incompatible" at face value; 
however, the nature of SQLAlchemy is necessarily one where there are 
lots of behaviors that applications can find themselves relying upon, 
and when we improve those behaviors, applications which relied upon 
bugs, inconsistencies, or things that just happened to work a certain 
way can break when we make things more consistent or apply definitions 
to behaviors that were previously undefined.


There have also been five beta releases put out. In particular the 
NEVER_SET issue you are receiving is a known regression that is now 
fixed, but unfortunately not enough people were interested in trying out 
any of these five beta releases in order to find this fairly common 
condition, so it is only fixed for 1.0.1.



If you compare the builds on sqlite with those on MySQL/PostgreSQL, 
you will see there are two different, seemingly unrelated things going 
wrong:


On sqlite, drop_all() seems to fail to get the order of table drops 
right, and consequently runs into a referential integrity error.

If you can post a reproducible issue, that's what I can work with. 
There are changes to how tables are sorted in the absence of foreign 
key dependencies; where this ordering was previously undefined, it is 
now deterministic. See 
http://docs.sqlalchemy.org/en/latest/changelog/changelog_10.html#change-aab332eedafc8e090f42b89ac7a67e6c. 






On MySQL/PostgreSQL, this line fails:

Apparently, sqlalchemy will use "symbol('NEVER_SET')" where the id of 
the model used for filtering should be.


this is a known regression and is fixed in 1.0.1: 
http://docs.sqlalchemy.org/en/latest/changelog/changelog_10.html#change-1.0.1 



if you can confirm with current master that this is fixed I can release 
today or tomorrow as this particular regression is fairly severe.







Re: [sqlalchemy] Non backwards-compatible changes in 1.0? Lots of suddenly failing tests here.

2015-04-20 Thread Mike Bayer



On 4/20/15 8:09 AM, Guido Winkelmann wrote:

Hi,

Have there been any non-backwards-compatible changes in SQLAlchemy 1.0 
compared to 0.9.9?


We are seeing a lot of sudden breakage in our unit tests when 
switching to SQLAlchemy 1.0 from 0.9.9. Tests that worked fine before 
suddenly fail across the board.


Here's an example of a test build that suddenly failed on 1.0:

https://travis-ci.org/pyfarm/pyfarm-master/builds/58860924

If you compare the builds on sqlite with those MySQL/PostgreSQL, you 
will see there are two different, seemingly unrelated things going wrong:


On sqlite, drop_all() seems to fail to get the order of table drops 
right, and consequently runs into a referential integrity error.


On MySQL/PostgreSQL, this line fails:

association = TaskTaskLogAssociation.query.filter_by(task=task, 
log=task_log, attempt=attempt).first()


In this context, "log" is a relationship in the 
model TaskTaskLogAssociation to model TaskLog. "task_log" is an object 
of type TaskLog, but one that has never been written to the database 
and has no set id. That leads to this error message in the logs:


nose.proxy.ProgrammingError: (psycopg2.ProgrammingError) function 
symbol(unknown) does not exist

LINE 3: ...72015052936_task_log_associations.attempt = 1 AND symbol('NE...
   ^
HINT:  No function matches the given name and argument types. You 
might need to add explicit type casts.
 [SQL: 'SELECT test29172015052936_task_log_associations.task_log_id AS 
test29172015052936_task_log_associations_task_log_id, 
test29172015052936_task_log_associations.task_id AS 
test29172015052936_task_log_associations_task_id, 
test29172015052936_task_log_associations.attempt AS 
test29172015052936_task_log_associations_attempt, 
test29172015052936_task_log_associations.state AS 
test29172015052936_task_log_associations_state \nFROM 
test29172015052936_task_log_associations \nWHERE 
test29172015052936_task_log_associations.attempt = %(attempt_1)s AND 
%(param_1)s = test29172015052936_task_log_associations.task_log_id AND 
%(param_2)s = test29172015052936_task_log_associations.task_id \n 
LIMIT %(param_3)s'] [parameters: {'param_1': symbol('NEVER_SET'), 
'attempt_1': 1, 'param_2': 1, 'param_3': 1}]


Apparently, sqlalchemy will use "symbol('NEVER_SET')" where the id of 
the model used for filtering should be.


It may be a bit questionable to filter by a model that doesn't even 
exist in the database, but, again, this used to work fine in 0.9.9.


This is odd. What was "working fine" in 0.9.9 doing exactly? Was it 
coming out with "NULL = 
test29172015052936_task_log_associations.task_log_id"? Looking in 0.9, 
there is no logic to convert the "=" to "IS" in this case, as the 
parameter from the object is not evaluated until after the query is 
generated. This comparison will *always* evaluate as false, because 
NULL cannot be compared with "=".


I guess that's what's desired here, that the query returns nothing, 
but this is a lot like the idea of "x IN ()", e.g. it's useless to 
emit this query, and it relies upon kind of a weird quirk of SQL. I 
almost wonder if this should emit a warning. Because if we do 
eventually make it so that IS NULL comes out, the results can change 
for more complex relationships that explicitly want to compare some 
columns to NULL.













Regards,
  Guido W.


Re: [sqlalchemy] Non backwards-compatible changes in 1.0? Lots of suddenly failing tests here.

2015-04-20 Thread Mike Bayer



On 4/20/15 12:56 PM, Guido Winkelmann wrote:

I just tested, the problem is still present in the current master 
(bd61e7a3287079cf742f4df698bfe3628c090522 from github).

Guido W.


can you please try current master at least as of 
a3af638e1a95d42075e25e874746, thanks.





Re: [sqlalchemy] Non backwards-compatible changes in 1.0? Lots of suddenly failing tests here.

2015-04-21 Thread Mike Bayer



On 4/21/15 6:45 AM, Guido Winkelmann wrote:

On Monday 20 April 2015 21:57:40 Oliver Palmer wrote:
[...]

So I got to thinking about what we're doing differently with sqlite and
this bit of code comes to mind:


# sqlite specific configuration for development
if db.engine.name == "sqlite":
    @event.listens_for(Engine, "connect")
    def set_sqlite_pragma(dbapi_connection, connection_record):
        cursor = dbapi_connection.cursor()
        cursor.execute("PRAGMA foreign_keys=ON")
        cursor.execute("PRAGMA synchronous=OFF")
        cursor.execute("PRAGMA journal_mode=MEMORY")
        cursor.close()


If I comment the above out in our application.py module, the second 
nosetests example above works without issues.

This looks to me like you are "fixing" the problem by just not enabling
foreign key support in sqlite.  Since the problem was a foreign key violation,
telling sqlite to not bother enforcing those will make it so we don't see the
problem in the tests anymore, but it doesn't fix whatever is going on here...
what is needed here is actual logging of the tables as they are being 
dropped.  The claim here is that the ordering of the tables is wrong in 
1.0.0.  So can we please see the full list of DROP statements logged for 
both the 0.9.9 version and the 1.0.0 version?   then we will confirm 
that, then we look at the table metadata to see that it should in fact 
enforce the 0.9.9 ordering, then we've confirmed that's the issue and 
can produce a reproduction case.







Guido W.





Re: [sqlalchemy] Can I restrict the set of mappers that configure_mappers() works on?

2015-04-21 Thread Mike Bayer



On 4/21/15 9:31 AM, Steven Winfield wrote:

Hi,

It seems like configuration is attempted for all new mappers, 
globally, whenever a query is done. So if library A and B both use 
sqlalchemy, and A imports B before A's mappers can be properly 
initialised (e.g. there is a relationship("ClassnameAsString") call 
somewhere that can't be resolved yet), and B does something to trigger 
mapper configuration, then it will fail.
This occurs even if A and B make separate calls to declarative_base(), 
even with explicitly different metadata and bound engines.


no, there isn't a way to do that right now, and the short answer is 
that libraries shouldn't be triggering mapper configuration (and 
definitely not doing ORM queries) at import time, and/or the imports 
of A and B should be organized such that B imports fully before A 
starts doing things. Either these libraries have inter-dependencies, 
in which case this implies mapper configuration should span all of the 
mappings in both, or they don't, in which case the imports of A and B 
should not be from within each other.


An enhancement that would limit configuration to groups of mappings is a 
feasible proposal but we don't have that right now. Wouldn't be that 
easy to do without adding a performance penalty since the check for "new 
mappers" would have to be limited to some categorization, meaning 
lookups in critical sections.









Here's a boiled-down version of the problem that I've been playing 
with, which shows that the relationship between Parent and Child is 
configured when a query on Test is done - even though it may be part 
of a different library and in a different database:


from sqlalchemy import Column, Integer, Text, ForeignKey, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker, relationship
import traceback

Base1 = declarative_base()

class Test(Base1):
    __tablename__ = "test"
    id = Column(Integer, primary_key=True)

Base2 = declarative_base()

class Parent(Base2):
    __tablename__ = "parent"
    id = Column(Integer, primary_key=True)

def deferred_parent():
    traceback.print_stack()
    return Parent

class Child(Base2):
    __tablename__ = "child"
    id_parent = Column(Integer, ForeignKey(Parent.id), primary_key=True)
    name = Column(Text, primary_key=True)
    parent = relationship(deferred_parent)

engine = create_engine('sqlite://')
Session = sessionmaker(bind=engine)
session = Session()
try:
    session.query(Test).all()
except:
    pass

...the important bit of the traceback being:

  File "R:\sw\external\20150407-0\python27\lib\site-packages\sqlalchemy-0.9.7-py2.7-win32.egg\sqlalchemy\orm\session.py", line 1165, in query
    return self._query_cls(entities, self, **kwargs)
  File "R:\sw\external\20150407-0\python27\lib\site-packages\sqlalchemy-0.9.7-py2.7-win32.egg\sqlalchemy\orm\query.py", line 108, in __init__
    self._set_entities(entities)
  File "R:\sw\external\20150407-0\python27\lib\site-packages\sqlalchemy-0.9.7-py2.7-win32.egg\sqlalchemy\orm\query.py", line 118, in _set_entities
    self._set_entity_selectables(self._entities)
  File "R:\sw\external\20150407-0\python27\lib\site-packages\sqlalchemy-0.9.7-py2.7-win32.egg\sqlalchemy\orm\query.py", line 151, in _set_entity_selectables
    ent.setup_entity(*d[entity])
  File "R:\sw\external\20150407-0\python27\lib\site-packages\sqlalchemy-0.9.7-py2.7-win32.egg\sqlalchemy\orm\query.py", line 2997, in setup_entity
    self._with_polymorphic = ext_info.with_polymorphic_mappers
  File "R:\sw\external\20150407-0\python27\lib\site-packages\sqlalchemy-0.9.7-py2.7-win32.egg\sqlalchemy\util\langhelpers.py", line 726, in __get__
    obj.__dict__[self.__name__] = result = self.fget(obj)
  File "R:\sw\external\20150407-0\python27\lib\site-packages\sqlalchemy-0.9.7-py2.7-win32.egg\sqlalchemy\orm\mapper.py", line 1871, in _with_polymorphic_mappers
    configure_mappers()
  File "R:\sw\external\20150407-0\python27\lib\site-packages\sqlalchemy-0.9.7-py2.7-win32.egg\sqlalchemy\orm\mapper.py", line 2583, in configure_mappers
    mapper._post_configure_properties()
  File "R:\sw\external\20150407-0\python27\lib\site-packages\sqlalchemy-0.9.7-py2.7-win32.egg\sqlalchemy\orm\mapper.py", line 1688, in _post_configure_properties
    prop.init()
  File "R:\sw\external\20150407-0\python27\lib\site-packages\sqlalchemy-0.9.7-py2.7-win32.egg\sqlalchemy\orm\interfaces.py", line 144, in init
    self.do_init()
  File "R:\sw\external\20150407-0\python27\lib\site-packages\sqlalchemy-0.9.7-py2.7-win32.egg\sqlalchemy\orm\relationships.py", line 1549, in do_init
    self._process_dependent_arguments()
  File "R:\sw\external\20150407-0\python27\lib\site-packages\sqlalchemy-0.9.7-py2.7-win32.egg\sqlalchemy\orm\relationships.py", line 1605, in _process_dependent_arguments
    self.target = self.mapper.mapped_table
  File "R:\sw\external\20150407-0\python27\lib\site-packages\sqlalchemy-0.9.7-py2.7-win32.egg\sqlalchemy\util\langhelpers.py", line 726, in __get__
    o

Re: [sqlalchemy] Non backwards-compatible changes in 1.0? Lots of suddenly failing tests here.

2015-04-21 Thread Mike Bayer



On 4/21/15 11:19 AM, Guido Winkelmann wrote:

On Tuesday 21 April 2015 09:43:51 Mike Bayer wrote:

On 4/21/15 6:45 AM, Guido Winkelmann wrote:

On Monday 20 April 2015 21:57:40 Oliver Palmer wrote:
[...]


So I got to thinking about what we're doing differently with sqlite and

this bit of code comes to mind:
 # sqlite specific configuration for development
 
 if db.engine.name == "sqlite":

 @event.listens_for(Engine, "connect")
 
 def set_sqlite_pragma(dbapi_connection, connection_record):

 cursor = dbapi_connection.cursor()
 cursor.execute("PRAGMA foreign_keys=ON")
 cursor.execute("PRAGMA synchronous=OFF")
 cursor.execute("PRAGMA journal_mode=MEMORY")
 cursor.close()

If I comment the above out in our application.py
<https://github.com/pyfarm/pyfarm-master/blob/f22912cd7d89b93c146801fd1575
ff0 6f4883724/pyfarm/master/application.py#L208> module the second
nosetests example above works without issues.

This looks to me like you are "fixing" the problem by just not enabling
foreign key support in sqlite.  Since the problem was a foreign key
violation, telling sqlite to not bother enforcing those will make it so we
don't see the problem in the tests anymore, but it doesn't fix whatever is
going on here...

what is needed here is actual logging of the tables as they are being
dropped.  The claim here is that the ordering of the tables is wrong in
1.0.0.  So can we please see the full list of DROP statements logged for
both the 0.9.9 version and the 1.0.0 version?

Can you help me with that?  I don't know how to make SQLAlchemy log all its
DROP statements.
logging is through the standard Python logging system, or alternatively 
the echo=True flag on create_engine() as a shortcut, but for a travis 
build I'd imagine that logging and stdout might already be routed around.


http://docs.sqlalchemy.org/en/rel_1_0/core/engines.html?highlight=logging#configuring-logging
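
For a quick local run, a minimal sketch that logs all SQL, including 
the DROP statements, to stdout:

import logging

logging.basicConfig()
logging.getLogger("sqlalchemy.engine").setLevel(logging.INFO)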







Guido W.





Re: [sqlalchemy] Can I restrict the set of mappers that configure_mappers() works on?

2015-04-21 Thread Mike Bayer



On 4/21/15 12:25 PM, Steven Winfield wrote:
OK, thanks for the quick answer - I guess I shouldn't be using 
sqlalchemy for the importer then, since this necessarily has to 
perform queries in order for scripts to be imported.


what's that about ?   some kind of dynamic scripting environment? If 
you're using a special kind of importer, there should still be ways to 
make it work, because at least there is some top-level control still 
being exercised over the loading of these scripts.





I might attempt restricting mapper configuration to a group of tables 
- I think the performance penalty would be quite small, as the 
categorization tests only need to be done when new mappers need 
configuring (Mapper._new_mappers == True) and should only occur once 
per table, but I'll see if this is the case.



Re: [sqlalchemy] pandas.DataFrame.to_sql method: how to speed up exporting to Microsoft SQL Server (6 minutes for 11 MB!)

2015-04-21 Thread Mike Bayer



On 4/21/15 1:47 PM, John Doe wrote:
I am using pandas 0.16 and sqlalchemy 0.9.9. Yes, I know I should 
probably upgrade, but I don't have admin rights on my PC.

I understand the pandas.DataFrame.to_sql() method relies on sqlalchemy.

I have a pandas dataframe with ca 155,000 rows and 12 columns. If I 
export it to csv with dataframe.to_csv, the output is an 11MB file 
(which is produced instantly).


If, however, I export to a Microsoft SQL Server with the to_sql 
method, it takes between 5 and 6 minutes!


Reading the same table from SQL to Python with the 
pandas.read_sql_table takes 2 seconds.


No columns are text: only int, float, bool and dates. Also, there are 
no constraints on the table. I have seen cases where ODBC drivers set 
nvarchar(max) and this slows down the data transfer, but it cannot be 
the case here.


Any suggestions on how to speed up the export process?

I appreciate that SQLAlchemy will probably never be the fastest method 
to insert data, but 6 minutes for 11 MBs of data is too slow...


Unfortunately I don't work with Pandas so this is dependent on how 
Pandas is doing their queries here.


If you can at least set echo=True on the engine here, you'd see what 
queries are being emitted. Whether they are emitting a handful of 
queries or thousands makes a difference; whether these queries are 
returning vast numbers of rows due to cartesian products or similar 
makes a difference; and how they are fetching rows back makes a 
difference.


It would be helpful to everyone if either you or someone on the Pandas 
development team could walk through the steps detailed at 
http://docs.sqlalchemy.org/en/rel_1_0/faq/performance.html#how-can-i-profile-a-sqlalchemy-powered-application 
to isolate where the performance issue is originating.
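
A minimal profiling sketch along the lines of that FAQ entry, wrapping 
the call from your code (my_data_frame, TableName and engine as in 
your example):

import cProfile
import pstats

pr = cProfile.Profile()
pr.enable()
my_data_frame.to_sql(TableName, engine)
pr.disable()

pstats.Stats(pr).sort_stats("cumulative").print_stats(30)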





Thanks!


My code looks like this:

import pandas as pd
from sqlalchemy import create_engine, MetaData, Table, select

ServerName = "myserver"
Database = "mydatabase"
TableName = "mytable"

engine = create_engine('mssql+pyodbc://' + ServerName + '/' + Database)
conn = engine.connect()
metadata = MetaData(conn)

my_data_frame.to_sql(TableName, engine)




Re: [sqlalchemy] pandas.DataFrame.to_sql method: how to speed up exporting to Microsoft SQL Server (6 minutes for 11 MB!)

2015-04-22 Thread Mike Bayer



On 4/22/15 5:45 AM, John Doe wrote:
Is there a way to output all the SQL statements into a file? I don't 
mean a logging file with other information, I mean a file with 
**only** SQL statements, which could ideally be run as it is from 
within my database client.

Python logging will get you there:


import logging

logger = logging.getLogger("sqlalchemy.engine")
logger.setLevel(logging.INFO)   # use DEBUG to get result sets also
handler = logging.FileHandler("myfile.txt")
logger.addHandler(handler)

from sqlalchemy import create_engine
e = create_engine("sqlite://")
e.execute("select 1")





I tried setting echo=True, and I also tried this: 
http://stackoverflow.com/questions/6350411/how-to-retrieve-executed-sql-code-from-sqlalchemy


but I can't spot anything out of the ordinary.

I profiled the code, and saw that 99% of the time is spent in the 
do_executemany method of sqlalchemy\engine\default.py, which arguably 
doesn't reveal much per se unless I understand better how 
pandas.DataFrame.to_sql calls sqlalchemy

This is the main thing you'd want to see when data is being inserted 
into the database, and it indicates the most efficient route is being 
taken.


But reading that surprised me.

I carefully re-read your first email, and now I see that you are 
trying to write *TO* the database, yes? And you're comparing the speed 
of writing *TO* to the speed of reading *FROM*?


Unfortunately, selecting data from some tables compared to inserting 
data in those tables are two completely different activities.   It is 
not at all surprising to see a large difference in speed. Particularly 
if the table has a lot of indexing on it, or we're putting in very large 
data values, inserts can be slow.  It also varies much by database 
backend and the configuration and environment of those databases.  SQL 
Server can be a very fast database but it is not always that easy to tune.


99% of the time spent in do_executemany() actually means most of the 
time is spent outside of SQLAlchemy, within the database driver 
inserting those rows.

The echo output will show you more, though bulk INSERT statements will 
be very long in the logfile because we log a segment of the parameters 
as well.






Re: [sqlalchemy] Non backwards-compatible changes in 1.0? Lots of suddenly failing tests here.

2015-04-22 Thread Mike Bayer



On 4/22/15 12:45 AM, Oliver Palmer wrote:
We're using a Flask extension to work with sqlalchemy called 
flask-sqlalchemy.  The engine is usually not directly exposed but echo 
can be enabled using a configuration var 
<https://github.com/pyfarm/pyfarm-master/commit/5d0abc03273f0fcce3c7d2cf44ef8981dd31aa41> which should 
have the same impact in terms of logging the sql statements.  This 
time around I only ran the tests for one module because we hit 
Travis's 4MB log limit in a couple of seconds when running all the 
tests at once.  I can run the full test suite locally and upload that 
log output somewhere if you need it.


Otherwise, here's the failures when using 
3e80d628bd133d0fd0687e35b8d13abd1d31d6df (search for 'IntegrityError'):


https://s3.amazonaws.com/archive.travis-ci.org/jobs/59494875/log.txt
https://s3.amazonaws.com/archive.travis-ci.org/jobs/59494876/log.txt
https://s3.amazonaws.com/archive.travis-ci.org/jobs/59494877/log.txt
https://s3.amazonaws.com/archive.travis-ci.org/jobs/59494878/log.txt

https://s3.amazonaws.com/archive.travis-ci.org/jobs/59494880/log.txt



OK can you please share:

1. What is the system by which these table names are being generated, as 
they appear to be dynamic?Are Table objects being copied, are there 
events within the execution process, or how is that working?


2. Have you tried running this test suite locally?   Even just this one 
test?  Does it pass?   Can you please set the environment variable 
"PYTHONHASHSEED=random" (see 
https://docs.python.org/2/using/cmdline.html#envvar-PYTHONHASHSEED for 
use) and then run this single test in a local environment repeatedly?



3. Share the mapping and/or Table metadata that is used within the 
drop_all() for these tables:


CREATE TABLE test14222015041441_users (
    id INTEGER NOT NULL,
    active BOOLEAN,
    username VARCHAR(254) NOT NULL,
    password VARCHAR(64),
    email VARCHAR(255),
    expiration DATETIME,
    onetime_code VARCHAR(64),
    last_login DATETIME,
    PRIMARY KEY (id),
    CHECK (active IN (0, 1)),
    UNIQUE (username),
    UNIQUE (email)
)


CREATE TABLE test14222015041441_user_roles (
    user_id INTEGER,
    role_id INTEGER,
    FOREIGN KEY(user_id) REFERENCES test14222015041441_users (id),
    FOREIGN KEY(role_id) REFERENCES test14222015041441_roles (id)
)


CREATE TABLE test14222015041441_roles (
    id INTEGER NOT NULL,
    active BOOLEAN,
    name VARCHAR(128) NOT NULL,
    expiration DATETIME,
    description TEXT,
    PRIMARY KEY (id),
    CHECK (active IN (0, 1)),
    UNIQUE (name)
)



4. Can you confirm that these tables are in fact defined with MetaData() 
and Table(), and are **not** using "autoload=True" in any way?   This 
includes no use of automap, SQLSoup, etc.


5. Is this the same test that fails every time (or at least when things 
fail)?  Or do different tests of a similar nature fail randomly?







But, it also passed once too (like the test I did yesterday):

https://s3.amazonaws.com/archive.travis-ci.org/jobs/59494879/log.txt


All of the above are from this Travis 
job: https://travis-ci.org/pyfarm/pyfarm-master/builds/59494874


And with sqlalchemy 0.9.9.  Same tests and logging configuration just 
consistently passing compared to the above:


https://s3.amazonaws.com/archive.travis-ci.org/jobs/59494606/log.txt
https://s3.amazonaws.com/archive.travis-ci.org/jobs/59494607/log.txt
https://s3.amazonaws.com/archive.travis-ci.org/jobs/59494609/log.txt
https://s3.amazonaws.com/archive.travis-ci.org/jobs/59494610/log.txt
https://s3.amazonaws.com/archive.travis-ci.org/jobs/59494611/log.txt


Here's the original Travis job for those: 
https://travis-ci.org/pyfarm/pyfarm-master/builds/59494605




By the way, thanks a bunch for helping us take a look at this.



Re: [sqlalchemy] SQLAlchemy as plain-SQL generator

2015-04-22 Thread Mike Bayer



On 4/22/15 8:27 AM, Richard Gerd Kuesters wrote:

hi,

you must use a specific dialect so sqlalchemy can create it for you. 
not the best usage, imho, but here it goes:


stmt = query.compile().process(query, literal_binds=True)


OK since Pavel noted the double compile, if you want the literal_binds 
part (not clear if that is needed here?), it goes into compile()


stmt = str(query.compile(compile_kwargs={"literal_binds": True}))
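
Putting that together with the original select(), a complete sketch; 
note a dialect can also be handed to compile() directly, without 
binding an engine to the metadata:

from sqlalchemy import MetaData, Table, Column, Integer, Text, select
from sqlalchemy.dialects import postgresql

metadata = MetaData()
Tbl = Table('Tbl', metadata,
    Column('A', Integer),
    Column('B', Integer),
    Column('C', Text),
)

query = select([Tbl.c.A, Tbl.c.B]).where(Tbl.c.C == "Foo")

# generic SQL with the literal values inlined
print(query.compile(compile_kwargs={"literal_binds": True}))

# or rendered for a specific backend
print(query.compile(dialect=postgresql.dialect(),
                    compile_kwargs={"literal_binds": True}))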







i don't know if you want it formatted or what; if so, sqlparse 
provides a good way to do it.

but, again, this will only work if you BIND an engine to your 
metadata, so sqlalchemy can write the right sql for you. also, your 
query should be a select() object, like your example.

this worked for me for some time, using SA 0.9, i don't know about 
1.0, but it should work too.

cheers,
richard.


On 04/22/2015 04:26 AM, Pavel S wrote:

Hello,

I have a pythonic application which imports a custom module written in 
C++ using boost::python.

The module creates database connection(s) and executes queries.
The python code calls various methods on that module and passes plain 
SQL into them. Results are then returned to python.


I would like to use SQLAlchemy Core to generate plain SQL strings 
which will be passed to the module.

Let's say I want to generate the following string:

SELECT A, B FROM Tbl WHERE C = "Foo";

The equivalent in SA is:

query = select([
    Tbl.c.A,
    Tbl.c.B
]).where(
    Tbl.c.C == "Foo"
)

How can I produce the string from that query?


Re: [sqlalchemy] SAWarning: Unicode type received non-unicode bind param value

2015-04-22 Thread Mike Bayer



On 4/22/15 3:48 AM, Pavel S wrote:

Hi,

it happened to me many times during development, mainly when using 
custom column types, that I passed the wrong type of value to the 
query.

Then the following warning was emitted:

SAWarning: Unicode type received non-unicode bind param value

The problem with such a warning is that it does not say
1) where the problem occurred (the line number points to somewhere in 
sqlalchemy and not to my application)

Use the warnings filter to turn these into exceptions:

import warnings
warnings.simplefilter("error")
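
or, to turn only SQLAlchemy's warnings into errors while leaving other 
libraries' warnings alone:

import warnings
from sqlalchemy import exc as sa_exc

warnings.simplefilter("error", category=sa_exc.SAWarning)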


2) what was the value which caused the problem (a repr() of that value 
would be nice to have)

This was not possible for a long time, because when you emit a warning 
with a specific message, the Python interpreter stores it 
*permanently* in a hidden collection - this is how Python knows not to 
emit the warning a second time. Which means that if our warning 
emitted the actual string, and the program proceeded through millions 
of values, you would get millions of warnings, these warnings would 
all be stored permanently, and your application would crash from 
running out of memory fairly quickly (if not just from the burden of 
outputting millions of warnings).


In 1.0 we have come up with a novel approach such that we display just 
the first 10 warnings of this nature, illustrating the string value.


I'd say upgrade to 1.0 however don't go into production with it until I 
get 1.0.1 out, as we had a bunch of regressions not located during the 
beta period.





Re: [sqlalchemy] SAWarning: Unicode type received non-unicode bind param value

2015-04-22 Thread Mike Bayer



On 4/22/15 5:47 AM, Pavel S wrote:

If I turn warnings into errors, the problem is still the same:

1) where the error occurred, as the code path in the traceback is not 
pointing to the place where the select() has been bound with values

that would suggest your code has some less typical form where you are 
creating a select() object in some distant place, storing it, and then 
invoking it in some other place entirely? If so, you'd have to track 
that. Bound parameter values aren't touched until execution time.


Turning on SQL logging will illustrate the statement being sent and you 
can use some basic intuition to point it to what code is causing it.






2) you don't know what the value was







On Wednesday, April 22, 2015 at 10:09:51 UTC+2, SElsner wrote:

Hey,

you can do:

  import warnings
  warnings.simplefilter('error')

This will raise an exception and give you a stack trace showing where 
the Unicode warning happened.



-- 

check out pointcloud9.com

Sebastian Elsner - Pipeline Technical Director - RISE

t: +49 30 20180300  sebast...@risefx.com
f: +49 30 61651074  www.risefx.com

RISE FX GmbH
Schlesische Strasse 28, 10997 Berlin
An der Schanz 1A, 50735 Köln
Büchsenstraße 20, 70174 Stuttgart
Gumpendorferstrasse 55, 1060 Wien
Geschaeftsfuehrer: Sven Pannicke, Robert Pinnow
Handelsregister Berlin HRB 106667 B



Re: [sqlalchemy] pandas.DataFrame.to_sql method: how to speed up exporting to Microsoft SQL Server (6 minutes for 11 MB!)

2015-04-22 Thread Mike Bayer



On 4/22/15 10:11 AM, John Doe wrote:



On Wednesday, April 22, 2015 at 2:46:19 PM UTC+1, Michael Bayer wrote:



I carefully re-read your first email, and now I see, that you are
trying
to write *TO* the database, yes?And you're comparing the speed of
writing *TO* to the speed of reading *FROM*?

Unfortunately, selecting data from some tables compared to inserting
data in those tables are two completely different activities.   It is
not at all surprising to see a large difference in speed. 



Of course. I do not expect that writing to the database will be as 
fast as reading from it.
However, 6 minutes to write 11MBs of data is ridiculous, because this 
is the time it takes even when:
1) there are no text columns (so the nvarchar(max) issue I mentioned, 
and saw in other cases, cannot apply)

2) there are only floating-point numbers
3) there are no primary keys, no indices and no constraints of any kind

I trust you'll concur it's hard to justify such a long time to 
transfer such a small table, even in the examples above.
I am not a DBA, but I cannot think of other reasons that would cause 
such terrible speeds. Any thoughts would be more than welcome!


Maybe the to_sql method is committing to the database after inserting 
every single row and this slows things down? Even so, 6 minutes for 
11MBs means 31KB per second. My internet connection in the '90s was 
faster than that :)


a COMMIT every time would slow it down, yes, but then in your profiling 
you'd see the do_commit() method taking up that time as well.


if you can get your logging going, you'll see what SQL its emitting, and 
you can also set up the formatting to have timestamps so you can measure 
the time spent between queries.
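
a minimal sketch of that logging setup, with millisecond timestamps:

import logging

logging.basicConfig(
    format="%(asctime)s.%(msecs)03d %(message)s",
    datefmt="%H:%M:%S",
)
logging.getLogger("sqlalchemy.engine").setLevel(logging.INFO)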


Also, if you can, try out the pymssql driver, instead of pyodbc. pymssql 
is much better maintained these days.
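
the switch is just the dialect name in the URL; the host, database and 
credentials here are placeholders:

from sqlalchemy import create_engine

engine = create_engine("mssql+pymssql://user:password@myserver/mydatabase")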







I have also been very surprised by the lack of info on the net. Either 
I am the only one using the to_sql method, or I am the only one who 
finds it so slow!







Re: [sqlalchemy] Non backwards-compatible changes in 1.0? Lots of suddenly failing tests here.

2015-04-22 Thread Mike Bayer


On 4/22/15 12:45 AM, Oliver Palmer wrote:
We're using a Flask extension to work with sqlalchemy called 
flask-sqlalchemy.  The engine is usually not directly exposed but echo 
can be enabled using a configuration var 
<https://github.com/pyfarm/pyfarm-master/commit/5d0abc03273f0fcce3c7d2cf44ef8981dd31aa41> which should 
have the same impact in terms of logging the sql statements.  This 
time around I only ran the tests for one module because we hit 
Travis's 4MB log limit in a couple of seconds when running all the 
tests at once.  I can run the full test suite locally and upload that 
log output somewhere if you need it.



oddly enough, using just the schema you have this issue seems to 
reproduce.  On SQLite, and not Postgresql. Which is completely weird 
because the sorting of tables has nothing to do with the backend.


always a mystery how something that looks about as possible as all the 
buildings being upside down one day will become something very simple.







Otherwise, here's the failures when using 
3e80d628bd133d0fd0687e35b8d13abd1d31d6df (search for 'IntegrityError'):


https://s3.amazonaws.com/archive.travis-ci.org/jobs/59494875/log.txt
https://s3.amazonaws.com/archive.travis-ci.org/jobs/59494876/log.txt
https://s3.amazonaws.com/archive.travis-ci.org/jobs/59494877/log.txt
https://s3.amazonaws.com/archive.travis-ci.org/jobs/59494878/log.txt

https://s3.amazonaws.com/archive.travis-ci.org/jobs/59494880/log.txt


But, it also passed once too (like the test I did yesterday):

https://s3.amazonaws.com/archive.travis-ci.org/jobs/59494879/log.txt


All of the above are from this Travis 
job: https://travis-ci.org/pyfarm/pyfarm-master/builds/59494874


And with sqlalchemy 0.9.9.  Same tests and logging configuration just 
consistently passing compared to the above:


https://s3.amazonaws.com/archive.travis-ci.org/jobs/59494606/log.txt
https://s3.amazonaws.com/archive.travis-ci.org/jobs/59494607/log.txt
https://s3.amazonaws.com/archive.travis-ci.org/jobs/59494609/log.txt
https://s3.amazonaws.com/archive.travis-ci.org/jobs/59494610/log.txt
https://s3.amazonaws.com/archive.travis-ci.org/jobs/59494611/log.txt


Here's the original Travis job for those: 
https://travis-ci.org/pyfarm/pyfarm-master/builds/59494605




By the way, thanks a bunch helping us taking a look at this.


On Tuesday, April 21, 2015 at 11:42:30 AM UTC-4, Michael Bayer wrote:



On 4/21/15 11:19 AM, Guido Winkelmann wrote:
> On Tuesday 21 April 2015 09:43:51 Mike Bayer wrote:
>> On 4/21/15 6:45 AM, Guido Winkelmann wrote:
>>> On Monday 20 April 2015 21:57:40 Oliver Palmer wrote:
>>> [...]
>>>
>>>> So I got to thinking about what we're doing differently with
sqlite and
>>>>
>>>> this bit of code comes to mind:
>>>>  # sqlite specific configuration for development
>>>>
>>>>  if db.engine.name <http://db.engine.name> == "sqlite":
>>>>  @event.listens_for(Engine, "connect")
>>>>
>>>>  def set_sqlite_pragma(dbapi_connection,
connection_record):
>>>>  cursor = dbapi_connection.cursor()
>>>>  cursor.execute("PRAGMA foreign_keys=ON")
>>>>  cursor.execute("PRAGMA synchronous=OFF")
>>>>  cursor.execute("PRAGMA journal_mode=MEMORY")
>>>>  cursor.close()
>>>>
>>>> If I comment the above out in our application.py
>>>> <https://github.com/pyfarm/pyfarm-master/blob/f22912cd7d89b93c146801fd1575ff06f4883724/pyfarm/master/application.py#L208>
>>>> module the second nosetests example above works without issues.
>>> This looks to me like you are "fixing" the problem by just not
enabling
>>> foreign key support in sqlite.  Since the problem was a
foreign key
>>> violation, telling sqlite to not bother enforcing those will
make it so we
>>> don't see the problem in the tests anymore, but it doesn't fix
whatever is
>>> going on here...
>> what is needed here is actual logging of the tables as they are
being
>> dropped.  The claim here is that the ordering of the tables is
wrong in
>> 1.0.0.  So can we please see the full list of DROP statements
logged for
>> both the 0.9.9 version and the 1.0.0 version?
> Can you help me with that?  I don't know how to make SQLAlchemy

Re: [sqlalchemy] pandas.DataFrame.to_sql method: how to speed up exporting to Microsoft SQL Server (6 minutes for 11 MB!)

2015-04-22 Thread Mike Bayer



On 4/22/15 12:07 PM, John Doe wrote:



On Wednesday, April 22, 2015 at 3:20:13 PM UTC+1, Michael Bayer wrote:


a COMMIT every time would slow it down, yes, but then in your
profiling you'd see the do_commit() method taking up that time as
well.

if you can get your logging going, you'll see what SQL it's
emitting, and you can also set up the formatting to have
timestamps so you can measure the time spent between queries.

Also, if you can, try out the pymssql driver, instead of pyodbc. 
pymssql is much better maintained these days.




Will try that, thanks.

In the meanwhile, I have tried your suggestions. The to_sql method 
creates a single insert statement and then commits at the end. I can't 
see anything which would justify such a low speed. The mystery remains!


If I can abuse your patience a bit more, is there a way to:
1) display the entire SQL statements created? The method you suggested 
shows only 10 records, and adds the text "displaying 10 of 1000 total 
bound parameter sets" when run on a 1000-record table


use an event listener and do the logging, printing, and/or debugging 
that you want.  I recommend before_cursor_execute(): 
http://docs.sqlalchemy.org/en/rel_1_0/core/events.html?highlight=before_cursor_execute#sqlalchemy.events.ConnectionEvents.before_cursor_execute
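
a rough sketch of such a listener (the timing and printing details are 
up to you):

import time
from sqlalchemy import event

@event.listens_for(engine, "before_cursor_execute")
def log_statement(conn, cursor, statement, parameters, context, executemany):
    # print each statement with a timestamp; for executemany, the whole
    # list of parameter sets is in "parameters"
    print(time.strftime("%H:%M:%S"), statement)
    if executemany:
        print("... %d parameter sets" % len(parameters))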



2) generate the SQL statements but without connecting to the server? 
If I can get a text file with the insert statement, then I can run 
that from within my database client, and it will be faster


Stick with #1.   This can be more challenging because the script needs 
to only emit statements that don't expect results.  There is a tool 
called the "mock strategy" that will do this which has a short example 
at 
http://docs.sqlalchemy.org/en/rel_1_0/faq/metadata_schema.html#how-can-i-get-the-create-table-drop-table-output-as-a-string. 
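
the short version of that example, adapted here for the mssql dialect 
("metadata" is assumed to be your MetaData object):

from sqlalchemy import create_engine

def dump(sql, *multiparams, **params):
    # render each construct as a string instead of executing it
    print(sql.compile(dialect=engine.dialect))

engine = create_engine("mssql+pyodbc://", strategy="mock", executor=dump)
metadata.create_all(engine, checkfirst=False)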










I have also been very surprised by the lack of info on the net.
Either I am the only one using the to_sql method, or I am the
only one who finds it so slow!








Re: [sqlalchemy] Non backwards-compatible changes in 1.0? Lots of suddenly failing tests here.

2015-04-22 Thread Mike Bayer



On 4/22/15 12:45 AM, Oliver Palmer wrote:
We're using a Flask extension to work with sqlalchemy called 
flask-sqlalchemy.  The engine is usually not directly exposed but echo 
can be enabled using a configuration var, which should 
have the same impact in terms of logging the sql statements.  This 
time around I only ran the tests for one module because we hit 
Travis's 4MB log limit in a couple of seconds when running all the 
tests at once.  I can run the full test suite locally and upload that 
log output somewhere if you need it.


Otherwise, here's the failures when using 
3e80d628bd133d0fd0687e35b8d13abd1d31d6df (search for 'IntegrityError'):


can you please try version 39978060b0d81bd470aade97e608.

All issues should be resolved at this point, please let me know ASAP, 
thanks!





[sqlalchemy] SQLAlchemy 1.0.1 Released

2015-04-23 Thread Mike Bayer

SQLAlchemy release 1.0.1 is now available.

This is a quick bug-fix release that repairs several new regressions 
identified in the 1.0.0 release, not found during the beta period. All 
users of 1.0.0 are encouraged to upgrade to 1.0.1.


Key elements of this release include fixes regarding the NEVER_SET 
symbol leaking into queries in some cases, fixes to SQLite when using 
DDL in conjunction with referential integrity enabled, a fix to the 
EXISTS construct which primarily impacts queries that use special 
datatypes, and repairs to the Firebird dialect regarding the new 
LIMIT/OFFSET features.


In order to accommodate some of these fixes, there are three additional 
behavioral changes in 1.0.1; a new warning is emitted when using DDL 
with SQLite in conjunction with mutually-dependent foreign keys (e.g. a 
reference cycle), a new warning is emitted when running ORM relationship 
comparisons when the target object contains the value None for any of 
the Python-side column values, and a change is made regarding which data 
values are used within a relationship comparison that uses the != 
operator, in order to make the behavior consistent with that of the == 
operator as used in the same context. The migration notes contain 
updates for all three of these changes, and they are each linked 
directly from the changelog which should be carefully reviewed.


Changelog for 1.0.1 is at: http://www.sqlalchemy.org/changelog/CHANGES_1_0_1

SQLAlchemy 1.0.1 is available on the download page at:

http://www.sqlalchemy.org/download.html



Re: [sqlalchemy] Access __tablename__ in server_defaults?

2015-04-23 Thread Mike Bayer
if you are using __tablename__ what happens if you just refer to 
cls.__tablename__ in that method ?
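
i.e., something along these lines (an untested sketch):

from sqlalchemy import Column, Integer, text
from sqlalchemy.ext.declarative import declared_attr

class PositionMixin(object):
    @declared_attr
    def position(cls):
        # cls is the mapped class being configured, so its
        # __tablename__ is available here
        return Column(
            Integer,
            server_default=text("nextval('%s_id_seq')" % cls.__tablename__))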




On 4/23/15 3:46 PM, Jacob Magnusson wrote:
Would it somehow be possible to access the __tablename__ in 
server_default? What I'm looking for is something like this:


class PositionMixin(object):
    @declared_attr
    def position(cls):
        return Column(Integer,
            server_default=text("nextval('%(__tablename__)s_id_seq')"))





Re: [sqlalchemy] Add information to reflected tables

2015-04-23 Thread Mike Bayer



On 4/23/15 5:15 PM, Jeffrey Yan wrote:
I have a couple of tables that I want to reflect. The first is a data 
table where one of the columns is a foreign key to the second table.


If I used SQLAlchemy declarative_base, a query might look something 
like this:


session.query(Client.name, Suburb.label).join(Suburb)
# In the Client class there is an attribute
# suburb_id = Column(Integer, ForeignKey(Suburb.id))
However, this foreign key is not specified in the schema (we're using 
postgres 9.2) but we know all the columns that look like something_id 
are foreign keys, so I've been defining them that way using SQLAlchemy.



My problem is, although we have a fixed number of property tables 
(suburb, country, join_date, ...) - each data table (per client) can 
have a different set of columns.


This hasn't been much of a problem so far, since we only have a few 
/types/ of client data tables, so the combinations have been limited. 
However, I'd like to cater for changes in the future.


If I reflect the table using SQLAlchemy, the resultant table will not 
have the ForeignKey columns compared to if I did it manually. Is there 
a way to add these in after reflection?
you can do this by instrumenting the process of the Table being built 
up.  It's weird enough that I had to write a demo to verify it works, 
so here it is:


from sqlalchemy import create_engine
from sqlalchemy import event
from sqlalchemy import MetaData, ForeignKey, Column
import re

e = create_engine("postgresql://scott:tiger@localhost/test")

conn = e.connect()
trans = conn.begin()

conn.execute("""
create table foo (id integer primary key)
""")

conn.execute("""
create table bar (id integer primary key, foo_id integer)
""")


metadata = MetaData()


@event.listens_for(Column, "before_parent_attach")
def associate_fk(column, table):
# if you want to limit the event's scope; a good idea
# else this will take place for Column objects everywhere
if table.metadata is not metadata:
return

m = re.match(r"(.+)_id", column.name)
if m:
tname = m.group(1)
column.append_foreign_key(ForeignKey('%s.id' % tname))

metadata.reflect(conn)


foo = metadata.tables['foo']
bar = metadata.tables['bar']

assert bar.c.foo_id.references(foo.c.id)











Or is my only option to use reflected tables and explicit join 
conditions? Something like:


client_table_1 = Table('client_table_1', metadata, autoload=True,
                       autoload_with=engine, schema='client_1')
session.query(client_table_1.c.name, Suburb.label).join(
    Suburb, client_table_1.c.suburb_id == Suburb.id)
# Explicit joins only from now on






Re: [sqlalchemy] Re: H2 database

2015-04-24 Thread Mike Bayer



On 4/24/15 12:06 PM, Jonathan Vanasco wrote:

SqlAlchemy needs 2 things for a database to work:

1. A python database driver
2. A SqlAlchemy dialect (that tells SqlAlchemy how to write sql for 
the driver)


So...

1. H2 doesn't seem to have any Python drivers or other support.  I 
just came across an old forum post that talked about possibly using 
Postgresql client since they supported similar protocols, but that's 
about all I could find.


2. There are a handful of posts and articles on writing custom 
dialects.  Several recent ones as well.


If you're just trying to handle the dialect, and their syntax is 
reasonably similar to an existing dialect... I think you could do it 
alone.


If you have to write the general Python support as well though, that's 
a bigger task.

plus, there's a README for general info on new dialects!

https://github.com/zzzeek/sqlalchemy/blob/master/README.dialects.rst











Re: [sqlalchemy] Making a new type out of Postgres DATERANGE + Psycopg2 DateRange

2015-04-24 Thread Mike Bayer



On 4/24/15 9:09 AM, Dimitris Theodorou wrote:

Hi,

I am using psycopg2 and trying to put together a new daterange type 
that combines the following:


1. A custom daterange class which provides various useful helpers 
(backwards/forwards iteration, intersection).
2. The psycopg2 DateRange that fills in with some helpers of its own 
(contains, comparisons) and takes care of marshaling the object 
from/to the db
3. The DATERANGE sqlalchemy/postgres type to take care of emitting 
SQL operators and DDL, and so that I can use it both when querying and 
when declaring Columns()


So my naive approach is the following, going by the guideline at 
http://docs.sqlalchemy.org/en/latest/core/compiler.html#subclassing-guidelines:


from psycopg2.extras import DateRange
from sqlalchemy.types import TypeDecorator
from sqlalchemy.dialects.postgresql import DATERANGE

class MyDateRange(TypeDecorator, DateRange):

    impl = DATERANGE

    def intersection():
        # ...


OK well the type object we make here represents the type, not the value, 
so psycopg2's DateRange would not be part of the class declaration.


SQLAlchemy's DATERANGE type was written to integrate with psycopg2 
pretty much, does do a lot of operations, including overlaps, 
containment, etc.  the operators are here: 
http://docs.sqlalchemy.org/en/rel_1_0/dialects/postgresql.html#sqlalchemy.dialects.postgresql.ranges.RangeOperators.comparator_factory 
, if you're just looking to add new comparison operators to this class, 
you'd follow the guidelines at 
http://docs.sqlalchemy.org/en/rel_1_0/core/custom_types.html#redefining-and-creating-new-operators. 



Also, always let us know what "crashes" means, e.g. stack trace, error 
message, so that we know what it's doing and we can provide more 
information.








This crashes when emitting a table.create() statement.

When I look at the documentation on how to create/modify types I am 
left dumbfounded and I realize I would need a significant trial-and-error 
investment to figure out how things work. Does anyone else have 
experience with such an attempt? I'd rather not become an expert in 
sqlalchemy type extension before I can make this work.


Thanks,
Dimitris




Re: [sqlalchemy] Preserve mapped entities after wrapping the query

2015-04-24 Thread Mike Bayer



On 4/24/15 5:25 PM, Пайков Юрий wrote:

q = session.query(Recipe, func.avg(Recipe.field1).over(...)).join(...)
I have a query which selects some mapped entity as well as other columns.
I then refer to the name of that entity when working with the result 
of the query:

for entry in q.all():
    recipe = entry.Recipe
Now, I want to add filtering by some calculated criteria to my query, 
and so I wrap it in an additional query:

q = q.subquery()
q = session.query(q).filter(q.c.avg_1 > 10)
However, this way I can no longer access entry.Recipe! Is there a way 
to make sqlalchemy adapt names? I tried aliased and 
select_entity_from, but no luck :(


this is getting into less reliable stuff, but instead of subquery() -> 
session.query(q), use the from_self() method.  It's designed to work 
this way, and your Recipe entity will be adapted into the subquery.


I've observed that the vast majority of my users don't seem to get into 
queries like these, so from_self() is not as popular (or widely tested) 
as it should be, but it is at the base of a lot of widely used functions 
like count() and subquery eager loading, so give it a try:


http://docs.sqlalchemy.org/en/rel_1_0/orm/query.html?highlight=from_self#sqlalchemy.orm.query.Query.from_self

apparently it needs some documentation too :)







[sqlalchemy] SQLAlchemy 1.0.2 Released

2015-04-24 Thread Mike Bayer

SQLAlchemy release 1.0.2 is now available.

As additional regressions are still being reported, we would like to get 
these changes out as fast as possible so that early adopters are working 
on a 1.0 version that is most representative of what its final form will 
be. 1.0.2 includes two additional fixes, one of which was particularly 
thorny in that it was already "fixed" in 1.0.0b4, but was not fixed 
correctly, regarding the placement of plain label names in GROUP BY 
expressions; so the incorrect fix is reverted and the original 
regression from 0.9 is resolved as well.


Changelog for 1.0.2 is at: http://www.sqlalchemy.org/changelog/CHANGES_1_0_2

SQLAlchemy 1.0.2 is available on the download page:

http://www.sqlalchemy.org/download.html



Re: [sqlalchemy] Declarative setup failing on upgrade to 1.0.1

2015-04-24 Thread Mike Bayer
give 1.0.2 a try since we adjusted some things regarding 
__declare_first__ and __declare_last__.   Further than that it depends a 
lot on what your basic "Base" setup looks like, mixins in use, 
extensions like AbstractConcreteBase, stuff like that.  Any details you 
can share would help.





On 4/24/15 7:02 PM, Bill Schindler wrote:
I'm trying to upgrade from SA 0.9.8 to 1.0.1 and getting a traceback. 
I'm not sure what's going on here, but the declarative setup is 
obviously not happy with something. (On 0.9.8, everything runs fine, 
so I've obviously run afoul of something new/different/fixed.)


  File 
"/opt/certwise-lcs/eggs/lcs.content.user-1.0.2dev_r10-py2.7.egg/lcs/content/user/makeorm.py", 
line 89, in make_orm

class Principals(Base):
  File 
"/opt/certwise-lcs/eggs/SQLAlchemy-1.0.1-py2.7-linux-x86_64.egg/sqlalchemy/ext/declarative/api.py", 
line 55, in __init__

_as_declarative(cls, classname, cls.__dict__)
  File 
"/opt/certwise-lcs/eggs/SQLAlchemy-1.0.1-py2.7-linux-x86_64.egg/sqlalchemy/ext/declarative/base.py", 
line 87, in _as_declarative

_MapperConfig.setup_mapping(cls, classname, dict_)
  File 
"/opt/certwise-lcs/eggs/SQLAlchemy-1.0.1-py2.7-linux-x86_64.egg/sqlalchemy/ext/declarative/base.py", 
line 102, in setup_mapping

cfg_cls(cls_, classname, dict_)
  File 
"/opt/certwise-lcs/eggs/SQLAlchemy-1.0.1-py2.7-linux-x86_64.egg/sqlalchemy/ext/declarative/base.py", 
line 134, in __init__

self._early_mapping()
  File 
"/opt/certwise-lcs/eggs/SQLAlchemy-1.0.1-py2.7-linux-x86_64.egg/sqlalchemy/ext/declarative/base.py", 
line 137, in _early_mapping

self.map()
  File 
"/opt/certwise-lcs/eggs/SQLAlchemy-1.0.1-py2.7-linux-x86_64.egg/sqlalchemy/ext/declarative/base.py", 
line 530, in map

del mp_.class_manager.info['declared_attr_reg']
AttributeError: 'NoneType' object has no attribute 'class_manager'

This gets fired off on every ORM class, so I'm guessing the cause is 
somewhere deeper in our code. Any thoughts on what I might look for to 
find the cause?


--
Bill




Re: [sqlalchemy] Preserve mapped entities after wrapping the query

2015-04-25 Thread Mike Bayer



On 4/25/15 6:05 AM, Юрий Пайков wrote:

Ok, I seemed to figure out how to deal with it -
row_number_column = func.row_number().over(
    partition_by=Recipe.id
).label('row_number')
query = query.add_column(row_number_column)
query = query.from_self().filter(row_number_column == 1)

Using an explicit column construct


OK, more like your original though you can still put the window function 
on the inside, and refer to it on the outside:


class A(Base):
__tablename__ = 'a'
id = Column(Integer, primary_key=True)
field = Column(Integer)


s = Session()

# the 'avg' label is optional, it will be auto-labeled anyway
avg = func.avg(A.field).over(partition_by=A.id).label('avg')


q = s.query(A, avg).from_self().filter(avg > 10)

print(q)

in the output, we can see that referring to "avg" the from_self() picks 
up on this and adapts it to the inner query:


SELECT anon_1.a_id AS anon_1_a_id, anon_1.a_field AS anon_1_a_field, 
anon_1.avg AS anon_1_avg
FROM (SELECT a.id AS a_id, a.field AS a_field, avg(a.field) OVER 
(PARTITION BY a.id) AS avg

FROM a) AS anon_1
WHERE anon_1.avg > :param_1





On Saturday, April 25, 2015 at 11:39:08 UTC+5, Юрий Пайков 
wrote:


Michael, thank you for your reply, I expected you to mention
from_self :) I know about it, it is a handy trick indeed.
But I deliberately don't use it, because this way I don't know how
to refer to a column which I want to filter on.
This is due to the fact that it is calculated, i.e. there is no
table to refer to!  I might resort to using
literals ("filter('avg_1>10')"), but I'd prefer to stay in the more
ORM-style


On Saturday, April 25, 2015 at 2:37:11 UTC+5, Michael
Bayer wrote:



On 4/24/15 5:25 PM, Пайков Юрий wrote:

q = session.query(Recipe,
func.avg(Recipe.field1).over(...)).join(...)
I have a query which selects some mapped entity as well as
other columns.
I then refer to the name of that entity when working with the
result of the query:
for entry in q.all():
  recipe=entry.Recipe
  
Now, I want to add filtering by some calculated criteria to
my query, and so I wrap it in an additional query:
q = q.subquery();
q = session.query(q).filter(q.c.avg_1 > 10 )
However, this way I can no longer access entry.Recipe! Is
there a way to make sqlalchemy adapt names? I tried aliased
and select_entity_from, but no luck :(


this is getting into less reliable stuff, but instead of
subquery() -> session.query(q), use the from_self() method. 
It's designed to work this way, and your Recipe entity will be

adapted into the subquery.

I've observed that the vast majority of my users don't seem to
get into queries like these, so from_self() is not as popular
(or widely tested) as it should be, but it is at the base of a
lot of widely used functions like count() and subquery eager
loading, so give it a try:


http://docs.sqlalchemy.org/en/rel_1_0/orm/query.html?highlight=from_self#sqlalchemy.orm.query.Query.from_self



apparently it needs some documentation too :)








Re: [sqlalchemy] How can i use LIKE with an association proxy?

2015-04-27 Thread Mike Bayer



On 4/27/15 5:56 AM, Adrian wrote:
In my user have I have an association proxy so I can access all email 
addresses of the user via User.all_emails.
For a simple exact search I simply 
.filter(User.all_emails.contains('f...@example.com')).


Is it also possible to use e.g. a LIKE match (besides manually joining 
the Emails table and using Email.email.like(...))?


the has() / any()  operators can allow this:


User.all_emails.any(Email.email.like('%foo%'))


it will produce an EXISTS subquery expression, which is not as efficient 
in SQL as a regular JOIN.
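
for comparison, both spellings side by side (User / Email as in your 
question):

# EXISTS form, via the association proxy
session.query(User).filter(
    User.all_emails.any(Email.email.like('%foo%')))

# explicit JOIN form; usually cheaper on large tables
session.query(User).join(Email).filter(Email.email.like('%foo%'))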










Re: [sqlalchemy] when do I have to do a rollback so the app keep working fine

2015-04-27 Thread Mike Bayer



On 4/27/15 7:40 AM, David Allouche wrote:

On 18 Apr 2015, at 02:20, dcgh...@gmail.com wrote:

Hi everyone, I have a considerable amount of code that uses SQLAlchemy and I 
want to implement a decorator that captures the SQLAlchemy exceptions, then 
make session.rollback() and recall the decorated function, so I don't have to 
write the try except statement whenever I use SQLAlchemy.
For implementing such a decorator I need the exceptions I can certainly capture 
to make session.rollback() and the app keep working fine because there are 
exceptions that will cause an endless loop and should never be captured (e.g., 
generating the same primary key due to a bug and always raising IntegrityError)
So, can anyone tell me what are those exceptions that are safe to make 
session.rollback()?

I guess that your intent is to retry transactions that failed because of a 
serialisation error in the SERIALIZABLE isolation level.

My understanding is that, to SQLAlchemy, this is a database-specific issue. You 
can expect the SQLAlchemy exception to be an OperationalError, but you would 
need to add additional checks to specifically identify the kind error returned 
by your database driver. An argument could be made that serialisation errors 
should be wrapped in a more specific exception class by SQLAlchemy, but I do 
not believe that is the case at the moment.

I am no expert, so please someone correct me if I am wrong.


that's pretty much the current situation - OperationalError refers to 
"something went wrong with the connection" and IntegrityError means 
"something went wrong with the data the query is attempting to modify".


In Openstack we have an elaborate system of catching those exceptions we 
care about across many backends; this is probably more than you need but 
this is sort of what is needed: 
https://github.com/openstack/oslo.db/blob/master/oslo_db/sqlalchemy/exc_filters.py
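
a bare-bones sketch of the decorator described in the original post 
(which exceptions are safe to retry, and how many times, are 
application decisions):

import functools
from sqlalchemy.exc import OperationalError

def with_retry(session, attempts=3):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except OperationalError:
                    # connection-level failure: roll back, then retry
                    session.rollback()
                    if attempt == attempts - 1:
                        raise
        return wrapper
    return decorator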





Re: [sqlalchemy] How can i use LIKE with an association proxy?

2015-04-27 Thread Mike Bayer



On 4/27/15 1:52 PM, Adrian wrote:

That's the first thing I've tried. Unfortunately it doesn't work...

---> 1 
User.find_all(User.all_emails.any(UserEmail.email.like('%adrian%')))


wait, what is UserEmail, that's the association.   This would be the 
endpoint class.


Can you share all three classes and the important parts of their 
mappings please ?




/home/adrian/dev/indico/env/lib/python2.7/site-packages/sqlalchemy/ext/associationproxy.pyc 
in any(self, criterion, **kwargs)

367 """
368
--> 369 if self._value_is_scalar:
370 value_expr = getattr(
371 self.target_class, 
self.value_attr).has(criterion, **kwargs)


/home/adrian/dev/indico/env/lib/python2.7/site-packages/sqlalchemy/util/langhelpers.pyc 
in __get__(self, obj, cls)

723 if obj is None:
724 return self
--> 725 obj.__dict__[self.__name__] = result = self.fget(obj)
726 return result
727

/home/adrian/dev/indico/env/lib/python2.7/site-packages/sqlalchemy/ext/associationproxy.pyc 
in _value_is_scalar(self)

231 def _value_is_scalar(self):
232 return not self._get_property().\
--> 233 mapper.get_property(self.value_attr).uselist
234
235 @util.memoized_property

AttributeError: 'ColumnProperty' object has no attribute 'uselist'


My relationship and association proxy are defined like this:

_all_emails = db.relationship(
'UserEmail',
lazy=True,
viewonly=True,
primaryjoin='User.id == UserEmail.user_id',
collection_class=set,
backref=db.backref('user', lazy=False)
)



On Monday, April 27, 2015 at 5:28:49 PM UTC+2, Michael Bayer wrote:


the has() / any()  operators can allow this:


User.all_emails.any(Email.email.like('%foo%'))


it will produce an EXISTS subquery expression, which is not as
efficient in SQL as a regular JOIN.










Re: [sqlalchemy] before_create receives a list of tuples in the `tables` kwarg since 1.0

2015-04-27 Thread Mike Bayer



On 4/27/15 2:09 PM, Adrian wrote:
I just tried updating from 0.9.9 to 1.0.2 and noticed that this code 
is now broken ("tuple object has no attribute schema"):


def _before_create(target, connection, **kw):
schemas = {table.schema for table in kw['tables']}
for schema in schemas:
CreateSchema(schema).execute_if(callable_=_should_create_schema).execute(connection)

listen(db.Model.metadata, 'before_create', _before_create)



Is this change intentional? I couldn't find anything about it in the 
1.0 changelog.


OK, I think I should change this back because that was not intentional, 
so it's a regression, and I've added 
https://bitbucket.org/zzzeek/sqlalchemy/issue/3391/tables-collection-passed-to-before-after. 
Though I looked at the docstring and it does give me some room to just 
make changes:


" additional keyword arguments relevant to the event. The contents of 
this dictionary may vary across releases, and include the list of tables 
being generated for a metadata-level event, the checkfirst flag, and 
other elements used by internal events."


But the "tables" list here is actually there just for the purpose of 
end-user event listening so that should probably be maintained as it was.






Re: [sqlalchemy] before_create receives a list of tuples in the `tables` kwarg since 1.0

2015-04-27 Thread Mike Bayer



On 4/27/15 2:43 PM, Mike Bayer wrote:



On 4/27/15 2:09 PM, Adrian wrote:
I just tried updating from 0.9.9 to 1.0.2 and noticed that this code 
is now broken ("tuple object has no attribute schema"):


def _before_create(target, connection, **kw):
schemas = {table.schema for table in kw['tables']}
for schema in schemas:
CreateSchema(schema).execute_if(callable_=_should_create_schema).execute(connection)

listen(db.Model.metadata, 'before_create', _before_create)



Is this change intentional? I couldn't find anything about it in the 
1.0 changelog.


OK, I think I should change this back because that was not 
intentional, so it's a regression, and I've added 
https://bitbucket.org/zzzeek/sqlalchemy/issue/3391/tables-collection-passed-to-before-after. 
Though I looked at the docstring and it does give me some room to just 
make changes:


" additional keyword arguments relevant to the event. The contents of 
this dictionary may vary across releases, and include the list of 
tables being generated for a metadata-level event, the checkfirst 
flag, and other elements used by internal events."


But the "tables" list here is actually there just for the purpose of 
end-user event listening so that should probably be maintained as it was.


that is now fixed in 
https://bitbucket.org/zzzeek/sqlalchemy/commits/e25ef01fbb70











Re: [sqlalchemy] polymorphic objects

2015-04-27 Thread Mike Bayer



On 4/27/15 4:31 PM, Richard Gerd Kuesters wrote:
well, i'm having trouble dealing with polymorphic objects. i mean, the 
functionality is fine, i just don't know how to obtain the "main" object.


let me be clear: i have A, which is my main object, and is inherited 
by B and C. I would like to work with the A object, even though it's 
polymorphic identity refers to B or C. I know it's not the default 
behavior, but can I obtain A from a query?


perhaps you could be more specific.

If a query returns B and C objects, if those inherit from A, they *are* 
A objects.
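
e.g. with the usual polymorphic setup (sketch; assumes A is the mapped 
base of B and C):

for obj in session.query(A):
    # each row comes back as its most specific class...
    print(type(obj))           # B or C
    # ...but every instance is also an A
    assert isinstance(obj, A)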






thanks a lot!
richard.




Re: [sqlalchemy] Automap not reflecting tables in Postgres schemas

2015-04-27 Thread Mike Bayer



On 4/27/15 4:29 PM, Sam Zhang wrote:

Hello,

I'm following the documentation for reflecting database tables using 
`automap`: 
http://docs.sqlalchemy.org/en/latest/orm/extensions/automap.html#generating-mappings-from-an-existing-metadata.


When I don't specify a schema, and Postgres uses the default `public` 
schema, this works as expected, and I find the names of my tables:


>>> m = MetaData()
>>> b = automap_base(bind=engine, metadata=m)
>>> b.prepare(engine, reflect=True)
>>> b.classes.keys()
['ads', 'spatial_ref_sys', 'income']

But when I specify an explicit schema, I don't have access to the 
tables in `Base.classes` anymore.


>>> m = MetaData(schema='geography')
>>> b = automap_base(bind=engine, metadata=m)
>>> b.prepare(engine, reflect=True)
>>> b.classes.keys()
[]

The MetaData reflected correctly though:

>>> b.metadata.tables
immutabledict({'geography.usa_cbsa_centroids':
Table('usa_cbsa_centroids',
MetaData(bind=Engine(postgresql://asteroids:***@localhost:5432/asteroids)),
Column('GEOID', VARCHAR(length=5), table=<usa_cbsa_centroids>,
nullable=False), ...})

Note that the tables and columns are only known at runtime.
Here's a demo that works for me.  Does it work for you?  Do all your 
tables have primary keys defined?



from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, MetaData


engine = create_engine("postgresql://scott:tiger@localhost/test", echo=True)
engine.execute("""
create table if not exists test_schema.user (
id serial primary key, name varchar(30)
)
""")
engine.execute("""
create table if not exists test_schema.address (
id serial primary key,
email_address varchar(30),
user_id integer references test_schema.user(id)
)
""")

m = MetaData(schema="test_schema")

Base = automap_base(bind=engine, metadata=m)

# reflect the tables
Base.prepare(engine, reflect=True)

assert Base.classes.keys() == ['user', 'address']

User = Base.classes.user
Address = Base.classes.address


session = Session(engine)

session.add(Address(email_address="f...@bar.com", user=User(name="foo")))
session.commit()

u1 = session.query(User).first()
print(u1.address_collection)








Any thoughts?

This is duplicated 
from http://stackoverflow.com/questions/29905160/automap-reflect-tables-within-a-postgres-schema-with-sqlalchemy, 
feel free to answer there as well.


Thanks,
Sam




Re: [sqlalchemy] Automap not reflecting tables in Postgres schemas

2015-04-28 Thread Mike Bayer



On 4/28/15 3:02 PM, Sam Zhang wrote:
Thanks Michael! it was the lack of a primary key. I see references to 
it now that I know what to look for
- a very interesting 
explanation: http://docs.sqlalchemy.org/en/latest/faq/ormconfiguration.html#how-do-i-map-a-table-that-has-no-primary-key

- 
http://stackoverflow.com/questions/23765681/sqlalchemy-automap-does-not-create-class-for-tables-without-primary-key

It looks like there's no mention of this requirement in the automap 
documentation page 
though: http://docs.sqlalchemy.org/en/latest/orm/extensions/automap.html. 
I'd be happy to add a brief note about it and submit a pull request if 
you'd like.


sure thing!




Sam

On Monday, April 27, 2015 at 6:54:13 PM UTC-4, Michael Bayer wrote:



On 4/27/15 4:29 PM, Sam Zhang wrote:

Hello,

I'm following the documentation for reflecting database tables
using `automap`:

http://docs.sqlalchemy.org/en/latest/orm/extensions/automap.html#generating-mappings-from-an-existing-metadata

.

When I don't specify a schema, and Postgres uses the
`public` schema, this works as expected, and I find the names of
my tables:

>>> m = MetaData()
>>> b = automap_base(bind=engine, metadata=m)
>>> b.prepare(engine, reflect=True)
>>> b.classes.keys()
['ads', 'spatial_ref_sys', 'income']

But when I specify an explicit schema, I don't have access to
the tables in `Base.classes` anymore.

>>> m = MetaData(schema='geography')
>>> b = automap_base(bind=engine, metadata=m)
>>> b.prepare(engine, reflect=True)
>>> b.classes.keys()
[]

The MetaData reflected correctly though:

>>> b.metadata.tables
immutabledict({'geography.usa_cbsa_centroids':
Table('usa_cbsa_centroids',
MetaData(bind=Engine(postgresql://asteroids:***@localhost:5432/asteroids)),
Column('GEOID', VARCHAR(length=5), table=<usa_cbsa_centroids>,
nullable=False), ...})

Note that the tables and columns are only known at runtime.

Here's a demo that works for me.  Does it work for you?  Do all
your tables have primary keys defined?


from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, MetaData


engine = create_engine("postgresql://scott:tiger@localhost/test",
echo=True)
engine.execute("""
create table if not exists test_schema.user (
id serial primary key, name varchar(30)
)
""")
engine.execute("""
create table if not exists test_schema.address (
id serial primary key,
email_address varchar(30),
user_id integer references test_schema.user(id)
)
""")

m = MetaData(schema="test_schema")

Base = automap_base(bind=engine, metadata=m)

# reflect the tables
Base.prepare(engine, reflect=True)

assert Base.classes.keys() == ['user', 'address']

User = Base.classes.user
Address = Base.classes.address


session = Session(engine)

session.add(Address(email_address="f...@bar.com" ,
user=User(name="foo")))
session.commit()

u1 = session.query(User).first()
print(u1.address_collection)








Any thoughts?

This is duplicated from

http://stackoverflow.com/questions/29905160/automap-reflect-tables-within-a-postgres-schema-with-sqlalchemy

,
feel free to answer there as well.

Thanks,
Sam





Re: [sqlalchemy] How can i use LIKE with an association proxy?

2015-04-28 Thread Mike Bayer


First off, this is unusual because most people would just use the 
primary attribute; the association proxy isn't buying you anything here:


s.query(User).filter(User._all_emails.any(UserEmail.email.like('foo')))

vs.

s.query(User).filter(User.all_emails.any(UserEmail.email.like('foo')))

same amount of typing!

but anyway, sure it's a bug, this is 
https://bitbucket.org/zzzeek/sqlalchemy/issue/3397/association-proxy-any-on-o2m-non-object 
fixed in 4f6e9ccae93b9c50298b04135.




On 4/28/15 1:26 PM, Adrian wrote:
Ugh, somehow my reply sent by email never arrived here... here's my 
code: https://gist.github.com/ThiefMaster/40cd1f91e2a792150496





Re: [sqlalchemy] [ORM] Fetching columns that were assigned a SQL expression as part of INSERT...RETURNING and UPDATE...RETURNING

2015-04-28 Thread Mike Bayer

use the eager_defaults flag:


http://docs.sqlalchemy.org/en/rel_1_0/orm/mapping_api.html?highlight=eager_defaults#sqlalchemy.orm.mapper.params.eager_defaults

http://docs.sqlalchemy.org/en/rel_0_9/changelog/migration_09.html#orm-can-efficiently-fetch-just-generated-insert-update-defaults-using-returning


not sure if this works for the ad-hoc assignment to updated_at; try it 
out.  if it's a server_default on the Column, it should definitely work.
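
the server_default variant, as a sketch:

from sqlalchemy import Column, DateTime, Integer, func
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Foo(Base):
    __tablename__ = 'foo'
    __mapper_args__ = {'eager_defaults': True}

    id = Column(Integer, primary_key=True)
    # server-side default; with eager_defaults, the flush fetches the
    # generated value via INSERT ... RETURNING on PostgreSQL
    updated_at = Column(DateTime, server_default=func.now())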





On 4/28/15 7:35 PM, univerio wrote:

Suppose I have a PostgreSQL backend and I have the following class:

class Foo(Base):
id = Column(Integer, primary_key=True)
updated_at = Column(DateTime)

and I do

foo = Foo(updated_at=func.now())
session.add(foo)
session.flush()
foo.id  # this is already loaded, no additional query emitted
foo.updated_at  # this is not loaded, will cause an additional 
query to be emitted


Is it possible to have the SQLAlchemy ORM fetch the actual value of 
updated_at as part of the INSERT...RETURNING statement like it does 
for id, instead of leaving it unloaded and having to issue a second 
query when I access it?



Jack




Re: [sqlalchemy] Why is an association object mark as "dirty" instead of "deleted" when removed?

2015-04-28 Thread Mike Bayer



On 4/28/15 6:57 PM, st...@canary.md wrote:

Hi,

Background information: I am trying to implement functionality similar 
to the history_meta.py example 
(http://docs.sqlalchemy.org/en/rel_0_9/_modules/examples/versioned_history/history_meta.html). 
I am listening for after_flush events and create an audit record and 
am having problems with association objects. Here is an example:


class User(Auditable, Base, ComparableEntity):
    __tablename__ = 'usertable'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    keywords = association_proxy('assocs', 'keyword')

class Keyword(Auditable, Base, ComparableEntity):
    __tablename__ = 'keywordtable'
    id = Column(Integer, primary_key=True)
    word = Column(String)

class UserKeyword(Auditable, Base, ComparableEntity):
    __tablename__ = 'userkeywordtable'
    user_id = Column(Integer, ForeignKey("usertable.id"),
                     primary_key=True)
    keyword_id = Column(Integer, ForeignKey("keywordtable.id"),
                        primary_key=True)
    user = relationship(User,
                        backref=backref("assocs",
                                        cascade="all, delete-orphan"))
    keyword = relationship(Keyword)

    def __init__(self, keyword=None, user=None):
        self.user = user
        self.keyword = keyword


apple = Keyword(word='apple')
pear = Keyword(word='pear')
bob = User(name='bob')
bob.keywords = [apple, pear]
sess.add(bob)
sess.commit()

bob.keywords.remove(apple)   <== this is what my question is about
sess.commit()

When we remove the keyword, it marks the UserKeyword association
object as "dirty" instead of "deleted". Why is that? Since the row is
being removed, I would expect it to be marked as "deleted", so that I
could make an audit record indicating it was deleted.


does the row actually get deleted?  the calculation of "orphan" isn't 
done until flush time, because theoretically you could be associating 
the UserKeyword to another User.


it doesn't look like the versioned rows recipe has support for this use 
case right now.  You could force the up-front delete using a "remove" 
attribute event on that collection.
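
A minimal sketch of that suggestion, assuming the mapping above (the
listener name is hypothetical): a "remove" attribute event on User.assocs
that deletes the association object immediately, so it shows up in
session.deleted rather than session.dirty at flush time.

from sqlalchemy import event
from sqlalchemy.orm import object_session

@event.listens_for(User.assocs, "remove")
def _delete_removed_assoc(user, user_keyword, initiator):
    # Force the DELETE decision up front instead of waiting for the
    # flush-time "orphan" calculation.
    sess = object_session(user_keyword)
    if sess is not None:
        sess.delete(user_keyword)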





Re: [sqlalchemy] Automap not reflecting tables in Postgres schemas

2015-04-28 Thread Mike Bayer



On 4/28/15 10:08 PM, Sam Zhang wrote:
Just hit the issue tracker with this, and the two snags I encountered 
so far doing this. Hopefully it's not because I missed some glaring 
instructions about how to build the documentation?
there are no instructions right now.  it's a sphinx build, plus the things
that are in requirements.txt.   the themes and plugins are all extremely
custom, so you can't modify the themes or anything like that.





On Tuesday, April 28, 2015 at 3:46:11 PM UTC-4, Michael Bayer wrote:



On 4/28/15 3:02 PM, Sam Zhang wrote:

Thanks Michael! It was the lack of a primary key. I see
references to it now that I know what to look for:

- a very interesting explanation:
  http://docs.sqlalchemy.org/en/latest/faq/ormconfiguration.html#how-do-i-map-a-table-that-has-no-primary-key

- http://stackoverflow.com/questions/23765681/sqlalchemy-automap-does-not-create-class-for-tables-without-primary-key
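
A minimal sketch of that requirement (hypothetical table names, with an
in-memory SQLite database just for illustration): automap silently skips
any reflected table it cannot map, and a table with no primary key cannot
be mapped, so it never appears in Base.classes.

from sqlalchemy import create_engine
from sqlalchemy.ext.automap import automap_base

engine = create_engine("sqlite://")
engine.execute(
    "create table with_pk (id integer primary key, name varchar(30))")
engine.execute("create table without_pk (name varchar(30))")

Base = automap_base()
Base.prepare(engine, reflect=True)

assert "with_pk" in Base.classes
assert "without_pk" not in Base.classes  # skipped: no primary key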



It looks like there's no mention of this requirement in the
automap documentation page though:
http://docs.sqlalchemy.org/en/latest/orm/extensions/automap.html.
I'd be happy to add a brief note about it and submit a pull
request if you'd like.


sure thing!




Sam

On Monday, April 27, 2015 at 6:54:13 PM UTC-4, Michael Bayer wrote:



On 4/27/15 4:29 PM, Sam Zhang wrote:

Hello,

I'm following the documentation for reflecting database
tables using `automap`:

http://docs.sqlalchemy.org/en/latest/orm/extensions/automap.html#generating-mappings-from-an-existing-metadata

.

When I don't specific a schema, and Postgres uses the
default `public` schema, this works as expected, and I find
the names of my tables:

>>> m = MetaData()
>>> b = automap_base(bind=engine, metadata=m)
>>> b.prepare(engine, reflect=True)
>>> b.classes.keys()
['ads', 'spatial_ref_sys', 'income']

But when I specific an explicit schema, I don't have access
to the tables in `Base.classes` anymore.

>>> m = MetaData(schema='geography')
>>> b = automap_base(bind=engine, metadata=m)
>>> b.prepare(engine, reflect=True)
>>> b.classes.keys()
[]

The MetaData reflected correctly though:

>>> b.metadata.tables
immutabledict({'geography.usa_cbsa_centroids':
Table('usa_cbsa_centroids',
MetaData(bind=Engine(postgresql://asteroids:***@localhost:5432/asteroids)),
Column('GEOID', VARCHAR(length=5), table=<usa_cbsa_centroids>,
nullable=False), ...})

Note that the tables and columns are only known at runtime.

Here's a demo that works for me.  Does it work for you?  Do
all your tables have primary keys defined?


from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, MetaData


engine = create_engine(
    "postgresql://scott:tiger@localhost/test",
    echo=True)
engine.execute("""
    create table if not exists test_schema.user (
        id serial primary key, name varchar(30)
    )
""")
engine.execute("""
    create table if not exists test_schema.address (
        id serial primary key,
        email_address varchar(30),
        user_id integer references test_schema.user(id)
    )
""")

m = MetaData(schema="test_schema")

Base = automap_base(bind=engine, metadata=m)

# reflect the tables
Base.prepare(engine, reflect=True)

assert Base.classes.keys() == ['user', 'address']

User = Base.classes.user
Address = Base.classes.address


session = Session(engine)

session.add(Address(email_address="f...@bar.com",
user=User(name="foo")))
session.commit()

u1 = session.query(User).first()
print(u1.address_collection)








Any thoughts?

This is duplicated from

http://stackoverflow.com/questions/29905160/automap-reflect-tables-within-a-postgres-schema-with-sqlalchemy,
feel free to answer there as well.

Thanks,
Sam

Re: [sqlalchemy] column_descriptions on recursive query: AttributeError: 'CTE' object has no attribute 'entity'

2015-04-29 Thread Mike Bayer



On 4/29/15 5:16 AM, g wrote:

Hi all
I have a query like this:

from sqlalchemy.orm import aliased

class Part(Base):
    __tablename__ = 'part'
    part = Column(String, primary_key=True)
    sub_part = Column(String, primary_key=True)
    quantity = Column(Integer)

included_parts = session.query(
    Part.sub_part,
    Part.part,
    Part.quantity).\
    filter(Part.part == "our part").\
    cte(name="included_parts", recursive=True)

incl_alias = aliased(included_parts, name="pr")
parts_alias = aliased(Part, name="p")
included_parts = included_parts.union_all(
    session.query(
        parts_alias.sub_part,
        parts_alias.part,
        parts_alias.quantity).\
    filter(parts_alias.part == incl_alias.c.sub_part)
)

q = session.query(
    included_parts.c.sub_part,
    func.sum(included_parts.c.quantity).
        label('total_quantity')
    ).\
    group_by(included_parts.c.sub_part)

in sqlalchemy 1.0, we call q.column_descriptions to get the query columns,
and the result is:

AttributeError: 'CTE' object has no attribute 'entity'


hello -

I cannot reproduce your error, either by calling upon 
q.column_descriptions or by actually invoking the query against a 
database, using either release 1.0.2 or current master.


Can you please:

1. provide a real stack trace?

2. run the sample below and confirm that it works, then modify it to 
show your error.  thanks!



from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import aliased

Base = declarative_base()


class Part(Base):
    __tablename__ = 'part'
    part = Column(String, primary_key=True)
    sub_part = Column(String, primary_key=True)
    quantity = Column(Integer)

e = create_engine("postgresql://scott:tiger@localhost/test", echo=True)
Base.metadata.create_all(e)

session = Session(e)

included_parts = session.query(
    Part.sub_part,
    Part.part,
    Part.quantity).\
    filter(Part.part == "our part").\
    cte(name="included_parts", recursive=True)

incl_alias = aliased(included_parts, name="pr")
parts_alias = aliased(Part, name="p")
included_parts = included_parts.union_all(
    session.query(
        parts_alias.sub_part,
        parts_alias.part,
        parts_alias.quantity).\
    filter(parts_alias.part == incl_alias.c.sub_part)
)


q = session.query(
    included_parts.c.sub_part,
    func.sum(included_parts.c.quantity).
        label('total_quantity')
    ).\
    group_by(included_parts.c.sub_part)

q.all()

print q.column_descriptions






Note: this was working with release 0.9.9.

Any idea how to fix it?

Cheers, g


Re: [sqlalchemy] SA 1.0.2 throwing TypeError: Boolean value of this clause is not defined

2015-04-29 Thread Mike Bayer



On 4/29/15 9:25 AM, Bill Schindler wrote:
This error is being thrown on code that worked with 0.9.8. It seems to 
be checking a comparison on something, but I can't figure out which 
"this clause" the exception is referring to. Here's the stripped-down 
code leading up to the commit:


ancient = utcnow() - timedelta(hours=8)
ancient_conn = (
    (LiveSession.isconnected) &
    (LiveSession.connected < ancient))
for conn in session.query(LiveSession).filter(ancient_conn):
    conn.isconnected = False
    conn.disconnected = func.now()
session.commit()

I've tried it without the loop and it fails about 75% of the time with
the same traceback. I'm also getting this exception on another ORM
object that has a string column and four timestamp columns (and isn't
updated in a loop). A test that updates that object also gives me the
same exception about 75% of the time on flush.


Neither ORM object has any relationships or anything other than 
straight column definitions.


  File "/opt/certwise-lcs/eggs/lcs.web.events-1.0.0-py2.7.egg/lcs/web/events/utility.py", line 296, in _dead
    session.commit()
  File "/opt/certwise-lcs/eggs/SQLAlchemy-1.0.2-py2.7-linux-x86_64.egg/sqlalchemy/orm/session.py", line 790, in commit
    self.transaction.commit()
  File "/opt/certwise-lcs/eggs/SQLAlchemy-1.0.2-py2.7-linux-x86_64.egg/sqlalchemy/orm/session.py", line 392, in commit
    self._prepare_impl()
  File "/opt/certwise-lcs/eggs/SQLAlchemy-1.0.2-py2.7-linux-x86_64.egg/sqlalchemy/orm/session.py", line 372, in _prepare_impl
    self.session.flush()
  File "/opt/certwise-lcs/eggs/SQLAlchemy-1.0.2-py2.7-linux-x86_64.egg/sqlalchemy/orm/session.py", line 2004, in flush
    self._flush(objects)
  File "/opt/certwise-lcs/eggs/SQLAlchemy-1.0.2-py2.7-linux-x86_64.egg/sqlalchemy/orm/session.py", line 2122, in _flush
    transaction.rollback(_capture_exception=True)
  File "/opt/certwise-lcs/eggs/SQLAlchemy-1.0.2-py2.7-linux-x86_64.egg/sqlalchemy/util/langhelpers.py", line 60, in __exit__
    compat.reraise(exc_type, exc_value, exc_tb)
  File "/opt/certwise-lcs/eggs/SQLAlchemy-1.0.2-py2.7-linux-x86_64.egg/sqlalchemy/orm/session.py", line 2086, in _flush
    flush_context.execute()
  File "/opt/certwise-lcs/eggs/SQLAlchemy-1.0.2-py2.7-linux-x86_64.egg/sqlalchemy/orm/unitofwork.py", line 373, in execute
    rec.execute(self)
  File "/opt/certwise-lcs/eggs/SQLAlchemy-1.0.2-py2.7-linux-x86_64.egg/sqlalchemy/orm/unitofwork.py", line 532, in execute
    uow
  File "/opt/certwise-lcs/eggs/SQLAlchemy-1.0.2-py2.7-linux-x86_64.egg/sqlalchemy/orm/persistence.py", line 170, in save_obj
    mapper, table, update)
  File "/opt/certwise-lcs/eggs/SQLAlchemy-1.0.2-py2.7-linux-x86_64.egg/sqlalchemy/orm/persistence.py", line 613, in _emit_update_statements
    lambda rec: (
  File "/opt/certwise-lcs/eggs/SQLAlchemy-1.0.2-py2.7-linux-x86_64.egg/sqlalchemy/orm/persistence.py", line 456, in _collect_update_commands
    value, state.committed_state[propkey]):
  File "/opt/certwise-lcs/eggs/SQLAlchemy-1.0.2-py2.7-linux-x86_64.egg/sqlalchemy/sql/elements.py", line 2726, in __bool__
    raise TypeError("Boolean value of this clause is not defined")
exceptions.TypeError: Boolean value of this clause is not defined


this error is not related to the code illustrated above; it has to do 
with an object that is present in session.dirty which has some kind of 
SQL expression clause inside of its state, but also seems related to 
using an odd kind of comparison function within a custom type, likely a 
PickleType that is using a custom "comparator" function.


I can create this stack trace exactly.  But I have to do something
really weird to make it happen.   It doesn't produce the failure in
0.9.9, so it is a regression.  But I'm super curious if you have
something in your mapping that looks like this:


from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

def comparator(a, b):
    return a > b


class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)
    data = Column(PickleType(comparator=comparator))


e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)

s = Session(e)
s.add(A(data='some data'))
s.commit()

a1 = s.query(A).first()
a1.data = func.foo("im a SQL expression")
s.commit()


are you using PickleType with a custom comparator that doesn't come
down to using "==", and are you using a SQL expression as the value to
be persisted?
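
If that's the case, a possible workaround, as a hedged sketch (this guard
is a suggestion for illustration, not a confirmed fix from the thread):
make the comparator fall back to an identity check whenever either side is
a SQL expression, so it never returns a clause whose boolean value is
undefined.

from sqlalchemy.sql.elements import ClauseElement

def safe_comparator(a, b):
    # Comparing a SQL expression would just build another expression,
    # and bool() on that raises TypeError; compare by identity instead.
    if isinstance(a, ClauseElement) or isinstance(b, ClauseElement):
        return a is b
    return a == b  # or whatever domain comparison the type requires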













--
Bill