Re: [sqlalchemy] quote=False for all columns in table

2017-06-30 Thread Mike Bayer
On Fri, Jun 30, 2017 at 5:10 PM, Seth P  wrote:
> I'm just accessing a vendor-provided Oracle database. The database itself is
> all upper(lower?) case (I'm new to Oracle),

case insensitive if unquoted names are used.


whereas I want my declarative
> model columns to be camelCase (and the emitted SQL to be unquoted camelCase)

once you emit the names case insensitively, Oracle will forever show
them as COLUMNNAME after that.  Your casing will be lost on the
database, so there is no reason to send over a camelCase name to Oracle
if it isn't to be quoted.

if you'd like your *Python code* to use camelCase, that is easy; name
the declarative attribute with your camelCase name (though note this
violates pep8), and ensure the column name is all lower case.

Assuming you are not using reflection or automap (which, if you are
using a pre-fab database, might be a lot less trouble), the easiest way
is to intercept the name when the column is being attached.  I know
this is 95% the same as setting the "quote" flag, but it's better to
make the column "case insensitive", i.e. set the name to all
lower case, which is what SQLAlchemy uses to determine case sensitivity
of a name, rather than unconditionally quoting.  When you have a
column named "order" or "user" or something like that, you'll be
thankful:


from sqlalchemy import Column, String, Integer, create_engine
from sqlalchemy.orm import Session
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import event

Base = declarative_base()


@event.listens_for(Column, "before_parent_attach")
def _attach_column(column, table, **kw):
    column.name = column.name.lower()


class A(Base):
    __tablename__ = 'a'
    myId = Column(Integer, primary_key=True)
    someData = Column(String)
    maxInterval = Column(Integer)


e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)

s = Session(e)

s.add(A(someData='sasefd', maxInterval=5))
s.commit()

print(s.query(A.someData, A.maxInterval).all())





> to match the vendor's documentation. I could make the column names all lower
> case, but would like to use camelCase if there's an easy way to do it. Only
> annoying thing about a custom my_column() is that I'd have to specify the
> name argument explicitly for each column, right?
>
> On Friday, June 30, 2017 at 5:05:15 PM UTC-4, Mike Bayer wrote:
>>
>> On Fri, Jun 30, 2017 at 4:31 PM, Seth P  wrote:
>> > Is there a way (when using declarative) to specify that all the columns
>> > of a
>> > table should use quote=False without specifying it explicitly for each
>> > column?
>>
>> Easiest is just to call your own my_column(...) function that sets the
>> flag as you'd like, but setting quote=False unconditionally is a bad
>> idea and you shouldn't ever have to do that.   This sounds like you
>> are communicating with some special database backend for which you'd
>> be better off building a real dialect for it.
>>
>>
>> > I've tried setting __table_args__ = { 'quote_schema': False, 'quote':
>> > False
>> > }, but that just affects the schema and table name, not the column name.
>> >
>> > --
>> > SQLAlchemy -
>> > The Python SQL Toolkit and Object Relational Mapper
>> >
>> > http://www.sqlalchemy.org/
>> >
>> > To post example code, please provide an MCVE: Minimal, Complete, and
>> > Verifiable Example. See http://stackoverflow.com/help/mcve for a full
>> > description.
>> > ---
>> > You received this message because you are subscribed to the Google
>> > Groups
>> > "sqlalchemy" group.
>> > To unsubscribe from this group and stop receiving emails from it, send
>> > an
>> > email to sqlalchemy+...@googlegroups.com.
>> > To post to this group, send email to sqlal...@googlegroups.com.
>> > Visit this group at https://groups.google.com/group/sqlalchemy.
>> > For more options, visit https://groups.google.com/d/optout.
>
> --
> SQLAlchemy -
> The Python SQL Toolkit and Object Relational Mapper
>
> http://www.sqlalchemy.org/
>
> To post example code, please provide an MCVE: Minimal, Complete, and
> Verifiable Example. See http://stackoverflow.com/help/mcve for a full
> description.
> ---
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.

Re: [sqlalchemy] quote=False for all columns in table

2017-06-30 Thread Seth P
I'm just accessing a vendor-provided Oracle database. The database itself 
is all upper(lower?) case (I'm new to Oracle), whereas I want my 
declarative model columns to be camelCase (and the emitted SQL to be 
unquoted camelCase) to match the vendor's documentation. I could make the 
column names all lower case, but would like to use camelCase if there's an 
easy way to do it. Only annoying thing about a custom my_column() is that 
I'd have to specify the name argument explicitly for each column, right?

On Friday, June 30, 2017 at 5:05:15 PM UTC-4, Mike Bayer wrote:
>
> On Fri, Jun 30, 2017 at 4:31 PM, Seth P  
> wrote: 
> > Is there a way (when using declarative) to specify that all the columns 
> of a 
> > table should use quote=False without specifying it explicitly for each 
> > column? 
>
> Easiest is just to call your own my_column(...) function that sets the 
> flag as you'd like, but setting quote=False unconditionally is a bad 
> idea and you shouldn't ever have to do that.   This sounds like you 
> are communicating with some special database backend for which you'd 
> be better off building a real dialect for it. 
>
>
> > I've tried setting __table_args__ = { 'quote_schema': False, 'quote': 
> False 
> > }, but that just affects the schema and table name, not the column name. 
> > 



Re: [sqlalchemy] quote=False for all columns in table

2017-06-30 Thread Mike Bayer
On Fri, Jun 30, 2017 at 4:31 PM, Seth P  wrote:
> Is there a way (when using declarative) to specify that all the columns of a
> table should use quote=False without specifying it explicitly for each
> column?

Easiest is just to call your own my_column(...) function that sets the
flag as you'd like, but setting quote=False unconditionally is a bad
idea and you shouldn't ever have to do that.   This sounds like you
are communicating with some special database backend for which you'd
be better off building a real dialect for it.
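
The wrapper approach could look something like this (a minimal sketch; `my_column` is just an illustrative helper name, and as said above, forcing quote=False unconditionally is discouraged):

```python
from sqlalchemy import Column, Integer, String


def my_column(*args, **kwargs):
    # Hypothetical wrapper: apply quote=False to every column by default.
    # Shown only to illustrate the idea; not recommended in general.
    kwargs.setdefault("quote", False)
    return Column(*args, **kwargs)


# Usage looks the same as a plain Column:
c = my_column("myId", Integer, primary_key=True)
```

The name is stored as a quoted_name carrying quote=False, so the compiler will never quote it.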


> I've tried setting __table_args__ = { 'quote_schema': False, 'quote': False
> }, but that just affects the schema and table name, not the column name.
>



Re: [sqlalchemy] Understanding sporadic ObjectDeletedError (in gevent)

2017-06-30 Thread Mike Bayer
On Fri, Jun 30, 2017 at 3:48 PM, Anders Steinlein  wrote:
> I see, and that makes sense. However, wouldn't that be true in regular
> threaded environments as well? Any idea why this is manifesting itself after
> we switched to gevent? Simply the fact that we're now more likely to get a
> different connection back from the pool than we were before?

it is absolutely true in a threaded environment as well.  What gevent
/ eventlet normally do is modify the interleaving of concurrent
work in ways that are much less likely to occur under threading, so it
is typical that the switch to gevent smokes out lots of race
conditions that weren't apparent before; not to mention that you
can spawn hundreds of greenlets with very little overhead.


>
>> if you need search_path set on the connection, there's two ways to go
>> about this:
>>
>> 1. ensure that happens for every checkout using a pool event or engine
>> connect event.
>
>
> Yup, I was thinking about that. A quick question before I go down this road:
> Would this cover the case where the transaction is aborted/rolled back or an
> exception occurs as well? That is, will that lead to a new checkout from the
> pool as well?

transactions being aborted or rolled back don't by themselves affect
the workflow of the connection pool.   If the operation succeeds or
fails on a connection, the connection is returned to the pool in the
same way.

The only thing which could arise is if the connection is invalidated,
due to a backend error such as the database being restarted, or another
error that the driver deems "we can't use this connection anymore".
In that case, the "connection" that you're dealing with will
transparently replace the real DBAPI connection it's using, and if you
were to continue using it in the same request, you'd also want to use
the pool .connect event to intercept this new connection.  Usually,
when a connection has a problem in the middle of a web request like
this (that is, once it's already done some SQL on the connection),
the request just raises 500 and ends, so it doesn't matter.  Only if
your web app is anticipating DB connections being dropped in the
middle of a request and is then retrying (which would be really
unusual) would this be an issue.


>
>> 2. keep the search path set at the request level, but then bind the
>> Session for that request as well:
>>
>> connection_for_my_request = engine.connect()
>> connection_for_my_request.execute("set search path")
>> session_for_my_request = Session(bind=connection_for_my_request)
>
>
> Huh, interesting approach. What would you consider the pros/cons of 1 vs 2.?
> Which approach would you be more inclined to use?

I'd likely use #2 because it is more explicit and simpler.


>
>> > Even so, if the search_path was reset to public, I would presume the row
>> > would still be found -- at least that is the case in a vanilla psql
>> > shell,
>> > as all tables are present in the public schema and inherited in each
>> > tenant's schema (thus making PostgreSQL search all inherited tables
>> > across
>> > all schemas).
>>
>> is the row in question local to a tenant or not?
>
>
> Yes, the particular row is in the tenant's schema. The public schema only
> has the table definitions, no data. So shouldn't SQLAlchemy also find it
> when the public schema is current (and the tenant's tables inherit from the
> ones in public)? Let me quickly illustrate from psql, as I'm having trouble
> understanding why SQLAlchemy wouldn't find the row if the search_path is
> indeed the underlying "issue":
>
> mm2_prod=> show search_path;
>   search_path
> 
>  "$user",public
>
> mm2_prod=> \d newsletters
>Table "public.newsletters"
> Column |Type |
> Modifiers
> ---+-+-
>  mid   | integer | not null default
> nextval('newsletters_mid_seq'::regclass)
> [... more columns and info...]
> Number of child tables: 1836 (Use \d+ to list them.)
>
> mm2_prod=> select mid from newsletters where mid = 146023;
>   mid
> 
>  146023
> (1 row)
>
> mm2_prod=> explain select mid from newsletters where mid = 146023;
>   QUERY PLAN
> --
>  Append  (cost=0.00..13373.55 rows=1837 width=4)
>->  Seq Scan on newsletters  (cost=0.00..0.00 rows=1 width=4)
>  Filter: (mid = 146023)
>->  Index Only Scan using newsletters_pkey on newsletters newsletters_1
> (cost=0.15..8.17 rows=1 width=4)
>  Index Cond: (mid = 146023)
>->  Index Only Scan using newsletters_pkey on newsletters newsletters_2
> (cost=0.28..8.29 rows=1 width=4)
>  Index Cond: (mid = 146023)
>->  Index Only Scan using newsletters_pkey on newsletters newsletters_3
> 

[sqlalchemy] quote=False for all columns in table

2017-06-30 Thread Seth P
Is there a way (when using declarative) to specify that all the columns of 
a table should use quote=False without specifying it explicitly for each 
column?
I've tried setting __table_args__ = { 'quote_schema': False, 'quote': False 
}, but that just affects the schema and table name, not the column name.



Re: [sqlalchemy] Understanding sporadic ObjectDeletedError (in gevent)

2017-06-30 Thread Anders Steinlein
On Fri, Jun 30, 2017 at 8:04 PM, Mike Bayer 
wrote:

> On Fri, Jun 30, 2017 at 11:52 AM, Anders Steinlein  wrote:
> > [...]
>
> >
> > I've set _use_threadlocal = True on the connection pool, as my thinking
> > initially was that we could be getting a different connection object back
> > from the pool after the commit. Could this still be the case? And/or the
> > search_path not being the same after the commit?
>
> that would be exactly what's happening here, there's not really a way
> guarantee getting the same connection back from two checkouts from a
> pool.  the "threadlocal" thing on the pool has to do with concurrent
> checkouts, not two checkouts/checkins in serial, and it's not a flag
> I'd recommend these days for anything.
>

I see, and that makes sense. However, wouldn't that be true in regular
threaded environments as well? Any idea why this is manifesting itself
after we switched to gevent? Simply the fact that we're now more likely to
get a different connection back from the pool than we were before?

if you need search_path set on the connection, there's two ways to go
> about this:
>
> 1. ensure that happens for every checkout using a pool event or engine
> connect event.
>

Yup, I was thinking about that. A quick question before I go down this
road: Would this cover the case where the transaction is aborted/rolled
back or an exception occurs as well? That is, will that lead to a new
checkout from the pool as well?

2. keep the search path set at the request level, but then bind the
> Session for that request as well:
>
> connection_for_my_request = engine.connect()
> connection_for_my_request.execute("set search path")
> session_for_my_request = Session(bind=connection_for_my_request)
>

Huh, interesting approach. What would you consider the pros/cons of 1 vs
2.? Which approach would you be more inclined to use?

> Even so, if the search_path was reset to public, I would presume the row
> > would still be found -- at least that is the case in a vanilla psql
> shell,
> > as all tables are present in the public schema and inherited in each
> > tenant's schema (thus making PostgreSQL search all inherited tables
> across
> > all schemas).
>
> is the row in question local to a tenant or not?


Yes, the particular row is in the tenant's schema. The public schema only
has the table definitions, no data. So shouldn't SQLAlchemy also find it
when the public schema is current (and the tenant's tables inherit from the
ones in public)? Let me quickly illustrate from psql, as I'm having trouble
understanding why SQLAlchemy wouldn't find the row if the search_path is
indeed the underlying "issue":

mm2_prod=> show search_path;
  search_path
-----------------
 "$user",public

mm2_prod=> \d newsletters
            Table "public.newsletters"
 Column |  Type   |                        Modifiers
--------+---------+------------------------------------------------------------
 mid    | integer | not null default nextval('newsletters_mid_seq'::regclass)
[... more columns and info...]
Number of child tables: 1836 (Use \d+ to list them.)

mm2_prod=> select mid from newsletters where mid = 146023;
  mid
--------
 146023
(1 row)

mm2_prod=> explain select mid from newsletters where mid = 146023;
  QUERY PLAN
--
 Append  (cost=0.00..13373.55 rows=1837 width=4)
   ->  Seq Scan on newsletters  (cost=0.00..0.00 rows=1 width=4)
 Filter: (mid = 146023)
   ->  Index Only Scan using newsletters_pkey on newsletters newsletters_1
 (cost=0.15..8.17 rows=1 width=4)
 Index Cond: (mid = 146023)
   ->  Index Only Scan using newsletters_pkey on newsletters newsletters_2
 (cost=0.28..8.29 rows=1 width=4)
 Index Cond: (mid = 146023)
   ->  Index Only Scan using newsletters_pkey on newsletters newsletters_3
 (cost=0.14..8.16 rows=1 width=4)
 Index Cond: (mid = 146023)
[ ... etc etc for all 1837 tenant schemas, just to show what it does ...]

This particular row is in the "eliksir" schema, inherited from the public
table:

mailmojo=# set search_path = eliksir;
SET

mm2_prod=> \d newsletters
            Table "eliksir.newsletters"
 Column |  Type   |                        Modifiers
--------+---------+------------------------------------------------------------
 mid    | integer | not null default nextval('newsletters_mid_seq'::regclass)
[... more columns and info...]
Inherits: public.newsletters

mm2_prod=> select mid from mails where mid = 146023;
  mid
--------
 146023
(1 row)



> if the row here is
> in "public" and the query has no dependency on the tenant schema
> then...you need to figure out first the nature of this "object
> deleted" error, put it in a 

Re: [sqlalchemy] Understanding sporadic ObjectDeletedError (in gevent)

2017-06-30 Thread Mike Bayer
On Fri, Jun 30, 2017 at 11:52 AM, Anders Steinlein  wrote:
> After this switch, we're seeing sporadic ObjectDeletedError exceptions
> illustrated by the following simple lines of code:
>
>   @newsletters.route('/create/', methods=['GET', 'POST'])
>   @login_required
>   def create():
>       newsletter = Newsletter()
>       # db is Flask-SQLAlchemy's SQLAlchemy object instance
>       db.session.add(newsletter)
>       # Commit explicitly to ensure we have a mid to redirect to.
>       db.session.commit()
>       return redirect(url_for('.set_recipients', id=newsletter.mid))
>
> Newsletter is a regular SQLAlchemy ORM Base, where mid is a plain Integer
> column as a primary key (i.e. a serial in the database).
>
> Accessing newsletter.mid there throws an exception once in every 50-ish
> executions or some such: Instance '' has been
> deleted, or its row is otherwise not present.
>
> The database row is surely not deleted.
>
> This is a multi-tenant application, whereby pretty much the first thing we
> do in every request is db.session.execute('SET search_path TO {.username},
> public'.format(current_user)).

>
> I've set _use_threadlocal = True on the connection pool, as my thinking
> initially was that we could be getting a different connection object back
> from the pool after the commit. Could this still be the case? And/or the
> search_path not being the same after the commit?

that would be exactly what's happening here; there's not really a way
to guarantee getting the same connection back from two checkouts from
a pool.  The "threadlocal" thing on the pool has to do with concurrent
checkouts, not two checkouts/checkins in serial, and it's not a flag
I'd recommend these days for anything.

if you need search_path set on the connection, there's two ways to go
about this:

1. ensure that happens for every checkout using a pool event or engine
connect event.

2. keep the search path set at the request level, but then bind the
Session for that request as well:

connection_for_my_request = engine.connect()
connection_for_my_request.execute("set search path")
session_for_my_request = Session(bind=connection_for_my_request)
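
Option 1 above could be sketched with a pool "checkout" event like this (illustrative only; SQLite stands in for PostgreSQL, and a PRAGMA stands in for the SET search_path statement, which would come from your per-request tenant lookup):

```python
from sqlalchemy import create_engine, event, text

engine = create_engine("sqlite://")


@event.listens_for(engine, "checkout")
def set_session_state(dbapi_conn, connection_record, connection_proxy):
    # On PostgreSQL this would execute something like:
    #   SET search_path TO <tenant_schema>, public
    # using your per-request tenant lookup; a SQLite PRAGMA stands in here.
    cursor = dbapi_conn.cursor()
    cursor.execute("PRAGMA foreign_keys=ON")
    cursor.close()


# Every checkout from the pool now runs the statement first:
with engine.connect() as conn:
    flag = conn.execute(text("PRAGMA foreign_keys")).scalar()
```

This way the state is re-applied on every checkout, no matter which DBAPI connection the pool hands back.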

>
> Even so, if the search_path was reset to public, I would presume the row
> would still be found -- at least that is the case in a vanilla psql shell,
> as all tables are present in the public schema and inherited in each
> tenant's schema (thus making PostgreSQL search all inherited tables across
> all schemas).

is the row in question local to a tenant or not?   If the row here is
in "public" and the query has no dependency on the tenant schema,
then you need to figure out first the nature of this "object
deleted" error: put it in a try/except and in the except, take the
Session.connection() and interrogate it for current schema, whether or
not the row can be found, etc.  Dump it all to the log.

stack trace for the ObjectDeletedError is also important here as that
would show where it's actually happening.
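
For what it's worth, the failure mode is easy to reproduce in isolation: if the row is not visible on the connection the refresh uses (simulated here by deleting it behind the session's back, with SQLite standing in for PostgreSQL), the expired attribute access raises ObjectDeletedError:

```python
from sqlalchemy import Column, Integer, create_engine, text
from sqlalchemy.orm import Session, declarative_base
from sqlalchemy.orm.exc import ObjectDeletedError

Base = declarative_base()


class Newsletter(Base):
    __tablename__ = "newsletters"
    mid = Column(Integer, primary_key=True)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
newsletter = Newsletter()
session.add(newsletter)
session.commit()  # commit expires attributes; the next access re-SELECTs

# Simulate the row not being visible to the session (e.g. a search_path
# pointing at a schema without it): delete it on a separate connection.
with engine.begin() as conn:
    conn.execute(text("DELETE FROM newsletters"))

error = None
try:
    newsletter.mid  # triggers a refresh that finds no row
except ObjectDeletedError as err:
    error = err
```

The message is the same "has been deleted, or its row is otherwise not present" seen in the original report.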


>
> No master/slave is involved, just a single database server. The SQLAlchemy
> pool is the default QueuePool with only default options (apart from
> _use_threadlocal as mentioned above).
>
> Any guidance to figuring this out would be much appreciated. And thanks for
> an awesome library!
>
> Relevant package versions:
> SQLAlchemy==1.1.11
> Flask-SQLAlchemy==2.1
> psycopg2==2.6.2
> uWSGI==2.0.15
> gevent==1.2.2
> greenlet==0.4.12
>
>
> Best,
> Anders
>



[sqlalchemy] Understanding sporadic ObjectDeletedError (in gevent)

2017-06-30 Thread Anders Steinlein
Hi,

We're using SQLAlchemy (see below for versions) on Flask running on uWSGI
in gevent mode, with psycopg2 to PostgreSQL 9.4. We recently switched from
regular prefork+threaded to gevent mode, and with it applied all gevent
monkey patches and the psycogreen package for the psycopg2 gevent wait
callback.

After this switch, we're seeing sporadic ObjectDeletedError exceptions
illustrated by the following simple lines of code:

  @newsletters.route('/create/', methods=['GET', 'POST'])
  @login_required
  def create():
      newsletter = Newsletter()
      # db is Flask-SQLAlchemy's SQLAlchemy object instance
      db.session.add(newsletter)
      # Commit explicitly to ensure we have a mid to redirect to.
      db.session.commit()
      return redirect(url_for('.set_recipients', id=newsletter.mid))

Newsletter is a regular SQLAlchemy ORM Base, where mid is a plain Integer
column as a primary key (i.e. a serial in the database).

Accessing newsletter.mid there throws an exception once in every 50-ish
executions or some such: Instance '' has been
deleted, or its row is otherwise not present.

The database row is surely not deleted.

This is a multi-tenant application, whereby pretty much the first thing we
do in every request is db.session.execute('SET search_path TO {.username},
public'.format(current_user)).

I've set _use_threadlocal = True on the connection pool, as my thinking
initially was that we could be getting a different connection object back
from the pool after the commit. Could this still be the case? And/or the
search_path not being the same after the commit?

Even so, if the search_path was reset to public, I would presume the row
would still be found -- at least that is the case in a vanilla psql shell,
as all tables are present in the public schema and inherited in each
tenant's schema (thus making PostgreSQL search all inherited tables across
all schemas).

No master/slave is involved, just a single database server. The SQLAlchemy
pool is the default QueuePool with only default options (apart from
_use_threadlocal as mentioned above).

Any guidance to figuring this out would be much appreciated. And thanks for
an awesome library!

Relevant package versions:
SQLAlchemy==1.1.11
Flask-SQLAlchemy==2.1
psycopg2==2.6.2
uWSGI==2.0.15
gevent==1.2.2
greenlet==0.4.12


Best,
Anders



Re: [sqlalchemy] Optimize SQLAlchemy query

2017-06-30 Thread Mike Bayer
On Fri, Jun 30, 2017 at 8:56 AM, Simon King  wrote:
> Your "sql time" measurement only includes the time it takes to send
> the query to the database. I don't think it includes the time that the
> database has to evaluate the query and return the rows. To measure
> that, you ought to take the SQL string that the query produces,
> execute that string outside of the ORM, and fetch all the rows back.
> You should do that first with a bare PG connection, then with
> SQLAlchemy Core, and compare those 2 times to the time you get with
> the ORM.
>
> It's also worth noting that your query returns a lot of rows. I
> haven't run your script, but it looks like you've got 254
> Registrations, each with 88 RegistrationData items, so that'll be
> 22352 rows. For each of those rows, SA has to pick out the columns
> representing the primary keys of the Registration, RegistrationData,
> RegistrationFormFieldData and RegistrationFormItem classes, then check
> them to see if they correspond to objects that are already in the
> session. If they aren't, it has to construct those objects and link
> them together. Processing 22352 rows in 0.91 seconds doesn't seem bad
> to me.

Simon is correct here, and the majority of that time is spent
constructing Python objects from those rows.  A full description of
what you are observing, how to profile it, and a link to a full
performance "example" suite, is at:
http://docs.sqlalchemy.org/en/latest/faq/performance.html#how-can-i-profile-a-sqlalchemy-powered-application
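
As a quick way to see where the time goes, something along the lines of generic cProfile usage works (this is not the FAQ's exact recipe; `profile_call` is just an illustrative helper):

```python
import cProfile
import io
import pstats


def profile_call(fn, *args, **kwargs):
    # Run fn under cProfile; return (result, report) where report lists
    # the top 10 entries sorted by cumulative time.
    profiler = cProfile.Profile()
    profiler.enable()
    result = fn(*args, **kwargs)
    profiler.disable()
    buf = io.StringIO()
    pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(10)
    return result, buf.getvalue()


# e.g. rows, report = profile_call(session.query(Registration).all)
# Demonstrated here on a stand-in callable:
result, report = profile_call(sorted, [3, 1, 2])
```

In a report for an ORM query, time dominated by instance-construction internals (rather than the DBAPI fetch) confirms the overhead is object creation, not the SQL itself.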



>
> If you can reply with the bare PG and SA core timings, we may have a
> better idea of how to optimise your query.
>
> Simon
>



Re: [sqlalchemy] Optimize SQLAlchemy query

2017-06-30 Thread Simon King
On Fri, Jun 30, 2017 at 12:44 PM, mvidalgarcia  wrote:
> Hi, I'm performing a query to some data models but the ORM response time
> looks much higher compared to the SQL one. I have a script to reproduce the
> problem here.
> Currently getting values like
>
>> sql time: 0.068972826004
>> orm time: 0.915776014328
>
>
> and I consider there's a notable overhead in the ORM time.
>
> Do you know how I can optimize this query to reduce the overhead?
>

Your "sql time" measurement only includes the time it takes to send
the query to the database. I don't think it includes the time that the
database has to evaluate the query and return the rows. To measure
that, you ought to take the SQL string that the query produces,
execute that string outside of the ORM, and fetch all the rows back.
You should do that first with a bare PG connection, then with
SQLAlchemy Core, and compare those 2 times to the time you get with
the ORM.

It's also worth noting that your query returns a lot of rows. I
haven't run your script, but it looks like you've got 254
Registrations, each with 88 RegistrationData items, so that'll be
22352 rows. For each of those rows, SA has to pick out the columns
representing the primary keys of the Registration, RegistrationData,
RegistrationFormFieldData and RegistrationFormItem classes, then check
them to see if they correspond to objects that are already in the
session. If they aren't, it has to construct those objects and link
them together. Processing 22352 rows in 0.91 seconds doesn't seem bad
to me.
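
The Core-vs-ORM comparison described above can be sketched like so (illustrative model and row counts, SQLite in place of PostgreSQL, 1.4+-style API):

```python
import time

from sqlalchemy import Column, Integer, String, create_engine, select
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class Registration(Base):
    __tablename__ = "registrations"
    id = Column(Integer, primary_key=True)
    data = Column(String)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
session.add_all([Registration(data=str(i)) for i in range(5000)])
session.commit()

# Core: rows come back as plain tuples, no ORM object construction
t0 = time.time()
with engine.connect() as conn:
    core_rows = conn.execute(select(Registration.__table__)).fetchall()
core_time = time.time() - t0

# ORM: each row becomes a mapped object tracked by the identity map
t0 = time.time()
orm_rows = session.query(Registration).all()
orm_time = time.time() - t0

print("core: %.3fs  orm: %.3fs" % (core_time, orm_time))
```

The gap between the two timings is the object-construction overhead Simon describes; the SQL emitted is essentially the same.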

If you can reply with the bare PG and SA core timings, we may have a
better idea of how to optimise your query.

Simon



[sqlalchemy] Optimize SQLAlchemy query

2017-06-30 Thread mvidalgarcia
Hi, I'm performing a query to some data models but the ORM response time 
looks much higher compared to the SQL one. I have a script to reproduce 
the problem here.
Currently getting values like

> sql time: 0.068972826004
> orm time: 0.915776014328

and I consider there's a notable overhead in the ORM time.

Do you know how I can optimize this query to reduce the overhead?

Thanks
