On Fri, Jun 30, 2017 at 3:48 PM, Anders Steinlein <and...@e5r.no> wrote:
> I see, and that makes sense. However, wouldn't that be true in regular
> threaded environments as well? Any idea why this is manifesting itself after
> we switched to gevent? Simply the fact that we're now more likely to get a
> different connection back from the pool than we were before?

it is absolutely true in a threaded environment as well.  What gevent
/ eventlet normally do is modify the interleaving of concurrent work
in ways that are much less likely to occur under threading, so it is
typical that the switch to gevent smokes out lots of race conditions
that weren't apparent before; not to mention that you can spawn
hundreds of greenlets with very little overhead, which makes those
interleavings far more frequent.


>
>> if you need search_path set on the connection, there's two ways to go
>> about this:
>>
>> 1. ensure that happens for every checkout using a pool event or engine
>> connect event.
>
>
> Yup, I was thinking about that. A quick question before I go down this road:
> Would this cover the case where the transaction is aborted/rolled back or an
> exception occurs as well? That is, will that lead to a new checkout from the
> pool as well?

transactions being aborted or rolled back don't by themselves affect
the workflow of the connection pool.   Whether the operation succeeds
or fails on a connection, the connection is returned to the pool in
the same way.

The only thing that could arise is if the connection is invalidated,
due to a backend error such as the database being restarted, or some
other error that the driver deems "we can't use this connection
anymore".  In that case, the "connection" you're dealing with will
transparently replace the real DBAPI connection it's using, and if you
were to continue using it in the same request, you'd also want to use
the pool .connect event to intercept this new connection.    Usually,
when a connection has a problem in the middle of a web request like
this (that is, once it's already run some SQL on the connection), the
request just raises a 500 and ends, so it doesn't matter.  Only if
your web app anticipates DB connections being dropped in the middle of
a request and then retries (which would be really unusual) would this
be an issue.
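for reference, approach #1 might look something like the sketch below
(untested against your setup; SQLite stands in for the real database, the
tenant lookup is a plain dict standing in for whatever request-local storage
you use, and the actual PostgreSQL SET search_path statement is shown only
as a comment inside the hook):

```python
from sqlalchemy import create_engine, event

# stand-in engine; in production this would be a postgresql:// URL
engine = create_engine("sqlite://")

# hypothetical request-local storage holding the current tenant's schema
current_tenant = {"schema": "eliksir"}

seen_checkouts = []  # just to demonstrate that the hook fires per checkout


@event.listens_for(engine, "checkout")
def set_search_path(dbapi_conn, conn_record, conn_proxy):
    # On PostgreSQL you'd run the real statement here, e.g.:
    #   cur = dbapi_conn.cursor()
    #   cur.execute('SET search_path TO "%s", public' % current_tenant["schema"])
    #   cur.close()
    seen_checkouts.append(current_tenant["schema"])


# every checkout from the pool, including this one, runs the listener first
with engine.connect() as conn:
    pass
```

the "checkout" event fires every time a connection is pulled from the pool,
so this also covers the connection that transparently replaces an
invalidated one.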


>
>> 2. keep the search path set at the request level, but then bind the
>> Session for that request as well:
>>
>>     connection_for_my_request = engine.connect()
>>     connection_for_my_request.execute("set search_path....")
>>     session_for_my_request = Session(bind=connection_for_my_request)
>
>
> Huh, interesting approach. What would you consider the pros/cons of 1 vs 2.?
> Which approach would you be more inclined to use?

I'd likely use #2 because it is more explicit and simpler.
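a slightly fuller version of #2, with cleanup, might look like this sketch
(again SQLite stands in for the real database, the PostgreSQL-only SET
search_path statement is shown commented out, and the schema name is a
placeholder):

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import Session

# stand-in engine; in production this would be a postgresql:// URL
engine = create_engine("sqlite://")

connection_for_my_request = engine.connect()
# On PostgreSQL, pin the tenant's schema for the whole request:
# connection_for_my_request.execute(text("SET search_path TO eliksir, public"))

session_for_my_request = Session(bind=connection_for_my_request)
try:
    # all ORM work for the request goes through this one session /
    # connection, so the search_path can't change out from under it
    result = session_for_my_request.execute(text("select 1")).scalar()
    session_for_my_request.commit()
finally:
    session_for_my_request.close()
    connection_for_my_request.close()
```

since the Session is bound to that single Connection, it never checks out a
different connection from the pool mid-request.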


>
>> > Even so, if the search_path was reset to public, I would presume the row
>> > would still be found -- at least that is the case in a vanilla psql
>> > shell,
>> > as all tables are present in the public schema and inherited in each
>> > tenant's schema (thus making PostgreSQL search all inherited tables
>> > across
>> > all schemas).
>>
>> is the row in question local to a tenant or not?
>
>
> Yes, the particular row is in the tenant's schema. The public schema only
> has the table definitions, no data. So shouldn't SQLAlchemy also find it
> when the public schema is current (and the tenant's tables inherit from the
> ones in public)? Let me quickly illustrate from psql, as I'm having trouble
> understanding why SQLAlchemy wouldn't find the row if the search_path is
> indeed the underlying "issue":
>
> mm2_prod=> show search_path;
>   search_path
> ----------------
>  "$user",public
>
> mm2_prod=> \d newsletters
>                                        Table "public.newsletters"
>     Column     |            Type             |
> Modifiers
> ---------------+-----------------------------+-----------------------------------------------------
>  mid           | integer                     | not null default
> nextval('newsletters_mid_seq'::regclass)
> [... more columns and info...]
> Number of child tables: 1836 (Use \d+ to list them.)
>
> mm2_prod=> select mid from newsletters where mid = 146023;
>   mid
> --------
>  146023
> (1 row)
>
> mm2_prod=> explain select mid from newsletters where mid = 146023;
>                                           QUERY PLAN
> ----------------------------------------------------------------------------------------------
>  Append  (cost=0.00..13373.55 rows=1837 width=4)
>    ->  Seq Scan on newsletters  (cost=0.00..0.00 rows=1 width=4)
>          Filter: (mid = 146023)
>    ->  Index Only Scan using newsletters_pkey on newsletters newsletters_1
> (cost=0.15..8.17 rows=1 width=4)
>          Index Cond: (mid = 146023)
>    ->  Index Only Scan using newsletters_pkey on newsletters newsletters_2
> (cost=0.28..8.29 rows=1 width=4)
>          Index Cond: (mid = 146023)
>    ->  Index Only Scan using newsletters_pkey on newsletters newsletters_3
> (cost=0.14..8.16 rows=1 width=4)
>          Index Cond: (mid = 146023)
> [ ... etc etc for all 1837 tenant schemas, just to show what it does ...]
>
> This particular row is in the "eliksir" schema, inherited from the public
> table:
>
> mailmojo=# set search_path = eliksir;
> SET
>
> mm2_prod=> \d newsletters
>                                        Table "eliksir.newsletters"
>     Column     |            Type             |
> Modifiers
> ---------------+-----------------------------+-----------------------------------------------------
>  mid           | integer                     | not null default
> nextval('newsletters_mid_seq'::regclass)
> [... more columns and info...]
> Inherits: public.newsletters
>
> mm2_prod=> select mid from mails where mid = 146023;
>   mid
> --------
>  146023
> (1 row)
>
>
>>
>> if the row here is
>> in "public" and the query has no dependency on the tenant schema
>> then...you need to figure out first the nature of this "object
>> deleted" error, put it in a try/except and in the except, take the
>> Session.connection() and interrogate it for current schema, whether or
>> not the row can be found, etc.   dump it all to the log.
>>
>> stack trace for the ObjectDeletedError is also important here as that
>> would show where it's actually happening.
>
>
> Yes, adding more logging will be next. But I want to try to understand what
> is going on as well. :) It's very helpful to get some pointers and input
> here though, thanks yet again!
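a minimal sketch of that interrogation helper, in case it's useful; the
function name is made up, and the probe statement defaults to PostgreSQL's
"show search_path" but is parameterized so the sketch isn't tied to one
backend:

```python
import logging

from sqlalchemy import create_engine, text
from sqlalchemy.orm import Session

log = logging.getLogger("tenant-debug")


def interrogate_connection(session, probe_sql="show search_path"):
    """Log and return the state of the session's current connection.

    Call this inside an ``except ObjectDeletedError:`` block and then
    re-raise, so the log shows which search_path the failing request
    actually had on its connection.
    """
    conn = session.connection()
    value = conn.execute(text(probe_sql)).scalar()
    log.error("current connection state: %r", value)
    return value
```

usage would be along the lines of: try the attribute access, and in the
except block call interrogate_connection(session) before re-raising, so the
stack trace and the connection's search_path land in the same log entry.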
>
>
> Best,
> Anders
>
> --
> SQLAlchemy -
> The Python SQL Toolkit and Object Relational Mapper
>
> http://www.sqlalchemy.org/
>
> To post example code, please provide an MCVE: Minimal, Complete, and
> Verifiable Example. See http://stackoverflow.com/help/mcve for a full
> description.
> ---
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.
