Re: [sqlalchemy] Re: Pool connections high-water, session query counter?

2017-09-01 Thread Mike Bayer
doesn't sound like a lock (MySQL is a very un-lockish database),
sounds like you have a query that occasionally returns a huge number
of rows, such as from a broken JOIN condition or similar.  The
occasional nature without any errors, as well as that the operation
you say should only be doing reads, suggests it is data and query time
related.

I'd use fine-grained logging with thread/process ids to identify slow
requests and potentially which kinds of operations are hitting the
issue.



On Fri, Sep 1, 2017 at 4:43 PM, YKdvd  wrote:
> Turned on the slow query log, doesn't seem to be that.  The fact that it
> isn't consistently bad for similar hits, and also seems to need some load to
> be exposed, makes me think there is some contention while inside the request
> in Python, including lazy loads.  I have been able to get some timings on a
> somewhat (but not hugely) slow hit on one of the specific routes while
> running the Flask app in standalone debug mode, and it seems to indicate
> that the time is being taken inside Python, no problem with a lack of Apache
> or WSGI threads causing the delay, and before doing any rendering of the
> HTML template for return.  It certainly seems to be related to DB stuff,
> rather than any CPU/Python issues.
>
> I'm willing to believe that this is something that was present in our old
> MySQL 5.5 setup at a lower frequency or shorter delays, but I'm not sure
> what in the new setup would have caused a sudden exacerbation of the
> problem.  The old 5.5 MySQL had a query cache of either 8M or 16M, the new
> 5.7 MySQL started with a 64M query_cache.  I've now disabled the
> query_cache, and while overall performance seems a little better, the odd
> spikes are still there.  All tables of interest should be InnoDB, and
> there's a 2GB innodb_buffer_pool_size, which should be well in excess of
> actual data.
>
> The specific request route I've been trying to look at should only be doing
> reads, no writes or updates, and I wouldn't think any of the little
> one-to-one relationships lazyloads it might indirectly trigger would be
> locked by a different thread, certainly not for appreciable time.  I'm going
> to try debugging with the pool's "timeout" set to a very low value (it
> defaults to 30 seconds) to see if it ever hits max_overflow and fails to
> instantly get a connection when needed, but our WSGI processes have 10
> threads and the pool shouldn't hit this until 5+10=15 connections are
> active, and unless I'm missing something I should not be using more than one
> connection per thread - this was why I was checking for a built-in
> high-water counter for pool connections.
>
> I'll keep checking my understanding of MySQL and SQLAlchemy for anything
> that could be holding a row locked that I'm not realizing, or something
> similar.  SQLAlchemy isn't throwing any exceptions related to deadlocks,
> connections exhausted, etc., as these would bubble up and abort the Flask
> request.
>
>
> On Friday, September 1, 2017 at 4:51:08 PM UTC-3, Jonathan Vanasco wrote:
>>
>> what do your database and server logs show?  are there slow query
>> timeouts, deadlocks, etc?  are these happening during peak load? 10-30
>> seconds looks more like an issue with competing locks or available workers,
>> than potentially pool checkouts or lazy loading.
>>
>> aside from db logs, i've found statsd very useful for debugging these
>> sorts of things.  we log both requests count/timing and hundreds of points
>> in our codebase as well.
>

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Re: Pool connections high-water, session query counter?

2017-09-01 Thread YKdvd
Turned on the slow query log, doesn't seem to be that.  The fact that it 
isn't consistently bad for similar hits, and also seems to need some load 
to be exposed, makes me think there is some contention while inside the 
request in Python, including lazy loads.  I have been able to get some 
timings on a somewhat (but not hugely) slow hit on one of the specific 
routes while running the Flask app in standalone debug mode, and it seems 
to indicate that the time is being taken inside Python, no problem with a 
lack of Apache or WSGI threads causing the delay, and before doing any 
rendering of the HTML template for return.  It certainly seems to be 
related to DB stuff, rather than any CPU/Python issues.

I'm willing to believe that this is something that was present in our old 
MySQL 5.5 setup at a lower frequency or shorter delays, but I'm not sure 
what in the new setup would have caused a sudden exacerbation of the 
problem.  The old 5.5 MySQL had a query cache of either 8M or 16M, the new 
5.7 MySQL started with a 64M query_cache.  I've now disabled the 
query_cache, and while overall performance seems a little better, the odd 
spikes are still there.  All tables of interest should be InnoDB, and 
there's a 2GB innodb_buffer_pool_size, which should be well in excess of 
actual data.

The specific request route I've been trying to look at should only be doing 
reads, no writes or updates, and I wouldn't think any of the little 
one-to-one relationships lazyloads it might indirectly trigger would be 
locked by a different thread, certainly not for appreciable time.  I'm 
going to try debugging with the pool's "timeout" set to a very low value 
(it defaults to 30 seconds) to see if it ever hits max_overflow and fails 
to instantly get a connection when needed, but our WSGI processes have 10 
threads and the pool shouldn't hit this until 5+10=15 connections are 
active, and unless I'm missing something I should not be using more than 
one connection per thread - this was why I was checking for a built-in 
high-water counter for pool connections.
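A sketch of that experiment (SQLite standing in for MySQL, with the pool shrunk to one connection so exhaustion is easy to force): once pool_size + max_overflow connections are all checked out, a low pool_timeout turns the silent wait into a fast TimeoutError.

```python
from sqlalchemy import create_engine
from sqlalchemy.exc import TimeoutError as PoolTimeoutError
from sqlalchemy.pool import QueuePool

# SQLite stands in for the MySQL URL from the thread; the pool behavior
# being demonstrated is the same either way
engine = create_engine(
    "sqlite://", poolclass=QueuePool,
    pool_size=1, max_overflow=0,  # tiny pool so exhaustion is immediate
    pool_timeout=0.2,             # default is 30 seconds
)

held = engine.connect()           # occupies the only pooled connection
try:
    engine.connect()              # nothing free: waits 0.2s, then raises
    timed_out = False
except PoolTimeoutError:
    timed_out = True
held.close()
```

With the real pool numbers (5+10=15) the same TimeoutError would surface quickly if a request ever did hit the limit, instead of looking like a 10-30 second stall.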

I'll keep checking my understanding of MySQL and SQLAlchemy for anything 
that could be holding a row locked that I'm not realizing, or something 
similar.  SQLAlchemy isn't throwing any exceptions related to deadlocks, 
connections exhausted, etc., as these would bubble up and abort the Flask 
request.

On Friday, September 1, 2017 at 4:51:08 PM UTC-3, Jonathan Vanasco wrote:
>
> what do your database and server logs show?  are there slow query 
> timeouts, deadlocks, etc?  are these happening during peak load? 10-30 
> seconds looks more like an issue with competing locks or available workers, 
> than potentially pool checkouts or lazy loading.
>
> aside from db logs, i've found statsd very useful for debugging these 
> sorts of things.  we log both requests count/timing and hundreds of points 
> in our codebase as well.
>



Re: [sqlalchemy] several questions about connection pooling behaviour

2017-09-01 Thread Антонио Антуан
Ok, thank you for the answer, appreciate it.

On Friday, September 1, 2017 at 21:11:26 UTC+3, Mike Bayer wrote:
>
> On Fri, Sep 1, 2017 at 12:35 PM, Антонио Антуан  > wrote: 
> > Yes, I tried it and it worked fine. 
> > Another thing that I tried to understand: "Are the connections used by 
> > `session`, `session.bind`, the compiled query, and the engine the same?". 
> > `pg_stat_activity` shows me that I am right. 
> > Can you confirm this statement? 
>
> *if* you are using StaticPool and AssertionPool, and you are *not* 
> using multiple threads against it as in particular AssertionPool is 
> not really threadsafe (StaticPool is mostly threadsafe though I can 
> see it might race on the first connect), and also you haven't called 
> engine.dispose(), then there should be just the one connection hanging 
> around, yes.  DBAPI connections themselves are not generally 
> threadsafe so these pools assume single-threaded use. 
>
>
>
> > In such a case a warning about deadlocks with 
> > "reset_on_return=None" is not so horrible with StaticPool or AssertionPool. 
> > Here is code snippet: 
> > https://gist.github.com/aCLr/61896fc652a0a61a21dbe3531ed695a8 
>
> I'm not sure what you're ultimately trying to accomplish but I would 
> definitely not recommend patterns like StaticPool, 
> reset_on_return=None for anything significant.   You will leave the 
> connection hanging open in a transaction while the application is not 
> doing anything.   I don't see why any of the patterns you are 
> illustrating should ever be necessary. 
>
>
> >> 
> >> And when I specify 
> >> > that parameter with `None` value, only the first assertion failed ('failed 
> >> > on compiled query execution') because, as I think, another connection is 
> >> > used. 
> >> > My question is the same: isn't it wrong? And if it is, how can I avoid 
> >> > AssertionError on compiled query execution? Should I use the private 
> >> > `_connection_for_bind` method of Session class or there are any other 
> >> > options? 
>



[sqlalchemy] Re: Pool connections high-water, session query counter?

2017-09-01 Thread Jonathan Vanasco
what do your database and server logs show?  are there slow query timeouts, 
deadlocks, etc?  are these happening during peak load? 10-30 seconds looks 
more like an issue with competing locks or available workers, than 
potentially pool checkouts or lazy loading.

aside from db logs, i've found statsd very useful for debugging these sorts 
of things.  we log both requests count/timing and hundreds of points in our 
codebase as well.



[sqlalchemy] Pool connections high-water, session query counter?

2017-09-01 Thread YKdvd
We have an internal Flask/SQLAlchemy webapp using MySQL and served up using 
Apache and mod_wsgi.  We recently upgraded the server, going from an older 
Debian/Apache2.2/MySQL5.5/mod_wsgi 3.x, to a current Ubuntu/Apache 
2.4/MySQL5.7/mod_wsgi4.5.x.  Something seems to be causing some of our page 
requests to occasionally take 10-30 seconds where the same or similar 
requests are much quicker at a different time.  One of the things I'm 
looking at is lazy/eager loading.  It looks like from this thread 
that a Session has no built-in counter of the number of queries it has 
executed?  I'll look at adding the event-handler suggestion to get a count.
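That event-handler suggestion can be sketched like this (SQLite for illustration): a `before_cursor_execute` listener bumps a counter each time a statement is handed to the DBAPI cursor.

```python
from sqlalchemy import create_engine, event, text

engine = create_engine("sqlite://")
query_count = {"n": 0}

@event.listens_for(engine, "before_cursor_execute")
def count_queries(conn, cursor, statement, parameters, context, executemany):
    # fires once per statement sent to the DBAPI cursor, so lazy loads
    # triggered during a request are counted too
    query_count["n"] += 1

with engine.connect() as conn:
    conn.execute(text("select 1"))
    conn.execute(text("select 2"))
```

In a Flask app the counter could be reset at the start of each request and logged at the end to spot lazy-load-heavy routes.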

Also, I'm curious if the QueuePool for the engines in use are ever hitting 
the max (pool_size+max_overflow).  There's a QueuePool.status() method 
which returns something like "Pool size: 5  Connections in pool: 1 Current 
Overflow: -4 Current Checked out connections: 0", but from a quick scan of 
the source it looks like nothing keeps track of the highest value that the 
queue's "overflow" ever reaches?  I don't think the sessions in my requests 
would use more than one connection, and the pools should be configured to 
allow more connections than the WSGI process has handler threads, but if 
I'm somehow wrong it could be that a session is hitting the pool's "wait" 
period in trying to acquire a connection.
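Since nothing in the pool records the highest value that overflow reaches, a `checkout` event listener can maintain a high-water figure externally. A sketch (SQLite with an explicit QueuePool standing in for the MySQL engine):

```python
from sqlalchemy import create_engine, event
from sqlalchemy.pool import QueuePool

# SQLite stands in for MySQL here; the listener itself is engine-agnostic
engine = create_engine("sqlite://", poolclass=QueuePool,
                       pool_size=5, max_overflow=10)

high_water = {"checked_out": 0}

@event.listens_for(engine, "checkout")
def track_high_water(dbapi_conn, connection_record, connection_proxy):
    # checkedout() already counts the connection being handed out now
    in_use = engine.pool.checkedout()
    if in_use > high_water["checked_out"]:
        high_water["checked_out"] = in_use

c1 = engine.connect()
c2 = engine.connect()   # two connections out at once
c1.close()
c2.close()
c3 = engine.connect()   # only one out: the high-water mark stays at 2
c3.close()
```

If the recorded high-water mark ever approaches pool_size + max_overflow, the session-per-request assumption is being violated somewhere.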



Re: [sqlalchemy] several questions about connection pooling behaviour

2017-09-01 Thread Mike Bayer
On Fri, Sep 1, 2017 at 12:35 PM, Антонио Антуан  wrote:
> Yes, I tried it and it is worked fine.
> Another thing that I tried to understand: "Are connections, which used by
> `session`, `session.bind`, compiled_query and engine, the same?".
> `pg_stat_activity` shows me, that I am right,
> Can you confirm this statement?

*if* you are using StaticPool and AssertionPool, and you are *not*
using multiple threads against it as in particular AssertionPool is
not really threadsafe (StaticPool is mostly threadsafe though I can
see it might race on the first connect), and also you haven't called
engine.dispose(), then there should be just the one connection hanging
around, yes.  DBAPI connections themselves are not generally
threadsafe so these pools assume single-threaded use.
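Both behaviors can be seen with in-memory SQLite (a sketch, not from the thread): StaticPool hands the same DBAPI connection to every checkout, and AssertionPool refuses a second simultaneous checkout.

```python
from sqlalchemy import create_engine, text
from sqlalchemy.pool import AssertionPool, StaticPool

# StaticPool: every checkout wraps one and the same DBAPI connection, so
# state created on one Connection is visible on another (with in-memory
# SQLite, separate connections would each see their own empty database)
static_engine = create_engine("sqlite://", poolclass=StaticPool)
c1 = static_engine.connect()
c1.execute(text("create table t (x integer)"))
c1.execute(text("insert into t values (42)"))
c2 = static_engine.connect()          # same underlying DBAPI connection
shared_value = c2.execute(text("select x from t")).scalar()
c2.close()
c1.close()

# AssertionPool: allows at most one checked-out connection at a time and
# raises if a second one is requested before the first is returned
assert_engine = create_engine("sqlite://", poolclass=AssertionPool)
held = assert_engine.connect()
try:
    assert_engine.connect()           # second simultaneous checkout fails
    second_checkout_failed = False
except Exception:
    second_checkout_failed = True
held.close()
```

This is also why neither pool is suited to multi-threaded use: all threads would be handed the same single DBAPI connection, or refused one outright.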



> In such a case a warning about deadlocks with
> "reset_on_return=None" is not so horrible with StaticPool or AssertionPool.
> Here is code snippet:
> https://gist.github.com/aCLr/61896fc652a0a61a21dbe3531ed695a8

I'm not sure what you're ultimately trying to accomplish but I would
definitely not recommend patterns like StaticPool,
reset_on_return=None for anything significant.   You will leave the
connection hanging open in a transaction while the application is not
doing anything.   I don't see why any of the patterns you are
illustrating should ever be necessary.


>>
>> And when I specify
>> > that parameter with `None` value, only the first assertion failed ('failed on
>> > compiled query execution') because, as I think, another connection is
>> > used.
>> > My question is the same: isn't it wrong? And if it is, how can I avoid
>> > AssertionError on compiled query execution? Should I use the private
>> > `_connection_for_bind` method of Session class or there are any other
>> > options?



Re: [sqlalchemy] Updating pg8000 dialect following new release of pg8000

2017-09-01 Thread Mike Bayer
On Fri, Sep 1, 2017 at 12:09 PM, Tony Locke  wrote:
> No I don't think it's a bug because pg8000 is designed to always create a
> prepared statement if one doesn't already exist, and so that entails
> keeping some data for each prepared statement, steadily increasing memory
> use. There's a pg8000.max_prepared_statements parameter (set to 1000 by
> default) which, if exceeded, triggers the closing of all prepared
> statements. So memory use never goes off to infinity.

oh.  So if you set that to, like, 100, the test here would pass for
you, right?  The memory use tests try for quite a while to see if
object use tilts down again, since limited-size caches are common.

Is this value available on the connect() function or through an
environment variable?



>
> On 1 Sep 2017 4:54 p.m., "Mike Bayer"  wrote:
>>
>> On Fri, Sep 1, 2017 at 10:57 AM, Tony Locke  wrote:
>> > So, next problem. The test
>> >
>> > test/aaa_profiling/test_memusage.py::MemUsageWBackendTest_postgresql+pg8000_9_5_8::test_many_updates
>> > fails because pg8000 creates a new prepared statement for each unique
>> > sql
>> > statement, which entails the creation of an object within the driver, and so
>> > the
>> > memory usage keeps increasing. I've marked this test with
>> > @testing.fails_on('postgresql+pg8000', 'prepared statements use
>> > memory'),
>> > and now all of the test/aaa_profiling/test_memusage.py tests pass.
>>
>> do you consider that to be a bug in pg8000?
>>
>>
>>
>>
>>
>> >
>> > Now, on with the other tests!
>> >
>> > On Thursday, 31 August 2017 15:15:16 UTC+1, Mike Bayer wrote:
>> >>
>> >> On Thu, Aug 31, 2017 at 7:02 AM, Tony Locke 
>> >> wrote:
>> >> > There's a new release (1.11.0) of the pg8000 driver for PostgreSQL.
>> >> > It's
>> >> > a
>> >> > pure-python driver, and it already has a dialect for SQLAlchemy. This
>> >> > latest
>> >> > release is not backwardly compatible with the previous release, and
>> >> > I'm
>> >> > trying to modify the dialect accordingly. The main change is that
>> >> > connections and cursors are no longer threadsafe. In DB-API terms it
>> >> > has
>> >> > a
>> >> > threadsafety value of 1 (Threads may share the module, but not
>> >> > connections).
>> >> >
>> >> > So the first problem I ran into was in the on_connect() method in the
>> >> > dialiect. It referred to the 'unicode' keyword, which caused a
>> >> > problem
>> >> > under
>> >> > Python 3. So I deleted the following:
>> >> >
>> >> >  def on_connect(conn):
>> >> >  conn.py_types[quoted_name] = conn.py_types[unicode]
>> >> >  fns.append(on_connect)
>> >> >
>> >>
>> >> that was a recent fix for a regression:
>> >>
>> >>
>> >>
>> >> https://github.com/zzzeek/sqlalchemy/commit/03560c4b83308719067ec635662c35f9a437fb7f
>> >>
>> >> it is fixed in
>> >>
>> >>
>> >>
>> >> https://github.com/zzzeek/sqlalchemy/commit/d0470e296ea589620c94d8f2dd37e94b8f03842a
>> >>
>> >>
>> >> > just to get the tests going. The next problem is:
>> >> >
>> >> >
>> >> >
>> >> > test/aaa_profiling/test_memusage.py::MemUsageWBackendTest_postgresql+pg8000_9_5_8::()::test_alias_pathing
>> >> >
>> >> > which I think fails due to a threading problem. It seems that somehow
>> >> > the
>> >> > DB-API connection is shared between threads, which isn't supported
>> >> > any
>> >> > more
>> >> > with pg8000. So my question is, is it the case that:
>> >> >
>> >> > a. DB-API connections must be threadsafe to work with SQLAlchemy.
>> >>
>> >> not at all, SQLAlchemy holds that its own Session and Connection which
>> >> ultimately refer to the DBAPI connection are themselves not
>> >> threadsafe.
>> >>
>> >> > b. There's something wrong with the test.
>> >>
>> >> the memusage tests are often omitted from normal testing as they are
>> >> extremely process/memory intensive, but they don't spawn any threads.
>> >>   In the past few weeks the memusage suite has been altered such that
>> >> each memusage test is run in a separate *process* however, so there is
>> >> some concurrency going on.   When the new process is created, the test
>> >> makes a new connection pool so that it should not refer to any
>> >> database connections that were transferred to the child fork, however
>> >> it also doesn't try to close them or anything else - they should be
>> >> totally ignored.   However if pg8000 is tracking some kind of global
>> >> state, like a collection of prepared statements, this state needs to
>> >> travel across the process boundary as well without impacting the
>> >> parent process even if the child process ends.
>> >>
>> >> The failure can be isolated by doing a pdb like this:
>> >>
>> >> diff --git a/test/aaa_profiling/test_memusage.py
>> >> b/test/aaa_profiling/test_memusage.py
>> >> index 3181cfe61..ff600b85d 100644
>> >> --- a/test/aaa_profiling/test_memusage.py
>> >> +++ b/test/aaa_profiling/test_memusage.py
>> >> @@ -636,6 +636,8 @@ class 

Re: [sqlalchemy] several questions about connection pooling behaviour

2017-09-01 Thread Антонио Антуан


On Friday, September 1, 2017 at 18:08:35 UTC+3, Mike Bayer wrote:
>
> On Fri, Sep 1, 2017 at 8:26 AM, Антонио Антуан  > wrote: 
> > Hi guys. 
> > I have several questions and, possibly, misunderstandings about 
> sqlalchemy 
> > connection pooling behavior 
> > 
> > 1. I try to figure out if this is normal: 
> > https://gist.github.com/aCLr/be78f3495892978ee868c9b5adcef0e6 
> > 
> > As you can see, I checked that `bind` is the same on `get_bind()` 
> invokation 
> > and on attribute call. 
> > I see, that each connection into session binds only for session, not for 
> > bind... 
> > Is it correct? I expected that each connection is bound for `bind` 
> > (`engine`). 
>
> that's normal because your exception is when you try to execute 
> directly from the Engine. 
>
> this is equivalent: 
>
> engine = create_engine(conn_string, 
>poolclass=pool.AssertionPool) 
> conn = engine.connect() 
> conn.scalar("select 1")  # ok 
> conn.execute("select 1").scalar()  # ok 
> engine.execute("select 1") # not OK, the engine doesn't know about 
> your "conn", it checks out a new connection 
>
>
> > 
> > 2. Here is the problem, on which I started my research: 
> > https://gist.github.com/aCLr/b642c822b0bd3c9ec1fe775452479acd 
> > I don't use commit here, only flush. But I expected that the results on 
> > each execution would be the same, because I used the same bind on query 
> > compilation and on `Session.query...` execution. 
> > 
> > Yes, I've read about `reset_on_return` parameter of Pool. 
>
> right, there's some odd patterns here, but it is "reset_on_return" 
> that is ultimately hitting the rollback here. 
>
> The query is wrong too, needs a FROM: 
>
> _query = Session.query(func.count(literal('*'))).select_from(Test).\ 
>     limit(1).statement.compile(bind=Session.bind, dialect=Session.bind.dialect) 
>
> and the assertions need to use scalar(): 
>
> assert (_query.scalar() == amount), 'failed on compiled query execution' 
>
>
> reset on return, as well as some nice debugging so you can see things 
> happening: 
>
> engine = create_engine(conn_string, 
>poolclass=pool.StaticPool, echo='debug', 
> echo_pool='debug', 
>pool_reset_on_return=None) 
>
>
Yes, I tried it and it worked fine.
Another thing that I tried to understand: "Are the connections used by 
`session`, `session.bind`, the compiled query, and the engine the same?". 
`pg_stat_activity` shows me that I am right.
Can you confirm this statement? In such a case a warning about deadlocks with 
"reset_on_return=None" is not so horrible with StaticPool or AssertionPool.
Here is code snippet:
https://gist.github.com/aCLr/61896fc652a0a61a21dbe3531ed695a8

> > And when I specify 
> > that parameter with `None` value, only the first assertion failed ('failed on 
> > compiled query execution') because, as I think, another connection is 
> > used. 
> > My question is the same: isn't it wrong? And if it is, how can I avoid 
> > AssertionError on compiled query execution? Should I use the private 
> > `_connection_for_bind` method of Session class or there are any other 
> > options? 
>



Re: [sqlalchemy] Updating pg8000 dialect following new release of pg8000

2017-09-01 Thread Tony Locke
No I don't think it's a bug because pg8000 is designed to always create a
prepared statement if one doesn't already exist, and so that entails
keeping some data for each prepared statement, steadily increasing memory
use. There's a pg8000.max_prepared_statements parameter (set to 1000 by
default) which, if exceeded, triggers the closing of all prepared
statements. So memory use never goes off to infinity.

On 1 Sep 2017 4:54 p.m., "Mike Bayer"  wrote:

> On Fri, Sep 1, 2017 at 10:57 AM, Tony Locke  wrote:
> > So, next problem. The test
> > test/aaa_profiling/test_memusage.py::MemUsageWBackendTest_
> postgresql+pg8000_9_5_8::test_many_updates
> > fails because pg8000 creates a new prepared statement for each unique sql
> > statement, which entails the creation of an object within the driver, and so
> the
> > memory usage keeps increasing. I've marked this test with
> > @testing.fails_on('postgresql+pg8000', 'prepared statements use
> memory'),
> > and now all of the test/aaa_profiling/test_memusage.py tests pass.
>
> do you consider that to be a bug in pg8000?
>
>
>
>
>
> >
> > Now, on with the other tests!
> >
> > On Thursday, 31 August 2017 15:15:16 UTC+1, Mike Bayer wrote:
> >>
> >> On Thu, Aug 31, 2017 at 7:02 AM, Tony Locke 
> wrote:
> >> > There's a new release (1.11.0) of the pg8000 driver for PostgreSQL.
> It's
> >> > a
> >> > pure-python driver, and it already has a dialect for SQLAlchemy. This
> >> > latest
> >> > release is not backwardly compatible with the previous release, and
> I'm
> >> > trying to modify the dialect accordingly. The main change is that
> >> > connections and cursors are no longer threadsafe. In DB-API terms it
> has
> >> > a
> >> > threadsafety value of 1 (Threads may share the module, but not
> >> > connections).
> >> >
> >> > So the first problem I ran into was in the on_connect() method in the
> >> > dialect. It referred to the 'unicode' keyword, which caused a problem
> >> > under
> >> > Python 3. So I deleted the following:
> >> >
> >> >  def on_connect(conn):
> >> >  conn.py_types[quoted_name] = conn.py_types[unicode]
> >> >  fns.append(on_connect)
> >> >
> >>
> >> that was a recent fix for a regression:
> >>
> >>
> >> https://github.com/zzzeek/sqlalchemy/commit/
> 03560c4b83308719067ec635662c35f9a437fb7f
> >>
> >> it is fixed in
> >>
> >>
> >> https://github.com/zzzeek/sqlalchemy/commit/
> d0470e296ea589620c94d8f2dd37e94b8f03842a
> >>
> >>
> >> > just to get the tests going. The next problem is:
> >> >
> >> >
> >> > test/aaa_profiling/test_memusage.py::MemUsageWBackendTest_
> postgresql+pg8000_9_5_8::()::test_alias_pathing
> >> >
> >> > which I think fails due to a threading problem. It seems that somehow
> >> > the
> >> > DB-API connection is shared between threads, which isn't supported any
> >> > more
> >> > with pg8000. So my question is, is it the case that:
> >> >
> >> > a. DB-API connections must be threadsafe to work with SQLAlchemy.
> >>
> >> not at all, SQLAlchemy holds that its own Session and Connection which
> >> ultimately refer to the DBAPI connection are themselves not
> >> threadsafe.
> >>
> >> > b. There's a something wrong with the test.
> >>
> >> the memusage tests are often omitted from normal testing as they are
> >> extremely process/memory intensive, but they don't spawn any threads.
> >>   In the past few weeks the memusage suite has been altered such that
> >> each memusage test is run in a separate *process* however, so there is
> >> some concurrency going on.   When the new process is created, the test
> >> makes a new connection pool so that it should not refer to any
> >> database connections that were transferred to the child fork, however
> >> it also doesn't try to close them or anything else - they should be
> >> totally ignored.   However if pg8000 is tracking some kind of global
> >> state, like a collection of prepared statements, this state needs to
> >> travel across the process boundary as well without impacting the
> >> parent process even if the child process ends.
> >>
> >> The failure can be isolated by doing a pdb like this:
> >>
> >> diff --git a/test/aaa_profiling/test_memusage.py
> >> b/test/aaa_profiling/test_memusage.py
> >> index 3181cfe61..ff600b85d 100644
> >> --- a/test/aaa_profiling/test_memusage.py
> >> +++ b/test/aaa_profiling/test_memusage.py
> >> @@ -636,6 +636,8 @@ class MemUsageWBackendTest(EnsureZeroed):
> >>  try:
> >>  go()
> >>  finally:
> >> +import pdb
> >> +pdb.set_trace()
> >>  metadata.drop_all()
> >>  clear_mappers()
> >>
> >> This brings me right to a clean state where "next" will produce the
> >> error.   Looking at Postgresql processes within the block, there are
> >> no open transactions to the DB.  If you pdb right here, you can poke
> >> around to see what state might be present.
> >>
> >>
> >>
> >>
> >> > c. Something else.

Re: [sqlalchemy] Updating pg8000 dialect following new release of pg8000

2017-09-01 Thread Mike Bayer
On Fri, Sep 1, 2017 at 10:57 AM, Tony Locke  wrote:
> So, next problem. The test
> test/aaa_profiling/test_memusage.py::MemUsageWBackendTest_postgresql+pg8000_9_5_8::test_many_updates
> fails because pg8000 creates a new prepared statement for each unique SQL
> statement, which entails the creation of an object within the driver, and so
> the memory usage keeps increasing. I've marked this test with
> @testing.fails_on('postgresql+pg8000', 'prepared statements use memory'),
> and now all the test/aaa_profiling/test_memusage.py tests pass.

do you consider that to be a bug in pg8000?





>
> Now, on with the other tests!
>
> On Thursday, 31 August 2017 15:15:16 UTC+1, Mike Bayer wrote:
>>
>> On Thu, Aug 31, 2017 at 7:02 AM, Tony Locke  wrote:
>> > There's a new release (1.11.0) of the pg8000 driver for PostgreSQL. It's a
>> > pure-python driver, and it already has a dialect for SQLAlchemy. This latest
>> > release is not backwardly compatible with the previous release, and I'm
>> > trying to modify the dialect accordingly. The main change is that
>> > connections and cursors are no longer threadsafe. In DB-API terms it has a
>> > threadsafety value of 1 (threads may share the module, but not connections).
>> >
>> > So the first problem I ran into was in the on_connect() method in the
>> > dialect. It referred to the 'unicode' name, which caused a problem under
>> > Python 3. So I deleted the following:
>> >
>> >     def on_connect(conn):
>> >         conn.py_types[quoted_name] = conn.py_types[unicode]
>> >     fns.append(on_connect)
>> >
>>
>> that was a recent fix for a regression:
>>
>>
>> https://github.com/zzzeek/sqlalchemy/commit/03560c4b83308719067ec635662c35f9a437fb7f
>>
>> it is fixed in
>>
>>
>> https://github.com/zzzeek/sqlalchemy/commit/d0470e296ea589620c94d8f2dd37e94b8f03842a
>>
>>
>> > just to get the tests going. The next problem is:
>> >
>> >
>> > test/aaa_profiling/test_memusage.py::MemUsageWBackendTest_postgresql+pg8000_9_5_8::()::test_alias_pathing
>> >
>> > which I think fails due to a threading problem. It seems that somehow the
>> > DB-API connection is shared between threads, which isn't supported any more
>> > with pg8000. So my question is, is it the case that:
>> >
>> > a. DB-API connections must be threadsafe to work with SQLAlchemy.
>>
>> not at all - SQLAlchemy's position is that its own Session and Connection,
>> which ultimately refer to the DBAPI connection, are themselves not
>> threadsafe.
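In other words, the expected usage is one Session (and hence one DB-API connection) per thread. A stdlib-only sketch of the idea behind SQLAlchemy's scoped_session (illustrative only, not SQLAlchemy's implementation):

```python
import threading

class PerThreadRegistry:
    """Hand each thread its own lazily-created session-like object,
    so no DB-API connection is ever shared across threads.
    (A toy model of what sqlalchemy.orm.scoped_session provides.)"""

    def __init__(self, factory):
        self._factory = factory
        self._local = threading.local()  # one slot per thread

    def __call__(self):
        if not hasattr(self._local, "obj"):
            self._local.obj = self._factory()
        return self._local.obj
```

Repeated calls from the same thread return the same object; a different thread transparently gets its own.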
>>
>> > b. There's something wrong with the test.
>>
>> The memusage tests are often omitted from normal testing as they are
>> extremely process/memory intensive, but they don't spawn any threads.
>> In the past few weeks the memusage suite has been altered so that
>> each memusage test is run in a separate *process*, however, so there is
>> some concurrency going on.  When the new process is created, the test
>> makes a new connection pool, so that it should not refer to any
>> database connections that were transferred to the child fork; however,
>> it also doesn't try to close them or anything else - they should be
>> totally ignored.  But if pg8000 is tracking some kind of global
>> state, like a collection of prepared statements, that state needs to
>> travel across the process boundary as well without impacting the
>> parent process even if the child process ends.
>>
>> The failure can be isolated by doing a pdb like this:
>>
>> diff --git a/test/aaa_profiling/test_memusage.py
>> b/test/aaa_profiling/test_memusage.py
>> index 3181cfe61..ff600b85d 100644
>> --- a/test/aaa_profiling/test_memusage.py
>> +++ b/test/aaa_profiling/test_memusage.py
>> @@ -636,6 +636,8 @@ class MemUsageWBackendTest(EnsureZeroed):
>>          try:
>>              go()
>>          finally:
>> +            import pdb
>> +            pdb.set_trace()
>>              metadata.drop_all()
>>              clear_mappers()
>>
>> This brings me right to a clean state where "next" will produce the
>> error.   Looking at Postgresql processes within the block, there are
>> no open transactions to the DB.  If you pdb right here, you can poke
>> around to see what state might be present.
>>
>>
>>
>>
>> > c. Something else.
>> >
>> > Thanks for your help!
>> >

Re: [sqlalchemy] replicating/triggering a session connection issue for testing?

2017-09-01 Thread Jonathan Vanasco
thank you kindly, mike!

On Friday, September 1, 2017 at 10:51:16 AM UTC-4, Mike Bayer wrote:
>
> for simulating connection problems I use mock heavily.  If you look
> at test/engine/test_reconnect.py you can see there's a whole MockDBAPI
> that can be instructed to "explode" in a specific way.
>

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] several questions about connection pooling behaviour

2017-09-01 Thread Mike Bayer
On Fri, Sep 1, 2017 at 8:26 AM, Антонио Антуан  wrote:
> Hi guys.
> I have several questions and, possibly, misunderstandings about SQLAlchemy
> connection pooling behavior.
>
> 1. I try to figure out if this is normal:
> https://gist.github.com/aCLr/be78f3495892978ee868c9b5adcef0e6
>
> As you can see, I checked that `bind` is the same on `get_bind()` invocation
> and on attribute access.
> I see that each connection in the session is bound only to the session, not
> to the bind...
> Is that correct? I expected each connection to be bound to the `bind`
> (`engine`).

that's normal, because your exception happens when you try to execute
directly from the Engine.

this is equivalent:

engine = create_engine(conn_string, poolclass=pool.AssertionPool)
conn = engine.connect()
conn.scalar("select 1")  # ok
conn.execute("select 1").scalar()  # ok
engine.execute("select 1")  # not OK - the engine doesn't know about
                            # your "conn", it checks out a new connection


>
> 2. Here is the problem that started my research:
> https://gist.github.com/aCLr/b642c822b0bd3c9ec1fe775452479acd
> I don't use commit here, only flush. But I expected that the results of each
> execution would be the same, because I used the same bind on query
> compilation and on `Session.query...` execution.
>
> Yes, I've read about the `reset_on_return` parameter of Pool.

right, there are some odd patterns here, but it is "reset_on_return"
that is ultimately hitting the rollback here.

The query is wrong too, it needs a FROM:

_query = Session.query(func.count(literal('*'))).select_from(Test).limit(1).statement.compile(
    bind=Session.bind, dialect=Session.bind.dialect)

and the assertions need to use scalar():

assert (_query.scalar() == amount), 'failed on compiled query execution'


To disable reset-on-return, along with some nice debugging so you can see
things happening:

engine = create_engine(conn_string,
                       poolclass=pool.StaticPool,
                       echo='debug',
                       echo_pool='debug',
                       pool_reset_on_return=None)


> And when I specify that parameter with a `None` value, only the first
> assertion failed ('failed on compiled query execution') because, as I
> think, another connection is used.
> My question is the same: isn't it wrong? And if it is, how can I avoid
> an AssertionError on compiled query execution? Should I use the private
> `_connection_for_bind` method of the Session class, or are there other
> options?
>
>
>



Re: [sqlalchemy] Updating pg8000 dialect following new release of pg8000

2017-09-01 Thread Tony Locke
Thanks Mike, fetching the latest commits from SQLAlchemy solved the 
'unicode' problem.

Looking at the multiprocess problem, the pg8000 driver holds global state 
for remembering the prepared statements it's created. This means that it 
fails if multiple processes access it, even if the access is in series and 
not concurrent. I've done a (temporary?) fix by keying the state on the 
current process id. So now
test/aaa_profiling/test_memusage.py::MemUsageWBackendTest_postgresql+pg8000_9_5_8::()::test_alias_pathing
passes. Yay!

So, next problem. The test
test/aaa_profiling/test_memusage.py::MemUsageWBackendTest_postgresql+pg8000_9_5_8::test_many_updates
fails because pg8000 creates a new prepared statement for each unique SQL
statement, which entails the creation of an object within the driver, and so
the memory usage keeps increasing. I've marked this test with
@testing.fails_on('postgresql+pg8000', 'prepared statements use memory'),
and now all the test/aaa_profiling/test_memusage.py tests pass.

Now, on with the other tests!

On Thursday, 31 August 2017 15:15:16 UTC+1, Mike Bayer wrote:
>
> On Thu, Aug 31, 2017 at 7:02 AM, Tony Locke  > wrote:
> > There's a new release (1.11.0) of the pg8000 driver for PostgreSQL. It's a
> > pure-python driver, and it already has a dialect for SQLAlchemy. This latest
> > release is not backwardly compatible with the previous release, and I'm
> > trying to modify the dialect accordingly. The main change is that
> > connections and cursors are no longer threadsafe. In DB-API terms it has a
> > threadsafety value of 1 (threads may share the module, but not connections).
> > 
> > So the first problem I ran into was in the on_connect() method in the
> > dialect. It referred to the 'unicode' name, which caused a problem under
> > Python 3. So I deleted the following:
> > 
> >     def on_connect(conn):
> >         conn.py_types[quoted_name] = conn.py_types[unicode]
> >     fns.append(on_connect)
> > 
>
> that was a recent fix for a regression: 
>
>
> https://github.com/zzzeek/sqlalchemy/commit/03560c4b83308719067ec635662c35f9a437fb7f
>  
>
> it is fixed in 
>
>
> https://github.com/zzzeek/sqlalchemy/commit/d0470e296ea589620c94d8f2dd37e94b8f03842a
>  
>
>
> > just to get the tests going. The next problem is:
> >
> > test/aaa_profiling/test_memusage.py::MemUsageWBackendTest_postgresql+pg8000_9_5_8::()::test_alias_pathing
> >
> > which I think fails due to a threading problem. It seems that somehow the
> > DB-API connection is shared between threads, which isn't supported any more
> > with pg8000. So my question is, is it the case that:
> >
> > a. DB-API connections must be threadsafe to work with SQLAlchemy.
>
> not at all - SQLAlchemy's position is that its own Session and Connection,
> which ultimately refer to the DBAPI connection, are themselves not
> threadsafe.
>
> > b. There's something wrong with the test.
>
> The memusage tests are often omitted from normal testing as they are
> extremely process/memory intensive, but they don't spawn any threads.
> In the past few weeks the memusage suite has been altered so that
> each memusage test is run in a separate *process*, however, so there is
> some concurrency going on.  When the new process is created, the test
> makes a new connection pool, so that it should not refer to any
> database connections that were transferred to the child fork; however,
> it also doesn't try to close them or anything else - they should be
> totally ignored.  But if pg8000 is tracking some kind of global
> state, like a collection of prepared statements, that state needs to
> travel across the process boundary as well without impacting the
> parent process even if the child process ends.
>
> The failure can be isolated by doing a pdb like this: 
>
> diff --git a/test/aaa_profiling/test_memusage.py 
> b/test/aaa_profiling/test_memusage.py 
> index 3181cfe61..ff600b85d 100644 
> --- a/test/aaa_profiling/test_memusage.py 
> +++ b/test/aaa_profiling/test_memusage.py 
> @@ -636,6 +636,8 @@ class MemUsageWBackendTest(EnsureZeroed):
>          try:
>              go()
>          finally:
> +            import pdb
> +            pdb.set_trace()
>              metadata.drop_all()
>              clear_mappers()
>
> This brings me right to a clean state where "next" will produce the 
> error.   Looking at Postgresql processes within the block, there are 
> no open transactions to the DB.  If you pdb right here, you can poke 
> around to see what state might be present. 
>
>
>
>
> > c. Something else. 
> > 
> > Thanks for your help! 
> > 

Re: [sqlalchemy] replicating/triggering a session connection issue for testing?

2017-09-01 Thread Mike Bayer
for simulating connection problems I use mock heavily.  If you look
at test/engine/test_reconnect.py you can see there's a whole MockDBAPI
that can be instructed to "explode" in a specific way.
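A minimal sketch of that approach using unittest.mock (inspired by, but not copied from, the MockDBAPI in test/engine/test_reconnect.py; all names below are illustrative):

```python
from unittest import mock

class SimulatedDisconnect(Exception):
    """Stand-in for a driver-level OperationalError."""

def make_exploding_dbapi():
    # Build a mock DB-API module whose cursors blow up on execute(),
    # so error-handling/reconnect paths can be driven without a real
    # database server.
    dbapi = mock.Mock()
    cursor = mock.Mock()
    cursor.execute.side_effect = SimulatedDisconnect("connection lost")
    connection = mock.Mock()
    connection.cursor.return_value = cursor
    dbapi.connect.return_value = connection
    return dbapi
```

Such a mock module can be handed to create_engine via its module parameter, so the failure fires at a point of your choosing rather than whenever a real server happens to misbehave.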

On Thu, Aug 31, 2017 at 9:30 PM, Jonathan Vanasco  wrote:
> I discovered an edge-case in one of our apps today.  A form got pummeled by
> a botnet which was going through a list of compromised account data for hits
> (e.g. credentials that match haveibeenpwned.com).  That triggered a lot of
> internal metrics logging, which eventually ran a dedicated logging postgresql
> server out of disk space for a bit.
>
> the same logging server also handles our "exception" logging, which is done
> via a separate dedicated session.  something in our stack somehow didn't
> close the connection/reset the session when it should have, and the
> session/connections stayed active.  that led to the connection pool
> constantly checking out bad sessions for several hours.
>
> free space was quickly recovered, and every other SQLAlchemy system in the
> same app was immediately working fine.  the only thing that broke, and
> stayed broken, was our "python exception" logger.
>
> does anyone know if there is a way for me to somehow break/wedge a given
> engine's connection?  I'm trying to recreate this edge-case, and failing
> miserably.
>
>
> the error was `sqlalchemy.exc.InvalidRequestError ` with the message:
>
> InvalidRequestError: This Session's transaction has been rolled back due
> to a previous exception during flush. To begin a new transaction with this
> Session, first issue Session.rollback(). Original exception was:
> (psycopg2.OperationalError) could not extend file "base/16410/23816": No
> space left on device
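The state behind that message can be modeled with a toy sketch (purely illustrative - this is not SQLAlchemy's implementation): once a flush fails, every further operation raises until rollback() begins a new transaction.

```python
class ToySession:
    """Toy model of 'This Session's transaction has been rolled back
    due to a previous exception during flush': after a failed flush,
    operations raise until rollback() is called."""

    def __init__(self):
        self._invalidated = False

    def flush(self, fail=False):
        if self._invalidated:
            raise RuntimeError("rollback() required before continuing")
        if fail:
            self._invalidated = True
            raise OSError("could not extend file: No space left on device")

    def query(self):
        if self._invalidated:
            raise RuntimeError("rollback() required before continuing")
        return []

    def rollback(self):
        # begins a fresh transaction; the session is usable again
        self._invalidated = False
```

This is why a logger whose session never gets a rollback after the disk-full error keeps failing long after free space returns.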
>
>
>
>
> --
> SQLAlchemy -
> The Python SQL Toolkit and Object Relational Mapper
>
> http://www.sqlalchemy.org/
>
> To post example code, please provide an MCVE: Minimal, Complete, and
> Verifiable Example. See http://stackoverflow.com/help/mcve for a full
> description.
> ---
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.



[sqlalchemy] Re: several questions about connection pooling behaviour

2017-09-01 Thread Антонио Антуан


On Friday, 1 September 2017 at 15:26:11 UTC+3, Антонио Антуан wrote:

My fault, the compiled query was wrong.
Fixed it:
https://gist.github.com/aCLr/22d6ed61c1dd49530d30c218a65a2680
Now the second raw SQL query fails without `pool_reset_on_return=None`.


-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] several questions about connection pooling behaviour

2017-09-01 Thread Антонио Антуан
Hi guys. 
I have several questions and, possibly, misunderstandings about SQLAlchemy
connection pooling behavior.

1. I try to figure out if this is normal:
https://gist.github.com/aCLr/be78f3495892978ee868c9b5adcef0e6

As you can see, I checked that `bind` is the same on `get_bind()`
invocation and on attribute access.
I see that each connection in the session is bound only to the session,
not to the bind...
Is that correct? I expected each connection to be bound to the `bind`
(`engine`).




2. Here is the problem that started my research:
https://gist.github.com/aCLr/b642c822b0bd3c9ec1fe775452479acd
I don't use commit here, only flush. But I expected that the results of each
execution would be the same, because I used the same bind on query
compilation and on `Session.query...` execution.

Yes, I've read about the `reset_on_return` parameter of Pool. And when I
specify that parameter with a `None` value, only the first assertion
('failed on compiled query execution') failed because, as I think, another
connection is used.
My question is the same: isn't it wrong? And if it is, how can I avoid an
AssertionError on compiled query execution? Should I use the private
`_connection_for_bind` method of the Session class, or are there other
options?


