Hi Jay,

there are indeed downsides to this setting. The code currently uses
connection pooling in such a way that each subtransaction ends up using a
distinct connection from the pool. As we have nested transactions in
several places in Neutron's code, this can lead to a situation where the
pool is exhausted.
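To make the failure mode concrete, here is a minimal sketch (not Neutron code; the pool size and helper are invented for illustration) of why nesting transactions that each check out their own connection exhausts a small pool once the nesting depth exceeds the pool size:

```python
# Minimal simulation: each nested "transaction" checks out a fresh
# connection instead of reusing its parent's, so nesting deeper than
# the pool size leaves nothing to hand out.
import queue

POOL_SIZE = 2
pool = queue.Queue()
for n in range(POOL_SIZE):
    pool.put("conn-%d" % n)


def nested_transaction(depth):
    """Each nesting level takes its own connection from the pool."""
    try:
        conn = pool.get_nowait()
    except queue.Empty:
        return "pool exhausted"
    try:
        if depth > 0:
            return nested_transaction(depth - 1)
        return "ok"
    finally:
        pool.put(conn)  # connection is returned on the way out


print(nested_transaction(1))  # 2 levels fit in a 2-connection pool: ok
print(nested_transaction(2))  # 3 levels do not: pool exhausted
```

With a real (blocking) pool the inner checkout would wait instead of failing fast, which is how the deadlock Jay saw can arise: every greenthread holds one connection while waiting for another.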
This issue is already addressed by the openstack.common db session
management, which Neutron is moving to as well. The patch [1] is under
review at the moment, and we hope to be able to merge it soon. Another
issue [2] has been reported which leads to connection exhaustion at both
small and large scales, independently of whether db pooling is enabled. We
are re-triaging this issue after being informed that the openstack.common
db support (and the reduction of db accesses from the policy engine) did
not solve it.

Further, some plugins which drive a 3rd-party backend might incur other
issues when db pooling is enabled. As db pooling increases the level of
concurrency, short-lived queries to the backend may be performed while
another long-running query is executing. This is usually not harmful,
except when the short-lived query alters the portion of the backend's
state which the long-running query is retrieving. Such events are usually
observed during the initial synchronization of the DHCP server, and have
been significantly mitigated by recent improvements in that procedure.

Regards,
Salvatore

[1] https://review.openstack.org/#/c/27265/
[2] https://bugs.launchpad.net/tripleo/+bug/1184484

On 21 June 2013 20:44, Jay Buffington <[email protected]> wrote:
> I'm moving a thread we had with some vmware guys to this list to make it
> public.
>
> We had a problem with quantum deadlocking when it got several requests
> in quick succession. Aaron suggested we set sql_dbpool_enable = True.
> We did and it seemed to resolve our issue.
>
> What are the downsides of turning on sql_dbpool_enable? Should it be on
> by default?
>
> Thanks,
> Jay
>
> >> We are currently experiencing the following problem in our
> >> environment: issuing 5 'quantum port-create' commands in parallel
> >> effectively deadlocks quantum:
> >>
> >> $ for n in $(seq 5); do echo 'quantum --insecure port-create
> >> stage-net1'; done | parallel
> >> An unknown exception occurred.
> >> Request Failed: internal server error while processing your request.
> >> An unexpected error occurred in the NVP Plugin: Unable to get logical
> >> switches
>
> On Jun 21, 2013, at 9:36 AM, Aaron Rosen <[email protected]> wrote:
> > We've encountered this issue as well. I'd try enabling:
> >
> > # Enable the use of eventlet's db_pool for MySQL. The flags
> > # sql_min_pool_size, sql_max_pool_size and sql_idle_timeout are
> > # relevant only if this is enabled.
> >
> > sql_dbpool_enable = True
> >
> > in nvp.ini to see if that helps at all. In our internal cloud we
> > removed the creation of the lports in NVP from the transaction.
> > Salvatore is working on an async backend to the plugin that will
> > solve this and improve the plugin performance.
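For readers following along, the four pooling options Aaron mentions would sit together in the plugin configuration roughly as below. The section name and the numeric values are illustrative assumptions on my part, not recommendations from this thread; check the sample config shipped with your release:

```ini
# Illustrative sketch only -- section name and values may differ by release.
[DATABASE]
# Enable eventlet's db_pool for MySQL; the three options below
# only take effect when this is True.
sql_dbpool_enable = True
sql_min_pool_size = 1
sql_max_pool_size = 5
sql_idle_timeout = 3600
```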
_______________________________________________
OpenStack-dev mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
