On Thu, Feb 2, 2017 at 6:40 PM, Sean Dague <[email protected]> wrote:

> On 02/02/2017 11:16 AM, Matthew Treinish wrote:
> <snip>
> > <oops, forgot to finish my thought>
> >
> > We definitely aren't saying running a single worker is how we recommend
> > people run OpenStack by doing this. But it just adds on to the differences
> > between the gate and what we expect things actually look like.
>
> I'm all for actually getting to the bottom of this, but honestly real
> memory profiling is needed here. The growth across projects probably
> means that some common libraries account for part of this. The
> ever-growing requirements list is demonstrative of that. Code reuse is
> good, but if we are importing much of a library to get access to a
> couple of functions, we're going to take a bunch of memory weight on
> that (especially if that library has friendly auto-imports in its
> top-level __init__.py, so we can't pull in only the parts we want).

Sounds like a new version of the "oslo-incubator" idea.
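
FWIW, the import cost itself is fairly easy to measure. Here is a rough
sketch with tracemalloc (stdlib since Python 3.4; IIRC the 2.7 backport
needs a patched interpreter). The oslo_config import below is only an
example target, not a measurement or an accusation:

    # Rough sketch: see how much memory a single import drags in.
    # Note that tracemalloc only sees allocations made through Python's
    # allocator, so memory held by C extensions won't show up here.
    import tracemalloc

    tracemalloc.start()
    before = tracemalloc.take_snapshot()

    import oslo_config.cfg  # example target; substitute the library you suspect

    after = tracemalloc.take_snapshot()
    stats = after.compare_to(before, 'filename')
    print("import cost: %.1f MiB" % (sum(s.size_diff for s in stats) / 1048576.0))
    for stat in stats[:10]:
        print(stat)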
> Changing the worker count is just shuffling around deck chairs.
>
> I'm not familiar enough with memory profiling tools in Python to know
> the right approach we should take there to get this down to the
> individual libraries / objects that are holding all our memory. Anyone
> more skilled here able to help lead the way?
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
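
As for tooling: tracemalloc again can at least group the live allocations
of a running service by file, which gets you to the library level. A
sketch (the SIGUSR2 trigger is just my assumption for how you would poke
a long-running worker, not something our services wire up today):

    # Sketch: dump the top allocation sites, grouped by file, whenever
    # the process receives SIGUSR2. Printing from a signal handler is
    # fine for a one-off hack like this, but not something to ship.
    import signal
    import tracemalloc

    tracemalloc.start(25)  # keep 25 frames per allocation, or set PYTHONTRACEMALLOC=25

    def _dump_top(signum, frame):
        snapshot = tracemalloc.take_snapshot()
        for stat in snapshot.statistics('filename')[:20]:
            print(stat)

    signal.signal(signal.SIGUSR2, _dump_top)

For the "which objects" side, pympler (muppy/summary) or
objgraph.show_most_common_types() give counts by object type, which pairs
well with the per-file view above.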
--
Best regards,
Andrey Kurilin.

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: [email protected]?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
