Hi,
On Thu, Sep 29, 2016 at 11:57 PM, Alex Schultz <aschu...@redhat.com> wrote:
> Hello all,
>
> So for many years we've been using either the service defaults
> (usually a Python-determined processor count) or the $processorcount
> fact from Facter in Puppet for the worker configuration options of the
> OpenStack services. If you are currently using the default values
> provided by the Puppet modules, you will be affected by this upcoming
> change. After much discussion and feedback from deployers, we've
> decided to change this to a default value that has a cap on it. This
> is primarily driven by feedback from deployments on physical hardware,
> where processor counts can be 32, 48 or even 512. Such values can
> lead to excessive memory consumption or to errors due to connection
> limits (MySQL/RabbitMQ). As such, we've come up with a new fact that
> will be used instead of $processorcount.
>
> The new fact is called $os_workers[0]. This fact uses
> $processorcount to weigh in on the number of workers to configure;
> the result will not be less than 2 and is capped at 8. That is, the
> $os_workers fact will use the larger of '2' and '# of processors / 4',
> but will not exceed 8. The primary goal is to improve the user
> experience when people install services using the Puppet modules,
> without their having to tune all of these worker values. We plan on
> implementing this for all modules as part of the Ocata cycle. This
> work can be tracked using the os_workers-fact[1] Gerrit topic.
> It should be noted that we have implemented this fact in such a way
> that operators are free to override it using an external fact to
> provide their own values. If you are currently specifying your own
> values for the worker configurations in your manifests, this change
> will not affect you.
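For reference, the clamping rule quoted above (the larger of 2 and processors / 4, never exceeding 8) can be sketched as a small Ruby function in the style of a custom Facter fact. This is an illustrative sketch of the described logic, not the actual code from the review in [0]:

```ruby
# Sketch of the $os_workers logic described above:
# max(2, processorcount / 4), capped at 8.
def os_workers(processorcount)
  [[processorcount / 4, 2].max, 8].min
end

# A real Facter 3 fact would register this roughly as:
# Facter.add(:os_workers) do
#   setcode { [[Facter.value(:processorcount).to_i / 4, 2].max, 8].min }
# end
```

So a 4-core node gets 2 workers, a 16-core node gets 4, and a 48- or 512-core node is clamped to 8.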
> If you have been relying on the defaults and wish to continue to use
> the $processorcount logic, we would recommend either implementing your
> own external fact[2] for this or updating your manifests to provide
> $::processorcount to the workers configuration.

This doesn't help a lot. I saw a case where 8 neutron-server workers
allocated 6 GB of RAM. From the OOM killer's perspective, the biggest
process was MySQL (or RabbitMQ), since the killer does not sum the
memory used across a group of processes. So instead of killing
neutron-server, it killed MySQL to release some RAM on the node. IMO,
I would focus on cgroup limits for the OpenStack services, as that
would allow the operator to specify an upper limit on CPU and RAM
usage for every service.

> As always, we'd love to hear feedback on this and any other issues
> people might be facing. We're always available in #puppet-openstack on
> freenode or via the mailing lists.
>
> Thanks,
> -Alex
>
> [0] https://review.openstack.org/#/c/375146/
> [1] https://review.openstack.org/#/q/topic:os_workers-fact
> [2] https://docs.puppet.com/facter/3.4/custom_facts.html#external-facts
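To make the cgroup suggestion above concrete: on a systemd-based distro, a drop-in file can bound a service's memory and CPU at the cgroup level, so the OOM killer acts inside that service's cgroup rather than picking off mysqld. The unit name and values below are hypothetical examples, not recommendations:

```ini
# /etc/systemd/system/neutron-server.service.d/limits.conf
# Hypothetical drop-in; unit name and limit values are examples only.
[Service]
# Hard memory ceiling for the whole service cgroup (all workers together).
MemoryLimit=4G
# Cap CPU usage at the equivalent of 2 full cores.
CPUQuota=200%
```

After adding the drop-in, `systemctl daemon-reload` followed by a restart of the service applies the limits.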
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev