On 07/19/2018 11:55 AM, Paul Belanger wrote:
On Thu, Jul 19, 2018 at 05:30:27PM +0200, Cédric Jeanneret wrote:
Hello,

While trying to get a new validation¹ into the undercloud preflight
checks, I hit a (not so) unexpected issue with the CI:
it doesn't provide flavors meeting the minimal requirements, at least
regarding disk space.

A quick-fix is to disable the validations in the CI - Wes has already
pushed a patch for that in the upstream CI:
https://review.openstack.org/#/c/583275/
We can consider this a quick'n'temporary fix².

The issue is in the RDO CI: apparently, it provides instances with
"only" 55G of free space, making the checks fail:
https://logs.rdoproject.org/17/582917/3/openstack-check/legacy-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master/c9cf398/logs/undercloud/home/zuul/undercloud_install.log.txt.gz#_2018-07-17_10_23_46

So, the question is: would it be possible to lower the requirement to,
let's say, 50G? Where does that 60G³ come from?

Thanks for your help/feedback.

Cheers,

C.



¹ https://review.openstack.org/#/c/582917/

² as you might know, there's a BP for a unified validation framework,
and it will allow injecting configuration in CI environments in order
to lower the requirements if necessary:
https://blueprints.launchpad.net/tripleo/+spec/validation-framework

³
http://tripleo.org/install/environments/baremetal.html#minimum-system-requirements

Keep in mind, upstream we don't really have control over the partitioning of nodes; in
some cases it is a single partition, in others multiple. I'd suggest looking more at:

   https://docs.openstack.org/infra/manual/testing.html

And this isn't just a testing thing. As I mentioned in the previous thread, real-world users often use separate partitions for some data (logs, for example). Looking at the existing validation[1], I don't know that it would handle multiple partitions well enough to turn it on by default. It only checks /var and /, and I've seen much more complex partition layouts than that.

1: https://github.com/openstack/tripleo-validations/blob/master/validations/tasks/disk_space.yaml
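
For what it's worth, a check that walks every mounted filesystem rather
than hard-coding two paths might look something like the rough Ansible
sketch below. The mount list and the min_free_gb variable are
illustrative assumptions, not what tripleo-validations actually ships:

  # Sketch only: the mount list and min_free_gb default are
  # illustrative, not the real validation's values.
  - name: Gather hardware facts (provides ansible_mounts)
    setup:
      gather_subset: hardware

  - name: Fail when a relevant mount point is low on free space
    assert:
      that: item.size_available > (min_free_gb | default(10)) * 1024 ** 3
      msg: "{{ item.mount }} has less than {{ min_free_gb | default(10) }}G free"
    loop: "{{ ansible_mounts }}"
    when: item.mount in ['/', '/var', '/var/log', '/var/lib']

That still only covers a fixed set of mount points, but it degrades
gracefully: on a single-partition node only / matches, and on a more
complex layout each matching filesystem is checked separately.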


As for downstream RDO, the same is going to apply once we start adding more
cloud providers. I would look at whether you actually need that much space for
deployments, and maybe try to mock the testing of that logic.
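
One way to do that (with a hypothetical minimum_disk_gb variable, not
something the validation currently exposes) would be to keep the
documented 60G as the default but let a CI job inject a lower threshold
via extra vars, e.g. "-e minimum_disk_gb=50", instead of disabling the
check entirely:

  # Sketch: minimum_disk_gb is an assumed, injectable variable;
  # production keeps the documented 60G default.
  - name: Check free space on the root filesystem
    assert:
      that: >-
        (ansible_mounts | selectattr('mount', 'equalto', '/') | first).size_available
        > (minimum_disk_gb | default(60)) * 1024 ** 3
      msg: "/ needs at least {{ minimum_disk_gb | default(60) }}G free"

That keeps the production requirement intact while exercising the same
code path in CI, which is roughly what the validation-framework BP's
injected-configuration idea is after.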

It's also worth noting that what we can get away with in CI is not necessarily appropriate for production. Being able to run a short-lived, single-use deployment in 50 GB doesn't mean that you could realistically run that on a long-lived production cloud. Log and database storage tends to grow over time. There should be a ceiling on how large that all gets if rotation and db cleanup are configured correctly, but that ceiling is much higher than anything CI is ever going to hit.

Anecdotally, I bumped my development flavor disk space to >50 GB because I ran out of space when I built containers locally. I don't know if that's something we expect users to be doing, but it is definitely possible to exhaust 50 GB in a short period of time.
