Hi,

Unfortunately flavor + aggregate is not enough for our use case, as it is still possible for the tenant to misconfigure a VM.
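For concreteness, the kind of misconfiguration we want to prevent can be reproduced along these lines (a sketch using the nova CLI; the aggregate, flavor, and host names are made up):

```shell
# Operator: aggregate of hosts whose memory should only back VMs with huge pages.
nova aggregate-create hugepage-hosts
nova aggregate-add-host hugepage-hosts compute-1

# Operator: flavors with and without a memory backing requirement.
nova flavor-key m1.large.hp  set hw:mem_page_size=large
nova flavor-key m1.large.any set hw:mem_page_size=any

# Tenant: boots with the "any" flavor; nothing stops the scheduler from
# placing this VM on compute-1 backed by small pages.
nova boot --flavor m1.large.any --image cirros misconfigured-vm
```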
The edge case not covered by flavor + aggregate that we are trying to prevent is as follows. The operator creates an aggregate containing the nodes that require all VMs to use large pages, and creates flavors both with and without memory backing specified. The tenant then either:

- selects the aggregate containing the nodes that only support huge pages together with a flavor that requests small or any, or
- selects a flavor that requests small or any and does not select an aggregate.

In both cases, because the nodes may have non-huge-page memory available, it is possible to schedule a VM that will not use large pages onto a node that requires large pages to be used. If this happens the behaviour is undefined: the VM may boot but have no network connectivity (in the vhost-user case), it may fail to boot, or it may boot in some other error state.

It would be possible, however, to introduce a new filter (AggregateMemoryBackingFilter), which would work as follows:

- The AggregateMemoryBackingFilter compares the extra specifications associated with the instance against the constraints set in the aggregate metadata.
- A new MemoryBacking attribute is added to the aggregate metadata.
- The MemoryBacking attribute can be set to one or more of the following: small, large, 4, 2048, 1048576. The syntax is SizeA,SizeB, e.g. 2048,1048576.
- If small is set, a host will only be passed if the VM requests small or 4k pages.
- If large is set, a host will only be passed if the VM requests 2MB or 1GB pages.
- If the MemoryBacking attribute is not set for an aggregate, the AggregateMemoryBackingFilter passes all hosts.

With this new filter the (flavor or image properties) + aggregate approach would work for all drivers, not just libvirt.

If this alternative is preferred I can resubmit as a new blueprint and mark the old blueprint as superseded.

Regards,
Sean

-----Original Message-----
From: Daniel P. Berrange [mailto:berra...@redhat.com]
Sent: Wednesday, December 3, 2014 10:13 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [libvirt] enabling per node filtering of mempage sizes

On Wed, Dec 03, 2014 at 10:03:06AM +0100, Sahid Orentino Ferdjaoui wrote:
> On Tue, Dec 02, 2014 at 07:44:23PM +0000, Mooney, Sean K wrote:
> > Hi all
> >
> > I have submitted a small blueprint to allow filtering of available
> > memory pages reported by libvirt.
>
> Can you address this with aggregate? this will also avoid to do
> something specific in the driver libvirt. Which will have to be
> extended to other drivers at the end.

Agreed, I think you can address this by setting up host aggregates and
then setting the desired page size on the flavour.

Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|

_______________________________________________
OpenStack-dev mailing list
OpenStackemail@example.com
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
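P.S. the matching rules proposed for the AggregateMemoryBackingFilter could be sketched roughly as below. This is a hypothetical standalone sketch, not Nova code: a real implementation would subclass nova.scheduler.filters.BaseHostFilter, read hw:mem_page_size from the flavor/image properties, and read MemoryBacking from the aggregate metadata; the function name and semantics here are illustrative only.

```python
# Hypothetical sketch of the proposed AggregateMemoryBackingFilter logic.

SMALL = {'small', '4'}                 # 4 KiB pages
LARGE = {'large', '2048', '1048576'}   # 2 MiB and 1 GiB pages


def host_passes(memory_backing, requested):
    """Return True if a host in an aggregate whose MemoryBacking metadata
    is `memory_backing` (e.g. "large" or "2048,1048576"; None if unset)
    may run a VM whose hw:mem_page_size request is `requested`
    (e.g. "small", "large", "any", "2048", or None if unset)."""
    if not memory_backing:
        return True                    # no constraint set: pass all hosts
    allowed = {s.strip() for s in memory_backing.split(',')}
    if requested in (None, 'any', 'small', '4'):
        # An unset/any/small request may end up on 4 KiB pages, so the
        # aggregate must permit small pages (this is the edge case the
        # filter exists to reject on large-pages-only hosts).
        return bool(allowed & SMALL)
    if requested == 'large':
        return bool(allowed & LARGE)
    # Explicit size in KiB, e.g. "2048": allowed directly or via "large".
    return requested in allowed or (requested in LARGE and 'large' in allowed)
```

For example, host_passes('large', 'any') is False, which is exactly the small-or-any-flavor-on-a-huge-page-node case described above, while host_passes('large', '2048') is True.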