On 21/01/2014 12:57, Day, Phil wrote:
So, I actually don't think the two concepts (reservations and
"isolated instances") are competing ideas. Isolated instances are
not actually reserved; they are simply instances whose assignment
to a particular compute node carries the condition that the node
must only be hosting other instances of one or more specified
projects (tenants).
I see your idea. This filter [1] already does most of the work,
although it relies on aggregates and requires admin management. The
main issue with isolated instances is that they require some
capacity planning to make sure you can cope with the load; that's
why we proposed the idea of such a placement scheduler.

[1] https://github.com/openstack/nova/blob/master/nova/scheduler/filters/aggregate_multitenancy_isolation.py
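For reference, the heart of that filter is a single aggregate-metadata
check. A simplified sketch follows (see the linked file for the real
code; it keys off the 'filter_tenant_id' aggregate metadata, and the
exact implementation in the Nova tree differs in detail):

from nova import db
from nova.scheduler import filters


class AggregateMultiTenancyIsolation(filters.BaseHostFilter):
    """Reject hosts whose aggregates restrict them to other tenants."""

    def host_passes(self, host_state, filter_properties):
        spec = filter_properties.get('request_spec', {})
        props = spec.get('instance_properties', {})
        tenant_id = props.get('project_id')

        context = filter_properties['context'].elevated()
        metadata = db.aggregate_metadata_get_by_host(
            context, host_state.host, key="filter_tenant_id")

        if metadata != {}:
            # Host belongs to an isolating aggregate: only the tenants
            # listed in its metadata may land here.
            return tenant_id in metadata["filter_tenant_id"]
        # Hosts outside any isolating aggregate accept everyone.
        return True

Note the admin dependency: someone has to create the aggregate and set
'filter_tenant_id' on it before the filter does anything.
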
Right, the difference between that and my proposed solution is that there 
would be no dependency on any aggregate at all.

I do understand your point about capacity planning in light of such scheduling
functionality, given the higher likelihood that compute nodes would be
unable to service a more general workload from other tenants.

But I believe that the two concerns can be tackled separately.

Exactly - that's why I wanted to start this debate about the way forward for 
the Pcloud Blueprint, which was heading into some kind of middle ground.  As 
per my original post (and it sounds like the three of us are at least aligned), 
I'm proposing to split this into two streams:

i) A new BP that introduces the equivalent of AWS dedicated instances.
        User - only has to specify at boot time that the instance must be 
on a host used exclusively by that tenant.
        Scheduler - either finds a host which matches this constraint or it 
doesn't.   No linkage to aggregates (other than that from other filters), no 
need for an aggregate to have been pre-configured (a sketch of such a filter 
follows this list).
        Compute Manager - has to check the constraint (as with any other 
scheduler limit) and add the info that this is a dedicated instance to 
notification messages.
        Operator - has to manage capacity as they do for any other such 
constraint (it is a significant capacity mgmt issue, but no worse in my mind 
than having flavors that can consume most of a host), and work out how they 
want to charge for such a model (flat rate additional charge for the first such 
instance, charge each time a new host is used, etc.).
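
To make the scheduler side concrete, here is a minimal sketch of what
such a filter could look like. To be clear, this is hypothetical: the
'dedicated' scheduler hint name and the DedicatedInstanceFilter class
are invented for illustration, not existing Nova code.

from nova import db
from nova.scheduler import filters


class DedicatedInstanceFilter(filters.BaseHostFilter):
    """Hypothetical filter: pass a host only if it is empty or every
    instance already on it belongs to the requesting tenant."""

    def host_passes(self, host_state, filter_properties):
        hints = filter_properties.get('scheduler_hints') or {}
        if hints.get('dedicated') != 'true':
            # Not a dedicated-instance request; no extra constraint.
            return True

        spec = filter_properties.get('request_spec', {})
        tenant_id = spec.get('instance_properties', {}).get('project_id')

        context = filter_properties['context'].elevated()
        # All instances currently on this host must belong to the
        # requesting tenant for the host to qualify.
        instances = db.instance_get_all_by_host(context, host_state.host)
        return all(inst['project_id'] == tenant_id
                   for inst in instances)

There is no aggregate lookup at all, which is exactly the difference
from the existing isolation filter. This sketch only covers placing
the dedicated instance itself; keeping other tenants off the host
afterwards would need the compute-manager check mentioned above.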

I think there is clear water between this and the existing aggregate-based 
isolation.  I also think this is a different use case from reservations.  It's 
*mostly* like a new scheduler hint, but because it has billing impacts I think 
it needs to be more than just that - for example, the ability to request a 
dedicated instance is something that should be controlled by a specific role.
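
One hypothetical way to gate that would be a policy rule checked before
honouring the hint; the rule name 'compute:create:dedicated' below is
invented for the example, not an existing Nova policy:

from nova import policy


def check_dedicated_allowed(context):
    # Raises PolicyNotAuthorized unless the caller's roles satisfy the
    # (hypothetical) 'compute:create:dedicated' rule in policy.json.
    target = {'project_id': context.project_id,
              'user_id': context.user_id}
    policy.enforce(context, 'compute:create:dedicated', target)

Operators would then map the rule to whatever role they sell dedicated 
instances under.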


I agree with that: it's another scheduler filter with an extra scheduler hint, plus a notification message on the AMQP queue emitted by a handler. Handling billable items and meter consolidation is not Nova's role but Ceilometer's. That said, AWS dedicated instances are backed by VPC, so they are not quite identical to what you propose here. Your proposal reminds me more of AWS "reserved" instances without a contractual period.
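
To illustrate the notification piece, something along these lines could
put a dedicated-instance flag on the queue for Ceilometer to meter. The
event type and payload keys below are invented for the example, not an
agreed notification format:

from oslo.config import cfg
from oslo import messaging

transport = messaging.get_transport(cfg.CONF)
notifier = messaging.Notifier(transport, publisher_id='compute.host1',
                              driver='messaging', topic='notifications')


def notify_dedicated_instance(context, instance):
    # Hypothetical payload; billing would key off the 'dedicated' flag.
    payload = {
        'instance_id': instance['uuid'],
        'tenant_id': instance['project_id'],
        'dedicated': True,
    }
    notifier.info(context, 'compute.instance.create.end', payload)

Ceilometer could then consolidate those events into a billable meter,
keeping Nova out of the billing business.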

IMHO, this model is interesting but hard for operators to use, because they don't have visibility into the capacity. Anyway, if Nova provided this feature, Climate would be glad to use it (and on a personal note, I would be glad to contribute to it).

ii) Leave the concept of "private clouds within a cloud" to something that can 
be handled at the region level.  I think there are valid use cases here, but it doesn’t 
make sense to try to get this kind of granularity within Nova.

Agreed.

_______________________________________________
OpenStack-dev mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

