Hi Zane, sorry for the delayed response. Comments inline.
On 05/06/2014 09:09 PM, Zane Bitter wrote:
> On 05/05/14 13:40, Solly Ross wrote:
>> One thing that I was discussing with @jaypipes and @dansmith over
>> on IRC was the possibility of breaking flavors down into separate
>> components -- i.e. have a disk flavor, a CPU flavor, and a RAM flavor.
>> This way, you still get the control of the size of your building blocks
>> (e.g. you could restrict RAM to only 2GB, 4GB, or 16GB), but you avoid
>> exponential flavor explosion by separating out the axes.
> Dimitry and I have discussed this on IRC already (no-one changed their
> mind about anything as a result), but I just wanted to note here that I
> think even this idea is crazy.
>
> VMs are not allocated out of a vast global pool of resources. They're
> allocated on actual machines that have physical hardware costing real
> money in fixed ratios.
>
> Here's a (very contrived) example. Say your standard compute node can
> support 16 VCPUs and 64GB of RAM. You can sell a bunch of flavours:
> maybe 1 VCPU + 4GB, 2 VCPU + 8GB, 4 VCPU + 16GB... &c. But if (as an
> extreme example) you sell a server with 1 VCPU and 64GB of RAM you have
> a big problem: 15 VCPUs that nobody has paid for and you can't sell.
> (Disks add a new dimension of wrongness to the problem.)
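(To put rough numbers on the packing arithmetic in that example -- the node size and the 1 VCPU : 4GB sales ratio come from the contrived example above; the helper function is purely illustrative:)

```python
# Node size and the 1 VCPU : 4 GB ratio are from the example above;
# the helper itself is purely illustrative.
NODE_VCPUS = 16
NODE_RAM_GB = 64

def stranded_after(vcpus_sold, ram_gb_sold):
    """Return (vcpus, ram_gb) left on the node that can no longer be
    sold as balanced 1 VCPU + 4 GB units."""
    free_vcpus = NODE_VCPUS - vcpus_sold
    free_ram_gb = NODE_RAM_GB - ram_gb_sold
    sellable_units = min(free_vcpus, free_ram_gb // 4)
    return (free_vcpus - sellable_units,
            free_ram_gb - sellable_units * 4)

# The extreme example: one tenant takes 1 VCPU + 64 GB...
print(stranded_after(1, 64))   # -> (15, 0): 15 VCPUs nobody can buy
# ...versus a balanced 4 VCPU + 16 GB instance:
print(stranded_after(4, 16))   # -> (0, 0): the remainder still sells
```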
You are assuming a public cloud provider use case above. As much as I
tend to focus on the utility cloud model -- where the incentives are
around maximizing the usage of physical hardware by packing as many
paying tenants as possible into a fixed set of resources -- this is only
one domain for OpenStack.

There are, for good or bad, IT shops and telcos that frankly are willing
to dump money into an inordinate amount of hardware -- and see that
hardware be used inefficiently -- in order to appease the demands of
their application customer tenants. The impulse of onboarding teams for
these private cloud systems is to "just say yes", with utter disregard
for the overall cost efficiency of the proposed customer use cases.
If there was a simple switching mechanism that allowed a deployer to
turn on or off this ability to allow tenants to construct specialized
instance type configurations, then who really loses here? Public or
utility cloud providers would simply leave the switch to its default of
"off" and folks who wanted to provide this functionality to their users
could provide it. Of course, there are clear caveats around lack of
portability to other clouds -- but let's face it, cross-cloud
portability has other challenges beyond this particular point ;)
> The insight of flavours, which is fundamental to the whole concept of
> IaaS, is that users must pay the *opportunity cost* of their resource
> usage. If you allow users to opt, at their own convenience, to pay only
> the actual cost of the resources they use regardless of the opportunity
> cost to you, then your incentives are no longer aligned with your
> customers'.
Again, the above assumes a utility cloud model. Sadly, that isn't the
only cloud model.
> You'll initially be very popular with the kind of customers
> who are taking advantage of you, but you'll have to hike prices across
> the board to make up the cost, leading to a sort of dead-sea effect. A
> Gresham's Law of the cloud, if you will, where bad customers drive out
> the good.
>
> Simply put, a cloud allowing users to define their own flavours *loses*
> to one with predefined flavours 10 times out of 10.
>
> In the above example, you just tell the customer: bad luck, you want
> 64GB of RAM, you buy 16 VCPUs whether you want them or not. It can't
> actually hurt to get _more_ than you wanted, even though you'd rather
> not pay for it (provided, of course, that everyone else *is* paying for
> it, and cross-subsidising you... which they won't).
>
> Now, it's not the OpenStack project's job to prevent operators from
> going bankrupt. But I think at the point where we are adding significant
> complexity to the project just to enable people to confirm the
> effectiveness of a very obviously infallible strategy for losing large
> amounts of money, it's time to draw a line.
Actually, we're not proposing something more complex, IMO.

What I've been discussing on IRC and other places is getting rid of the
concept of flavours entirely except in user interfaces, as an easy way
of templatizing the creation of instances. Once an instance is launched,
I've proposed that we no longer store the instance_type_id with the
instance. Right now, we store the memory, CPU, and root disk amounts in
the instances table, so aside from the instance_type extra_specs
information, there is currently no need to keep the concept of an
instance_type around after the instance launch sequence has been
initiated. The instance_type is decomposed into its resource units, and
those resource units are used for scheduling decisions, not the flavour
itself. In this way, an instance_type is nothing more than a UI template
that makes instance creation a bit easier.
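A minimal sketch of the shape I have in mind (all names and structures here are hypothetical illustrations, not actual Nova code):

```python
# Hypothetical sketch, not actual Nova code: a flavour is only a
# creation-time template, and the instance record keeps the resource
# amounts themselves rather than an instance_type_id.
FLAVOURS = {
    # UI-level templates only
    "m1.large": {"vcpus": 4, "memory_mb": 16384, "root_gb": 200},
}

def build_boot_request(flavour_name, **overrides):
    """Decompose a flavour into its resource units at launch time.

    The scheduler -- and the stored instance -- only ever see the
    resulting resource dict; the flavour itself is forgotten."""
    resources = dict(FLAVOURS[flavour_name])
    resources.update(overrides)  # e.g. a deployer-enabled per-axis tweak
    return resources

print(build_boot_request("m1.large"))
# {'vcpus': 4, 'memory_mb': 16384, 'root_gb': 200}
```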
The problem to date is that the introduction of the extra_specs stuff
was done at the flavour level. Improperly, IMO. The reason is that
extra_specs don't represent quantitative resources, as the other
attributes of a flavour do, but qualitative resource descriptions
(i.e. "it's a NUMA system", not "it has 4 CPUs"). This mismatch between
quantitative and qualitative is at the root of the problems we are
currently seeing with a number of proposed blueprints, from the
extensible resource tracker, to the problems inherent in the quota
management system, and all of the PCI-related blueprints.
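To illustrate the mismatch (the trait key is in the style of real extra_specs; the matching logic is a sketch of mine): quantitative resources are consumed and counted against a host's finite capacity, while qualitative descriptions are matched against host traits and never "used up".

```python
# Quantitative: consumed against a host's finite capacity.
resources = {"vcpus": 4, "memory_mb": 16384, "root_gb": 200}
# Qualitative: matched, never consumed (extra_specs-style key).
traits = {"hw:numa_nodes": "2"}

def host_satisfies(host_free, host_traits, resources, traits):
    # Capacity test: every resource amount must fit in what is free.
    fits = all(host_free.get(k, 0) >= v for k, v in resources.items())
    # Trait test: every qualitative key must match exactly.
    matches = all(host_traits.get(k) == v for k, v in traits.items())
    return fits and matches

print(host_satisfies(
    {"vcpus": 8, "memory_mb": 32768, "root_gb": 500},
    {"hw:numa_nodes": "2"},
    resources, traits))   # -> True: capacity fits and the trait matches
```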
> By the way, the whole theory behind this idea seems to be that this:
>
>     nova server-create --cpu-flavor=4 --ram-flavor=16G --disk-flavor=200G
>
> minimises the cognitive load on the user, whereas this:
>
>     nova server-create --flavor=4-64G-200G
>
> will cause the user's brain to explode from its combinatorial
> complexity. I find this theory absurd.
That isn't the theory at all; in fact, nobody has been talking about
cognitive dissonance at the UI level.
> In other words, if you really want to lose some money, it's perfectly
> feasible with the existing flavour implementation. The operator is only
> ever 3 for-loops away from setting up every combination of flavours
> possible from combining the CPU, RAM and disk options, and can even
> apply whatever constraints they desire.
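(Taken literally, those 3 for-loops look something like this -- the command shape follows `nova flavor-create NAME ID RAM DISK VCPUS`; the option lists and the ratio constraint are invented for illustration:)

```python
# Option lists and the 1:4 constraint are invented for illustration;
# argument order follows `nova flavor-create NAME ID RAM DISK VCPUS`.
cpu_options = [1, 2, 4, 8]       # VCPUs
ram_options = [4, 8, 16, 64]     # GB
disk_options = [40, 200]         # GB

commands = []
for cpus in cpu_options:
    for ram_gb in ram_options:
        for disk_gb in disk_options:
            if ram_gb != cpus * 4:   # example operator constraint
                continue
            name = "%d-%dG-%dG" % (cpus, ram_gb, disk_gb)
            commands.append("nova flavor-create %s auto %d %d %d"
                            % (name, ram_gb * 1024, disk_gb, cpus))

print(len(commands))   # one flavour per allowed combination
```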
> All that said, Heat will expose any API that Nova implements. Choose
OpenStack-dev mailing list