Hi all,
I'd like to follow up on a few discussions that took place last week in Boston,
specifically in the Compute Instance/Volume Affinity for HPC session
(https://etherpad.openstack.org/p/BOS-forum-compute-instance-volume-affinity-hpc).
In this session, the discussion trended towards adding more complexity to the
Nova UX: adding --near and --distance flags to the nova boot command so the
scheduler can figure out how to place an instance near some other resource,
adding more fields to flavors or flavor extra specs, and so on.
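To make that concrete, the proposals were for something along these lines
(purely hypothetical syntax sketched from the session discussion; neither
flag exists in novaclient today):

    # Hypothetical: ask the scheduler to place the instance "near" an
    # existing volume; a --distance flag was floated in the same spirit.
    nova boot --flavor m1.large --image <image-uuid> \
        --near volume=<volume-uuid> my-instance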
My question is: is it even the right question to ask how we can add more
fine-grained complexity to the OpenStack user experience in order to support
what seemed like a pretty narrow use case?
The only use case I remember hearing was an operator who didn't want a user
to be able to launch an instance in a particular Nova AZ and then find they
can't attach a volume that lives in a different Cinder AZ, or to try to boot
an instance from a volume in the wrong place and get a failure to launch.
That seems okay to me, though - either the user has to rebuild their instance
in the right place, or Nova just returns an error during instance build. Is
it worth adding all sorts of convolutions to Nova to avoid the possibility
that somebody might have to build an instance a second time?
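For anyone who wasn't in the room, the scenario is roughly this (a sketch
with made-up AZ names; whether the attach is rejected up front depends on how
the operator has configured things, e.g. Nova's [cinder]/cross_az_attach
option):

    # Boot an instance in one Nova AZ...
    nova boot --flavor m1.small --image <image-uuid> \
        --availability-zone az-east my-instance

    # ...create a volume in a different Cinder AZ...
    cinder create --availability-zone az-west --name my-volume 10

    # ...and then try to attach it. The user either gets an error here,
    # or (for boot-from-volume) a failed build later on.
    nova volume-attach my-instance <volume-uuid>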
The feedback I get most frequently from my cloud-experienced users is that
they want to know why the OpenStack user experience around storage is so
radically different from AWS, which is what they all have experience with. I
don't really have a great answer for them, except to admit that in our clouds
they just have to know which combination of flavors, Horizon options, or BDM
structure will get them the right tradeoff between storage durability and
speed.
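To illustrate what "knowing the BDM structure" means in practice, getting a
durable Cinder-backed root disk instead of ephemeral storage with the
novaclient block-device syntax looks something like this (values are
illustrative):

    # Boot from a new volume created from an image; the user has to know
    # what each of these keys does to end up with durable root storage.
    nova boot --flavor m1.small \
        --block-device source=image,id=<image-uuid>,dest=volume,size=20,bootindex=0,shutdown=preserve \
        my-instance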
I was pleased with how the session on expanding Cinder's role for Nova
ephemeral storage went, because of the suggestion to reduce Nova
imagebackend's role to just the file driver and have Cinder take over
everything else. That, to me, is the kind of simplification that's a win-win
for both devs and ops: devs get to radically simplify a thorny part of the
Nova codebase, storage driver development only has to happen in Cinder, and
operators get a storage workflow that's easier to explain to users.
Am I off base in not wanting to add more options to nova boot and more logic
to the scheduler? I know the AWS comparison is a little North America-centric
(it came up a few times at the summit that EMEA/APAC operators may have very
different ideas of a normal cloud workflow), but I am striving to give my
users a private cloud that I can define for them in terms of AWS workflows
and vocabulary. AWS by design restricts where your volumes can live (you can
use instance store volumes, whose data is gone once the instance is stopped
or terminated, or you can put EBS volumes in a particular AZ and attach them
to instances in that same AZ), and I don't think that's a bad thing, because
it makes it easy for users to understand the contract they're getting from
the platform when it comes to where their data is stored and which instances
they can attach it to.
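For reference, that contract is baked right into the EBS API: a volume is
created in a specific AZ and can only be attached to an instance in that
same AZ (a sketch with placeholder IDs):

    # EBS volumes are created in one AZ...
    aws ec2 create-volume --availability-zone us-east-1a --size 20

    # ...and can only be attached to an instance running in that AZ.
    aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
        --instance-id i-0123456789abcdef0 --device /dev/sdf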