On 02/06/2014 02:19 AM, Clint Byrum wrote:
Excerpts from Mike Spreitzer's message of 2014-02-05 22:17:50 -0800:
From: Prasad Vellanki <prasad.vella...@oneconvergence.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>,
Date: 01/21/2014 02:16 AM
Subject: Re: [openstack-dev] [heat] Software Config progress

Steve & Clint

That should work. We will look at implementing a resource that spins
up a short-lived VM for bootstrapping a service VM and informing the
configuration server for further configuration.

thanks
prasadv

On Wed, Jan 15, 2014 at 7:53 PM, Steven Dake <sd...@redhat.com> wrote:
On 01/14/2014 09:27 PM, Clint Byrum wrote:
Excerpts from Prasad Vellanki's message of 2014-01-14 18:41:46 -0800:
Steve

I did not mean to have a custom solution at all. In fact that would be
terrible. I think Heat's model of software config and deployment is
really good. It allows configurators such as Chef, Puppet, Salt, or
Ansible to be plugged into it, and all users need to write are modules
for those.

What I was thinking is whether there is a way to use software
config/deployment to do the initial configuration of the appliance by
using an agentless system such as Ansible or Salt, thus requiring no
cloud-init. I am not sure this will work either, since it might require
SSH keys to be installed for SSH to work without password prompting. But
I do see that Ansible and Salt support a username/password option.
If this would not work, I agree that the best option is to make them
support cloud-init...
Ansible is not agentless. It just makes use of an extremely flexible
agent: sshd. :) AFAIK, Salt does use an agent, though maybe they've
added SSH support.

Anyway, the point is, Heat's engine should not be reaching into your
machines. It talks to APIs, but that is about it.

What you really want is just a VM that spins up and does the work for
you and then goes away once it is done.
Good thinking.  This model might work well without introducing the
"groan, another daemon" problems pointed out elsewhere in this thread
that were snipped.  Then the "modules" could simply be Heat
templates available to the Heat engine to do the custom config setup.

The custom config setup might still be a problem with the original
constraints (not modifying images to inject SSH keys).

That model wfm.

Regards
-steve

(1) What destroys the short-lived VM if the heat-engine crashes between
creating and destroying it?

The heat-engine that takes over the stack. Same as the answer for what
happens when a stack is half-created and heat-engine dies.

(2) What if something goes wrong and the heat-engine never gets the signal
it is waiting for?

Timeouts already cause a failed state or rollback.
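For example, a wait condition sketched like this (using the OS::Heat
wait condition resources; the 600-second value is arbitrary) bounds how
long the engine waits before declaring failure:

  resources:
    wait_handle:
      type: OS::Heat::WaitConditionHandle

    wait_for_bootstrap:
      type: OS::Heat::WaitCondition
      properties:
        handle: { get_resource: wait_handle }
        count: 1
        timeout: 600   # seconds; no signal by then means a failed stack or a rollback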

(3) This still has the problem that something needs to be configured
some(client-ish)where to support the client authorization solution
(usually username/password).

The usual answer is "that's cloud-init's job", but we're discussing
working around not having cloud-init, so I suspect it has to be built
into the image (which, btw, is a really, really bad idea). Another
option is that these weird proprietary systems might reach out to an
auth service, which the short-lived VM would also be able to contact
given appropriate credentials for said auth service fed in via
parameters.
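Sketching that last idea (the parameter names are hypothetical; hidden
parameters are a HOT feature that keeps values out of API responses):

  parameters:
    auth_service_url:
      type: string
      description: endpoint of the external auth service
    auth_username:
      type: string
    auth_password:
      type: string
      hidden: true   # not echoed back when the stack is inspected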

(4) Given that everybody seems sanguine about solving the client
authorization problem, what is wrong with code in the heat engine opening
and using a connection to code in an appliance?  Steve, what do you mean
by "reaching into your machines" that is critically different from calling
their APIs?

We can, and should, poke holes from heat-engine out through a firewall
so it can connect to all of the endpoints. However, if we start letting
it talk to all the managed machines, it becomes a really handy DoS tool
and also spends a ton of time talking to things we have no control over,
thus consuming resources to an unknown degree.

Heat-engine is precious; it has access to a database with a ton of
really sensitive information. It is also expensive when heat-engine dies
(until we can make all tasks distributed), as it may force failure
states. So I think we need to be very careful about what we let it do.

Just to expand on this: modeling scalability (not that we are doing this, but I expect it will happen in the future) is difficult when one heat-engine could be totally bogged down by a bunch of SSH connections while other heat-engines are less busy.

From a security attack-vector standpoint, I really don't think it makes sense to open connections to untrusted virtual machines from a service trusted by the OpenStack RPC infrastructure. I don't know for certain that this model could be attacked, but it does create new attack vectors which could potentially crater an entire operator's environment, and I'd prefer not to play with that fire.

Regards
-steve

(5) Are we really talking about the same kind of software configuration
here?  Many appliances do not let you SSH into a bash shell and do
whatever you want; they provide only their own API or special command
language over a telnet/SSH sort of connection.  Is hot-software-config
intended to cover that?  Is this what the OneConvergence guys are
concerned with?

No. We are suggesting a solution to their unique problem of having to
talk to said API/special command language/telnet/IP-over-avian-carrier.
The short-lived VM can just have a UserData section which does all of
this, really.
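For instance (a sketch only; the appliance's API path, credentials, and
payload are hypothetical, and wait_handle refers to an
OS::Heat::WaitConditionHandle as sketched earlier):

  user_data:
    str_replace:
      template: |
        #!/bin/sh
        # drive the appliance's own API; no agent on the appliance required
        curl -u admin:$pw -X POST -d @/etc/appliance-config.json http://$ip/api/config
        # tell Heat we are done, so this short-lived VM can be deleted
        $signal
      params:
        $ip: { get_param: appliance_ip }
        $pw: { get_param: appliance_password }
        $signal: { get_attr: [wait_handle, curl_cli] }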

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

