Re: [openstack-dev] Unified Guest Agent proposal

2013-12-18 Thread Dmitry Mescheryakov
Clint, do you mean
  * use os-collect-config and its HTTP transport as a base for the PoC
or
  * migrate os-collect-config on PoC after it is implemented on
oslo.messaging

I presume the latter, but could you clarify?



2013/12/18 Clint Byrum cl...@fewbar.com

 Excerpts from Dmitry Mescheryakov's message of 2013-12-17 08:01:38 -0800:
  Folks,
 
  The discussion didn't result in a consensus, but it did reveal a great
  number of things to be accounted for. I've tried to summarize top-level
  points in the etherpad [1]. It lists only items everyone (as it seems to
  me) agrees on, or suggested options where there was no consensus. Let me
  know if I misunderstood or missed something. The etherpad does not list
  advantages/disadvantages of options, otherwise it would just be too long.
  Interested people might search the thread for the arguments :-).
 
  I've thought it over and I agree with people saying we need to move
  further. Savanna needs the agent and I am going to write a PoC for it.
  Of course, the PoC will be implemented in a project-independent way. I
  still think that Salt's limitations outweigh its advantages, so the PoC
  will be done on top of oslo.messaging without Salt. At least we'll have
  an example of how it might look.
 
  Most probably I will have more questions in the process; for instance,
  we didn't finish the discussion on enabling networking for the agent
  yet. In that case I will start a new, more specific thread on the list.

 If you're not going to investigate using Salt, can I suggest you base
 your PoC on os-collect-config? It would not take much to add two-way
 communication to it.



Re: [openstack-dev] Unified Guest Agent proposal

2013-12-18 Thread Clint Byrum
Excerpts from Dmitry Mescheryakov's message of 2013-12-18 09:32:30 -0800:
 Clint, do you mean
   * use os-collect-config and its HTTP transport as a base for the PoC
 or
   * migrate os-collect-config on PoC after it is implemented on
 oslo.messaging
 
 I presume the latter, but could you clarify?
 

os-collect-config speaks two HTTP APIs: EC2 metadata and
CloudFormation. I am suggesting that it would be fairly easy to teach
it to also speak oslo.messaging. It currently doesn't have a two-way
communication method, but that is only because we haven't needed one.
It wouldn't be difficult at all to have responders instead of collectors
and send back a response after the command is run.
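
For illustration, here is a minimal sketch of such a responder loop. The
endpoint, header layout and handler are hypothetical -- this is not
os-collect-config's actual interface, just the pull-command/post-result
shape being described:

    import subprocess
    import time

    import requests

    ENDPOINT = 'http://169.254.169.254/unified-agent'  # assumed endpoint

    HANDLERS = {
        # whitelist of narrow, special-purpose actions
        'restart-service': lambda args: subprocess.call(
            ['systemctl', 'restart', args['name']]),
    }

    def run_once():
        # Pull the next command, dispatch it, and post the result back.
        cmd = requests.get(ENDPOINT + '/next-command', timeout=60).json()
        handler = HANDLERS.get(cmd['action'])
        status = handler(cmd.get('args', {})) if handler else 'unsupported'
        requests.post(ENDPOINT + '/response/%s' % cmd['id'],
                      json={'status': status})

    while True:
        try:
            run_once()
        except requests.RequestException:
            time.sleep(5)  # endpoint unreachable; retry later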



Re: [openstack-dev] Unified Guest Agent proposal

2013-12-17 Thread Dmitry Mescheryakov
Folks,

The discussion didn't result in a consensus, but it did reveal a great
number of things to be accounted for. I've tried to summarize top-level
points in the etherpad [1]. It lists only items everyone (as it seems to me)
agrees on, or suggested options where there was no consensus. Let me know
if I misunderstood or missed something. The etherpad does not list
advantages/disadvantages of options, otherwise it would just be too long.
Interested people might search the thread for the arguments :-).

I've thought it over and I agree with people saying we need to move
further. Savanna needs the agent and I am going to write a PoC for it. Of
course, the PoC will be implemented in a project-independent way. I still
think that Salt's limitations outweigh its advantages, so the PoC will be
done on top of oslo.messaging without Salt. At least we'll have an example
of how it might look.

Most probably I will have more questions in the process; for instance, we
didn't finish the discussion on enabling networking for the agent yet. In
that case I will start a new, more specific thread on the list.

Thanks,

Dmitry

[1] https://etherpad.openstack.org/p/UnifiedGuestAgent


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-17 Thread Thomas Herve

 The discussion didn't result in a consensus, but it did reveal a great
 number of things to be accounted for. I've tried to summarize top-level
 points in the etherpad [1]. It lists only items everyone (as it seems to me)
 agrees on, or suggested options where there was no consensus. Let me know if
 I misunderstood or missed something. The etherpad does not list
 advantages/disadvantages of options, otherwise it would just be too long.
 Interested people might search the thread for the arguments :-).
 
 I've thought it over and I agree with people saying we need to move further.
 Savanna needs the agent and I am going to write a PoC for it. Of course, the
 PoC will be implemented in a project-independent way. I still think that
 Salt's limitations outweigh its advantages, so the PoC will be done on top
 of oslo.messaging without Salt. At least we'll have an example of how it
 might look.
 
 Most probably I will have more questions in the process; for instance, we
 didn't finish the discussion on enabling networking for the agent yet. In
 that case I will start a new, more specific thread on the list.

Hi Dmitry,

While I agree that using Salt's transport may be wrong for us, the module
system they have is really interesting, and it already has a pretty big
ecosystem. It has solved things like gathering system-specific information,
and it has a simple internal API for creating modules. Redoing something
OpenStack-specific from scratch sounds like a mistake to me. As Salt seems
to be able to work in a standalone mode, I think it'd be interesting to
investigate that.
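
As a concrete illustration of the module API mentioned here, this is a
minimal sketch of a Salt execution module (the trove_agent name and backup
logic are made up; such a file would normally live in the fileserver's
_modules/ directory):

    import subprocess

    __virtualname__ = 'trove_agent'

    def __virtual__():
        # Only load this module on minions that actually run the database.
        if subprocess.call(['which', 'mysqld']) == 0:
            return __virtualname__
        return False

    def backup(db_id):
        '''Run a narrowly scoped backup, callable as: salt '*' trove_agent.backup 52'''
        # ... invoke the real backup tooling here ...
        return {'db_id': db_id, 'status': 'done'}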

Maybe it's worth separating the discussion between how to deliver messages to 
the servers (oslo.messaging, Marconi, etc), and what to do on the servers 
(where I think Salt is a great contender).

-- 
Thomas



Re: [openstack-dev] Unified Guest Agent proposal

2013-12-17 Thread Dmitry Mescheryakov
Hello Thomas,

I do understand your feelings. The problem is there were already many
points raised both for and against adopting Salt as the agent, and so far
no consensus was reached on the matter. Maybe someone else is willing to
step up and write a PoC for a Salt-based agent? Then we can agree on the
functionality the PoCs should implement and compare the implementations.
The PoCs can also reveal limitations we currently don't see.

Thanks,

Dmitry




2013/12/17 Thomas Herve thomas.he...@enovance.com


  The discussion didn't result in a consensus, but it did reveal a great
  number of things to be accounted for. I've tried to summarize top-level
  points in the etherpad [1]. It lists only items everyone (as it seems to
  me) agrees on, or suggested options where there was no consensus. Let me
  know if I misunderstood or missed something. The etherpad does not list
  advantages/disadvantages of options, otherwise it would just be too long.
  Interested people might search the thread for the arguments :-).
 
  I've thought it over and I agree with people saying we need to move
  further. Savanna needs the agent and I am going to write a PoC for it.
  Of course, the PoC will be implemented in a project-independent way. I
  still think that Salt's limitations outweigh its advantages, so the PoC
  will be done on top of oslo.messaging without Salt. At least we'll have
  an example of how it might look.
 
  Most probably I will have more questions in the process; for instance,
  we didn't finish the discussion on enabling networking for the agent
  yet. In that case I will start a new, more specific thread on the list.

 Hi Dmitry,
 
 While I agree that using Salt's transport may be wrong for us, the module
 system they have is really interesting, and it already has a pretty big
 ecosystem. It has solved things like gathering system-specific information,
 and it has a simple internal API for creating modules. Redoing something
 OpenStack-specific from scratch sounds like a mistake to me. As Salt seems
 to be able to work in a standalone mode, I think it'd be interesting to
 investigate that.

 Maybe it's worth separating the discussion between how to deliver messages
 to the servers (oslo.messaging, Marconi, etc), and what to do on the
 servers (where I think Salt is a great contender).

 --
 Thomas



Re: [openstack-dev] Unified Guest Agent proposal

2013-12-17 Thread Clint Byrum
Excerpts from Dmitry Mescheryakov's message of 2013-12-17 08:01:38 -0800:
 Folks,
 
 The discussion didn't result in a consensus, but it did reveal a great
 number of things to be accounted for. I've tried to summarize top-level
 points in the etherpad [1]. It lists only items everyone (as it seems to me)
 agrees on, or suggested options where there was no consensus. Let me know
 if I misunderstood or missed something. The etherpad does not list
 advantages/disadvantages of options, otherwise it would just be too long.
 Interested people might search the thread for the arguments :-).
 
 I've thought it over and I agree with people saying we need to move
 further. Savanna needs the agent and I am going to write a PoC for it. Of
 course, the PoC will be implemented in a project-independent way. I still
 think that Salt's limitations outweigh its advantages, so the PoC will be
 done on top of oslo.messaging without Salt. At least we'll have an example
 of how it might look.
 
 Most probably I will have more questions in the process; for instance, we
 didn't finish the discussion on enabling networking for the agent yet. In
 that case I will start a new, more specific thread on the list.

If you're not going to investigate using Salt, can I suggest you base
your PoC on os-collect-config? It would not take much to add two-way
communication to it.



Re: [openstack-dev] Unified Guest Agent proposal

2013-12-16 Thread Lars Kellogg-Stedman
On Fri, Dec 13, 2013 at 11:32:01AM -0800, Fox, Kevin M wrote:
 I hadn't thought about that use case, but that does sound like it
 would be a problem.

That, at least, is not much of a problem, because you can block access
to the metadata via a blackhole route or similar after you complete
your initial configuration:

  ip route add blackhole 169.254.169.254 

This prevents access to the metadata unless someone already has root
access on the instance.

-- 
Lars Kellogg-Stedman l...@redhat.com | larsks @ irc
Cloud Engineering / OpenStack  |  @ twitter





Re: [openstack-dev] Unified Guest Agent proposal

2013-12-16 Thread Fox, Kevin M
The idea being discussed is using 169.254.169.254 for long-term messaging
between a VM and some other process, for example Trove <-> Trove VM.

I guess this thread is getting too long. The details are getting lost.

Thanks,
Kevin



From: Lars Kellogg-Stedman [l...@redhat.com]
Sent: Monday, December 16, 2013 8:18 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Unified Guest Agent proposal

On Fri, Dec 13, 2013 at 11:32:01AM -0800, Fox, Kevin M wrote:
 I hadn't thought about that use case, but that does sound like it
 would be a problem.

That, at least, is not much of a problem, because you can block access
to the metadata via a blackhole route or similar after you complete
your initial configuration:

  ip route add blackhole 169.254.169.254

This prevents access to the metadata unless someone already has root
access on the instance.

--
Lars Kellogg-Stedman l...@redhat.com | larsks @ irc
Cloud Engineering / OpenStack  |  @ twitter




Re: [openstack-dev] Unified Guest Agent proposal

2013-12-16 Thread Steven Dake

On 12/16/2013 10:29 AM, Fox, Kevin M wrote:

Yeah, this is similar to what I am proposing. I think we just about have
everything we need already.

The thread started out discussing a slightly different use case than the one
below. That use case is processing events like: the user performs "backup
database B" in the Trove UI, Trove sends event "backup-database" with params
"B" to the VM, the VM responds some time later with "done backup database B",
and the Trove UI updates.

The idea is we need a unified agent to receive the messages, perform the
action and respond back to the event.

The main issues, as I see it, are:
  * The VM might be on a private neutron network only. This is desirable for
increased security.
  * We want the agent to be minimal so as not to have to maintain much in the
VMs. It's hard to keep all those ducks in a row.
  * There is a desire not to have the agent allow arbitrary commands to execute
in the VM, for security reasons.
If security is a concern of the unified agent, the best way to reduce 
the attack surface is to limit the number of interactions the agent can 
actually do.  Special purpose code for each operation could easily be 
implemented.


I know Salt was mentioned as a possible solution to this problem, but it
brings a whole host of new problems to contend with.


Having a unified agent doesn't mean we can't put special-purpose code
for each service (e.g. Trove) and each operation (e.g. backup) in said
unified agent.  We could even do this with cloud-init using the
part-handler logic.


We really need someone from the community to step up and drive this 
effort, as opposed to beating this thread into too much complexity, as 
mentioned previously by Clint.


Regards
-steve


Thanks,
Kevin

From: Robert Collins [robe...@robertcollins.net]
Sent: Sunday, December 15, 2013 6:44 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Unified Guest Agent proposal

On 15 December 2013 21:17, Clint Byrum cl...@fewbar.com wrote:

Excerpts from Steven Dake's message of 2013-12-14 09:00:53 -0800:

On 12/13/2013 01:13 PM, Clint Byrum wrote:

Excerpts from Dmitry Mescheryakov's message of 2013-12-13 12:01:01 -0800:

Still, what about one more server process users will have to run? I see
the unified agent as a library which can be easily adopted by both existing
and new OpenStack projects. The need to configure and maintain a Salt server
process is a big burden for end users. That idea will definitely scare off
adoption of the agent. And at the same time, what are the gains of having
that server process? I don't really see too many of them.


I tend to agree, I don't see a big advantage to using something like
salt, when the current extremely simplistic cfn-init + friends do the job.

What specific problem does salt solve?  I guess I missed that context in
this long thread.


Yes you missed the crux of the thread. There is a need to have agents that
are _not_ general purpose like cfn-init and friends. They specifically
need to be narrow in focus and not give the higher level service operator
backdoor access to everything via SSH-like control.

So, just spitballing, but:

We have a metadata service.

We want low-latency updates there (e.g. occ listening on long-poll).
Ignore implementation for now.

I assert that agent restrictiveness is really up to the agent. For
instance, an agent that accepts one command 'do something' with args
'something' is clearly not restricted.

So - mainly to tease requirements out:

How would salt be different to:

- heat-metadata with push notification of updates
- an ORC script that looks for a list of requests in post-configure.d
and executes them.

trove-agent:
  - 'backup':
   db-id: '52'
  - 'backup':
   db-id: '43'
  - 'create':
   db-id: '93'
   initial-schema: [.]

etc.

?


--
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] Unified Guest Agent proposal

2013-12-15 Thread Clint Byrum
Excerpts from Steven Dake's message of 2013-12-14 09:00:53 -0800:
 On 12/13/2013 01:13 PM, Clint Byrum wrote:
  Excerpts from Dmitry Mescheryakov's message of 2013-12-13 12:01:01 -0800:
  Still, what about one more server process users will have to run? I see
  the unified agent as a library which can be easily adopted by both existing
  and new OpenStack projects. The need to configure and maintain a Salt server
  process is a big burden for end users. That idea will definitely scare off
  adoption of the agent. And at the same time, what are the gains of having
  that server process? I don't really see too many of them.
 
 
 I tend to agree, I don't see a big advantage to using something like 
 salt, when the current extremely simplistic cfn-init + friends do the job.
 
 What specific problem does salt solve?  I guess I missed that context in 
 this long thread.
 

Yes you missed the crux of the thread. There is a need to have agents that
are _not_ general purpose like cfn-init and friends. They specifically
need to be narrow in focus and not give the higher level service operator
backdoor access to everything via SSH-like control.

Salt works with plugins, and thus the general-purpose backdoor features
can be disabled definitively by not having them present, and then
plugins for Trove/Savanna/et al. can be added. Since they are
operator-controlled services, these exotic agent configurations will be
built into operator-controlled images.

For Heat, the advantage is that you get unified transport in/out of
private networks to a general purpose agent which matches the agent for
those higher level services.

  The Salt devs already mentioned that we can more or less just import
  salt's master code and run that inside the existing server processes. So
  Savanna would have a salt master capability, and so would Heat Engine.
 I really don't think we want to introduce a salt executive into the heat 
 engine process address space, even if it is as simple as an import 
 operation.  Sounds like a debugging nightmare!
 

Data is not that hard to collect in this case, so before we call this
complicated or a debugging nightmare, I think a bit of discovery would
go a long way. Also, the engine is not likely to be where this would be;
"existing server processes" also includes heat-api, which would make a
lot more sense in this case.

  If it isn't eventlet friendly we can just fork it off and run it as its
  own child. Still better than inventing our own.
 fork/exec is not the answer to scalability and availability I was
 looking for :)  So, given that we need scale + availability, we are back
 to managing a daemon outside the address space, which essentially
 introduces another daemon to be scaled and ha-ified (and documented,
 etc.; see the long zookeeper thread for my arguments against new server
 processes...).  import is not the answer, or at least it won't be for heat...
 

Forking to run things that don't work well with eventlet does not mean fork
and _exec_. Also, this is not to address scalability or availability. It
is to isolate code that does not behave exactly like the rest of our code.

 Salt just seems like more trouble than it is worth, but I don't totally
 understand the rationale for introducing it as a dependency in this 
 case, and I generally think dependencies are evil :)


I generally think dependencies are efficient ways of consuming existing
code. Should we not use pecan? eventlet?

 What are we inventing our own version of, again?  cfn-init & friends already
 exist, are dead simple, and have no need of anything beyond a metadata
 server.  I would like to see that level of simplicity in any unified agent.
 

Please do read the whole thread, or at least the first message. We would
invent a framework for efficient agent communication and plugin-based
actions. Right now there are several agents and none of them works quite
like the others, but all have the same basic goals. This is only about Heat
because, by adopting the same communication protocol, we gain in-instance
orchestration in private networks.



Re: [openstack-dev] Unified Guest Agent proposal

2013-12-14 Thread Steven Dake

On 12/13/2013 01:13 PM, Clint Byrum wrote:

Excerpts from Dmitry Mescheryakov's message of 2013-12-13 12:01:01 -0800:

Still, what about one more server process users will have to run? I see
the unified agent as a library which can be easily adopted by both existing
and new OpenStack projects. The need to configure and maintain a Salt server
process is a big burden for end users. That idea will definitely scare off
adoption of the agent. And at the same time, what are the gains of having
that server process? I don't really see too many of them.



I tend to agree, I don't see a big advantage to using something like 
salt, when the current extremely simplistic cfn-init + friends do the job.


What specific problem does salt solve?  I guess I missed that context in 
this long thread.



The Salt devs already mentioned that we can more or less just import
salt's master code and run that inside the existing server processes. So
Savanna would have a salt master capability, and so would Heat Engine.
I really don't think we want to introduce a salt executive into the heat 
engine process address space, even if it is as simple as an import 
operation.  Sounds like a debugging nightmare!



If it isn't eventlet friendly we can just fork it off and run it as its
own child. Still better than inventing our own.
fork/exec is not the answer to scalability and availability I was
looking for :)  So, given that we need scale + availability, we are back
to managing a daemon outside the address space, which essentially
introduces another daemon to be scaled and ha-ified (and documented,
etc.; see the long zookeeper thread for my arguments against new server
processes...).  import is not the answer, or at least it won't be for heat...


Salt just seems like more trouble than it is worth, but I don't totally
understand the rationale for introducing it as a dependency in this 
case, and I generally think dependencies are evil :)


What are we inventing our own version of, again?  cfn-init & friends already
exist, are dead simple, and have no need of anything beyond a metadata
server.  I would like to see that level of simplicity in any unified agent.


Regards
-steve




Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Scott Moser
On Tue, 10 Dec 2013, Ian Wells wrote:

 On 10 December 2013 20:55, Clint Byrum cl...@fewbar.com wrote:

  If it is just a network API, it works the same for everybody. This
  makes it simpler, and thus easier to scale out independently of compute
  hosts. It is also something we already support and can very easily expand
  by just adding a tiny bit of functionality to neutron-metadata-agent.
 
  In fact we can even push routes via DHCP to send agent traffic through
  a different neutron-metadata-agent, so I don't see any issue where we
  are piling anything on top of an overstressed single resource. We can
  have neutron route this traffic directly to the Heat API which hosts it,
  and that can be load balanced and etc. etc. What is the exact scenario
  you're trying to avoid?
 

 You may be making even this harder than it needs to be.  You can create
 multiple networks and attach machines to multiple networks.  Every point so
 far has been 'why don't we use <idea> as a backdoor into our VM without
 affecting the VM in any other way' - why can't that just be one more
 network interface set aside for whatever management  instructions are
 appropriate?  And then what needs pushing into Neutron is nothing more
 complex than strong port firewalling to prevent the slaves/minions talking
 to each other.  If you absolutely must make the communication come from a

+1

TCP/IP works *really* well as a communication mechanism.  I'm planning on
using it to send this email.

For controlled guests, simply don't break your networking.  Anything that
could break networking can break /dev/hypervisor-socket also.

Fwiw, we already have an extremely functional agent in just about every
[linux] node in sshd.  It's capable of marshalling just about anything in
and out of the node. (Note, I fully realize there are good reasons for a
more specific agent; lots of them exist.)

I've really never understood the argument that we don't want to rely on
networking as a transport.

 system agent and go to a VM, then that can be done by attaching the system
 agent to the administrative network - from within the system agent, which
 is the thing that needs this, rather than within Neutron, which doesn't
 really care how you use its networks.  I prefer solutions where other tools
 don't have to make you a special case.



Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Clint Byrum
Excerpts from Scott Moser's message of 2013-12-13 06:28:08 -0800:
 On Tue, 10 Dec 2013, Ian Wells wrote:
 
  On 10 December 2013 20:55, Clint Byrum cl...@fewbar.com wrote:
 
   If it is just a network API, it works the same for everybody. This
   makes it simpler, and thus easier to scale out independently of compute
   hosts. It is also something we already support and can very easily expand
   by just adding a tiny bit of functionality to neutron-metadata-agent.
  
   In fact we can even push routes via DHCP to send agent traffic through
   a different neutron-metadata-agent, so I don't see any issue where we
   are piling anything on top of an overstressed single resource. We can
   have neutron route this traffic directly to the Heat API which hosts it,
   and that can be load balanced and etc. etc. What is the exact scenario
   you're trying to avoid?
  
 
  You may be making even this harder than it needs to be.  You can create
  multiple networks and attach machines to multiple networks.  Every point so
  far has been 'why don't we use <idea> as a backdoor into our VM without
  affecting the VM in any other way' - why can't that just be one more
  network interface set aside for whatever management  instructions are
  appropriate?  And then what needs pushing into Neutron is nothing more
  complex than strong port firewalling to prevent the slaves/minions talking
  to each other.  If you absolutely must make the communication come from a
 
 +1
 
 tcp/ip works *really* well as a communication mechanism.  I'm planning on
 using it to send this email.
 
 For controlled guests, simply don't break your networking.  Anything that
 could break networking can break /dev/hypervisor-socket also.
 

Who discussed breaking networking?

 Fwiw, we already have an extremely functional agent in just about every
 [linux] node in sshd.  It's capable of marshalling just about anything in
 and out of the node. (Note, I fully realize there are good reasons for a
 more specific agent; lots of them exist.)
 

This was already covered way back in the thread. sshd is a backdoor
agent, and thus undesirable for this purpose. Locking it down is more
effort than adopting an agent which is meant to be limited to specific
tasks.

Also SSH is a push agent, so Savanna/Heat/Trove would have to find the
VM, and reach into it to do things. A pull agent scales well because you
only have to tell the nodes where to pull things from, and then you can
add more things to pull from behind that endpoint without having to
update the nodes.

 I've really never understood the argument that we don't want to rely on
 networking as a transport.
 

You may have gone to plaid with this one. Not sure what you mean. AFAICT
the direct-to-hypervisor tricks are not exactly popular in this thread.
Were you referring to something else?



Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Sergey Lukjanov
Hi Alessandro,

it's a good idea to set up an IRC meeting for the unified agents. IMO it'll
seriously speed up the discussion. The first one could be used to determine
the correct direction, then we can use them to discuss details and coordinate
efforts, which will be necessary regardless of the approach.

Thanks.


On Fri, Dec 13, 2013 at 7:13 PM, Alessandro Pilotti 
apilo...@cloudbasesolutions.com wrote:

 Hi guys,

 This seems to become a pretty long thread with quite a lot of ideas. What
 do you think about setting up a meeting on IRC to talk about what direction
 to take?
 IMO this has the potential of becoming a completely separated project to
 be hosted on stackforge or similar.

 Generally speaking, we already use Cloudbase-Init, which, besides being the
 de facto standard Windows "Cloud-Init type feature" (Apache 2 licensed),
 has recently been used as a base to provide the same functionality on
 FreeBSD.

 For reference: https://github.com/cloudbase/cloudbase-init and
 http://www.cloudbase.it/cloud-init-for-windows-instances/

 We’re seriously thinking about whether we should transform Cloudbase-Init
 into an agent or keep it in line with the current “init only, let the
 guest do the rest” approach, which fits pretty well with the most common
 deployment approaches (Heat, Puppet / Chef, Salt, etc.). Last time I spoke
 with Scott about this agent stuff for cloud-init, the general intention was
 to keep the init approach as well (please correct me if I missed something
 in the meantime).

 The limitations that we see, independently of which direction and tool
 will be adopted for the agent, are mainly in the metadata services and the
 way OpenStack users employ them to communicate with Nova, Heat and the
 rest of the pack as orchestration requirements grow in complexity:

 1) We need a way to post back small amounts of data (e.g. like we already
 do for the encrypted Windows password) for status updates,
 so that the users know how things are going and can be properly notified
 in case of post-boot errors. This might be irrelevant as long as you just
 create a user and deploy some SSH keys,
 but becomes very important for most orchestration templates.

 2) The HTTP metadata service accessible from the guest with its magic
 number is IMO quite far from an optimal solution. Since every hypervisor
 commonly
 used in OpenStack (e.g. KVM, XenServer, Hyper-V, ESXi) provides guest /
 host communication services, we could define a common abstraction layer
 which will
 include a guest side (to be included in cloud-init, cloudbase-init, etc)
 and a hypervisor side, to be implemented for each hypervisor and included
 in the related Nova drivers.
 This has already been proposed / implemented in various third party
 scenarios, but never under the OpenStack umbrella for multiple hypervisors.

 Metadata info can be at that point retrieved and posted by the Nova driver
 in a secure way and proxied to / from the guest without needing to expose
 the metadata
 service to the guest itself. This would also simplify Neutron, as we could
 get rid of the complexity of the Neutron metadata proxy.



 Alessandro


 On 13 Dec 2013, at 16:28 , Scott Moser smo...@ubuntu.com wrote:

  On Tue, 10 Dec 2013, Ian Wells wrote:
 
  On 10 December 2013 20:55, Clint Byrum cl...@fewbar.com wrote:
 
  If it is just a network API, it works the same for everybody. This
  makes it simpler, and thus easier to scale out independently of compute
  hosts. It is also something we already support and can very easily
 expand
  by just adding a tiny bit of functionality to neutron-metadata-agent.
 
  In fact we can even push routes via DHCP to send agent traffic through
  a different neutron-metadata-agent, so I don't see any issue where we
  are piling anything on top of an overstressed single resource. We can
  have neutron route this traffic directly to the Heat API which hosts
 it,
  and that can be load balanced and etc. etc. What is the exact scenario
  you're trying to avoid?
 
 
  You may be making even this harder than it needs to be.  You can create
  multiple networks and attach machines to multiple networks.  Every
 point so
  far has been 'why don't we use <idea> as a backdoor into our VM without
  affecting the VM in any other way' - why can't that just be one more
  network interface set aside for whatever management  instructions are
  appropriate?  And then what needs pushing into Neutron is nothing more
  complex than strong port firewalling to prevent the slaves/minions
 talking
  to each other.  If you absolutely must make the communication come from
 a
 
  +1
 
  tcp/ip works *really* well as a communication mechanism.  I'm planning on
  using it to send this email.
 
  For controlled guests, simply don't break your networking.  Anything that
  could break networking can break /dev/hypervisor-socket also.
 
  Fwiw, we already have an extremely functional agent in just about every
  [linux] node in sshd.  Its capable of marshalling 

Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Ian Wells
On 13 December 2013 16:13, Alessandro Pilotti 
apilo...@cloudbasesolutions.com wrote:

 2) The HTTP metadata service accessible from the guest with its magic
 number is IMO quite far from an optimal solution. Since every hypervisor
 commonly
 used in OpenStack (e.g. KVM, XenServer, Hyper-V, ESXi) provides guest /
 host communication services, we could define a common abstraction layer
 which will
 include a guest side (to be included in cloud-init, cloudbase-init, etc)
 and a hypervisor side, to be implemented for each hypervisor and included
 in the related Nova drivers.
 This has already been proposed / implemented in various third party
 scenarios, but never under the OpenStack umbrella for multiple hypervisors.


Firstly, what's wrong with the single anycast IP address mechanism that
makes it 'not an optimal solution'?

While I agree we could, theoretically, make KVM, Xen, Docker, Hyper-V,
VMware and so on all implement the same backdoor mechanism - unlikely as
that seems - and then implement a userspace mechanism to match in every
cloud-init service in Windows, Linux, *BSD (and we then have a problem with
niche OSes, too, so this mechanism had better be easy to implement, and
it's likely to involve the kernel), it's hard.  And we still come unstuck
when we get to bare metal, because these interfaces just can't be added
there.
-- 
Ian.


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Clint Byrum
Excerpts from Alessandro Pilotti's message of 2013-12-13 07:13:01 -0800:
 Hi guys,
 
 This seems to become a pretty long thread with quite a lot of ideas. What do 
 you think about setting up a meeting on IRC to talk about what direction to 
 take?
 IMO this has the potential of becoming a completely separated project to be 
 hosted on stackforge or similar.
 
 Generally speaking, we already use Cloudbase-Init, which, besides being the
 de facto standard Windows "Cloud-Init type feature" (Apache 2 licensed),
 has recently been used as a base to provide the same functionality on FreeBSD.
 
 For reference: https://github.com/cloudbase/cloudbase-init and 
 http://www.cloudbase.it/cloud-init-for-windows-instances/
 
 We’re seriously thinking about whether we should transform Cloudbase-Init
 into an agent or keep it in line with the current “init only, let the guest
 do the rest” approach, which fits pretty
 well with the most common deployment approaches (Heat, Puppet / Chef, Salt, 
 etc). Last time I spoke with Scott about this agent stuff for cloud-init, the 
 general intention was
 to keep the init approach as well (please correct me if I missed something in 
 the meantime).
 
 The limitations that we see, independently from which direction and tool will 
 be adopted for the agent, are mainly in the metadata services and the way 
 OpenStack users employ them to 
 communicate with Nova, Heat and the rest of the pack as orchestration 
 requirements complexity increases:
 

Hi, Alessandro. Really interesting thoughts. Most of what you have
described that is not about agent transport is what we discussed
at the Icehouse summit under the topic of the hot-software-config
blueprint. There is definitely a need for better workflow integration
in Heat, and that work is happening now.

 1) We need a way to post back small amounts of data (e.g. like we already do 
 for the encrypted Windows password) for status updates,
 so that the users know how things are going and can be properly notified in 
 case of post-boot errors. This might be irrelevant as long as you just create 
 a user and deploy some SSH keys,
 but becomes very important for most orchestration templates.


Heat already has this via wait conditions. hot-software-config will
improve upon this. I believe once a unified guest agent protocol is
agreed upon we will make Heat use that for wait condition signalling.

 2) The HTTP metadata service accessible from the guest with its magic number 
 is IMO quite far from an optimal solution. Since every hypervisor commonly 
 used in OpenStack (e.g. KVM, XenServer, Hyper-V, ESXi) provides guest / host 
 communication services, we could define a common abstraction layer which will 
 include a guest side (to be included in cloud-init, cloudbase-init, etc) and 
 a hypervisor side, to be implemented for each hypervisor and included in the 
 related Nova drivers.
 This has already been proposed / implemented in various third party 
 scenarios, but never under the OpenStack umbrella for multiple hypervisors.
 
 Metadata info can be at that point retrieved and posted by the Nova driver in 
 a secure way and proxied to / from the guest without needing to expose the
 metadata 
 service to the guest itself. This would also simplify Neutron, as we could 
 get rid of the complexity of the Neutron metadata proxy. 
 

The neutron metadata proxy is actually relatively simple. Have a look at
it. The basic way it works, in pseudocode, is:

port = lookup_requesting_ip_port(remote_ip)
instance_id = lookup_port_instance_id(port)
response = forward_and_sign_request_to_nova(REQUEST, instance_id,
                                            conf.nova_metadata_ip)
return response
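
To make the "sign and forward" step a bit more concrete, here is a rough
sketch of it in Python (illustrative names only; it assumes the shared-secret
HMAC scheme the metadata agent uses so Nova can trust the instance-id header,
and it skips proxying the full original request):

    import hashlib
    import hmac

    import requests

    def forward_and_sign_request_to_nova(path, instance_id, tenant_id,
                                         nova_metadata_ip, shared_secret):
        # Nova trusts the X-Instance-ID header only because the signature
        # proves the request came through the operator-controlled proxy.
        signature = hmac.new(shared_secret.encode('utf-8'),
                             instance_id.encode('utf-8'),
                             hashlib.sha256).hexdigest()
        headers = {
            'X-Instance-ID': instance_id,
            'X-Tenant-ID': tenant_id,
            'X-Instance-ID-Signature': signature,
        }
        return requests.get('http://%s:8775%s' % (nova_metadata_ip, path),
                            headers=headers)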

Furthermore, if we have to embrace some complexity, I would rather do so
inside Neutron than in an agent that users must install and make work
on every guest OS.

The dumber an agent is, the better it will scale and the more resilient it
will be. I would credit this principle with the success of cloud-init
(sorry, you know I love you Scott! ;). What we're talking about now is
having an equally dumb, but differently focused, agent.



Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Clint Byrum
Excerpts from Sergey Lukjanov's message of 2013-12-13 07:46:34 -0800:
 Hi Alessandro,
 
 it's a good idea to set up an IRC meeting for the unified agents. IMO it'll
 seriously speed up the discussion. The first one could be used to determine
 the correct direction, then we can use them to discuss details and coordinate
 efforts, which will be necessary regardless of the approach.
 

I'd like for those who are going to do the actual work to stand up and
be counted before an IRC meeting. This is starting to feel bike-sheddy
and the answer to bike-shedding is not more meetings.

I am keenly interested in this, but have limited cycles to spare for it
at this time. So I do not count myself as one of those people.

I believe that a few individuals who are involved with already working
specialized agents will be doing the work to consolidate them and to fix
the bug that they all share (Heat shares this too) which is that private
networks cannot reach their respective agent endpoints. I think those
individuals should review the original spec given the new information,
revise it, and present it here in a new thread. If there are enough of
them that they feel they should have a meeting, I suggest they organize
one. But I do not think we need more discussion on a broad scale.

Speaking of that, before I run out and report a bug that affects
Savanna, Heat and Trove, is there already a bug titled something like
"Guests cannot reach [Heat/Savanna/Trove] endpoints from inside private
networks"?

(BTW, paint it yellow!)



Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Fox, Kevin M
That's a great idea. How about the proposal below be changed such that the
metadata-proxy forwards the /connect-like calls to Marconi queue A, and the
/response-like calls go to queue B.

The agent wouldn't need to know which queues in Marconi it's talking to then,
and could always talk to it.

Any of the servers (Savanna/Trove) that wanted to control the agents would
then just have to push into Marconi queue A and get responses from queue B.

HTTP is then used all the way through the process, which should make things
easy to implement and scale.

Thanks,
Kevin


From: Sylvain Bauza [sylvain.ba...@gmail.com]
Sent: Thursday, December 12, 2013 11:43 PM
To: OpenStack Development Mailing List, (not for usage questions)
Subject: Re: [openstack-dev] Unified Guest Agent proposal

Why the notifications couldn't be handled by Marconi ?

That would be up to Marconi's team to handle security issues while it is part 
of their mission statement to deliver a messaging service in between VMs.

On 12 Dec 2013 22:09, Fox, Kevin M kevin@pnnl.gov wrote:
Yeah, I think the extra nic is unnecessary too. There already is a working
route to 169.254.169.254, and a metadata proxy server running on it.

So... lets brainstorm for a minute and see if there are enough pieces already 
to do most of the work.

We already have:
  * An HTTP channel out from private VMs, past network namespaces, all the way
to the node running the neutron-metadata-agent.

We need:
  * Some way to send a command, plus arguments to the vm to execute some action 
and get a response back.

OpenStack has focused on REST APIs for most things and I think that is a great
tradition to continue. This allows the custom agent plugins to be written in
any language that can speak HTTP (all of them?) on any platform.

A REST api running in the vm wouldn't be accessible from the outside though on 
a private network.

Random thought: can some glue "unified guest agent" be written to bridge the
gap?

How about something like the following:

The unified guest agent starts up and makes an HTTP request to
169.254.169.254/unified-agent/<cnc_type_from_configfile>/connect
If at any time the connection returns, it will auto reconnect.
It will block as long as possible and the data returned will be an HTTP
request. The request will have a special header with a request id.
The HTTP request will be forwarded to localhost:<port from config file> and
the response will be posted to
169.254.169.254/unified-agent/<cnc_type>/response/<response_id>

The neutron-proxy-server would need to be modified slightly so that, if it sees
a /unified-agent/<cnc_type>/* request, it:
looks in its config file, unified-agent section, and finds the ip/port to
contact for a given <cnc_type>, and forwards the request to that server,
instead of the regular metadata one.

Once this is in place, Savanna or Trove can have their web API registered with
the proxy as the server for the "savanna" or "trove" cnc_type. They will be
contacted by the clients as they come up, and will be able to make web requests
to them and get responses back.
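
To make the flow concrete, here is a quick sketch of the agent side of this
(the URL layout, header name, and JSON envelope are assumptions layered on
top of the proposal above, not an agreed protocol):

    import requests

    BASE = 'http://169.254.169.254/unified-agent/trove'   # cnc_type from config
    LOCAL = 'http://localhost:8778'                        # per-service plugin API

    def serve_forever():
        while True:
            try:
                # Long-poll: blocks until the control service has work for us.
                pending = requests.get(BASE + '/connect', timeout=None)
            except requests.RequestException:
                continue  # connection dropped or timed out; reconnect
            request_id = pending.headers['X-Request-Id']   # assumed header name
            msg = pending.json()                           # assumed JSON envelope
            # Replay the wrapped request against the local REST API ...
            local = requests.post(LOCAL + msg['path'], json=msg.get('body'))
            # ... and post the result back through the metadata proxy.
            requests.post(BASE + '/response/' + request_id,
                          json={'status': local.status_code,
                                'body': local.text})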

What do you think?

Thanks,
Kevin


From: Ian Wells [ijw.ubu...@cack.org.uk]
Sent: Thursday, December 12, 2013 11:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Unified Guest Agent proposal

On 12 December 2013 19:48, Clint Byrum cl...@fewbar.com wrote:
Excerpts from Jay Pipes's message of 2013-12-12 10:15:13 -0800:
 On 12/10/2013 03:49 PM, Ian Wells wrote:
  On 10 December 2013 20:55, Clint Byrum cl...@fewbar.com wrote:
 I've read through this email thread with quite a bit of curiosity, and I
 have to say what Ian says above makes a lot of sense to me. If Neutron
 can handle the creation of a management vNIC that has some associated
 iptables rules governing it that provides a level of security for guest
 <-> host and guest <-> $OpenStackService, then the transport problem
 domain is essentially solved, and Neutron can be happily ignorant (as it
 should be) of any guest agent communication with anything else.


Indeed, I think it could work; however, I think the NIC is unnecessary.

Seems likely even with a second NIC that said address will be something
like 169.254.169.254 (or the ipv6 equivalent?).

There *is* no ipv6 equivalent, which is one standing problem.  Another is that 
(and admittedly you can quibble about this problem's significance) you need a 
router on a network to be able to get to 169.254.169.254 - I raise that because 
the obvious use case for multiple

Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Sylvain Bauza
That's exactly why I proposed Marconi:
 - Notifications ('Marconi') is already an incubated OpenStack program, and
consequently we need to consider any already-existing solutions in the
OpenStack ecosystem before writing a new one (aka silos...)
 - Salt and any other solutions are good but not perfect, as we would
then have only one broker for a solution, with all the disagreements it
could raise (release roll-out and backwards compatibility, vendor lock-in,
light integration with OpenStack...)

Is there an Etherpad for discussing this, btw? Meetings are great but
pretty useless if we need to discuss such design keypoints right there.




2013/12/13 Fox, Kevin M kevin@pnnl.gov

 That's a great idea. How about the proposal below be changed such that the
 metadata-proxy forwards the /connect-like calls to Marconi queue A, and the
 /response-like calls go to queue B.

 The agent wouldn't need to know which queues in Marconi it's talking to
 then, and could always talk to it.

 Any of the servers (Savanna/Trove) that wanted to control the agents would
 then just have to push into Marconi queue A and get responses from queue B.

 HTTP is then used all the way through the process, which should make
 things easy to implement and scale.

 Thanks,
 Kevin

 
 From: Sylvain Bauza [sylvain.ba...@gmail.com]
 Sent: Thursday, December 12, 2013 11:43 PM
 To: OpenStack Development Mailing List, (not for usage questions)
 Subject: Re: [openstack-dev] Unified Guest Agent proposal

 Why the notifications couldn't be handled by Marconi ?

 That would be up to Marconi's team to handle security issues while it is
 part of their mission statement to deliver a messaging service in between
 VMs.

 On 12 Dec 2013 22:09, Fox, Kevin M kevin@pnnl.gov wrote:
 Yeah, I think the extra nic is unnecessary too. There already is a working
 route to 169.254.169.254, and a metadata proxy server running on it.

 So... lets brainstorm for a minute and see if there are enough pieces
 already to do most of the work.

 We already have:
   * An http channel out from private vm's, past network namespaces all the
 way to the node running the neutron-metadata-agent.

 We need:
   * Some way to send a command, plus arguments to the vm to execute some
 action and get a response back.

 OpenStack has focused on REST api's for most things and I think that is a
 great tradition to continue. This allows the custom agent plugins to be
 written in any language that can speak http (All of them?) on any platform.

 A REST api running in the vm wouldn't be accessible from the outside
 though on a private network.

 Random thought, can some glue unified guest agent be written to bridge
 the gap?

 How about something like the following:

 The unified guest agent starts up and makes an HTTP request to
 169.254.169.254/unified-agent/<cnc_type_from_configfile>/connect
 If at any time the connection returns, it will auto reconnect.
 It will block as long as possible and the data returned will be an http
 request. The request will have a special header with a request id.
 The HTTP request will be forwarded to localhost:<port from config file>
 and the response will be posted to
 169.254.169.254/unified-agent/<cnc_type>/response/<response_id>

 The neutron-proxy-server would need to be modified slightly so that, if it
 sees a /unified-agent/<cnc_type>/* request, it:
 looks in its config file, unified-agent section, and finds the ip/port to
 contact for a given <cnc_type>, and forwards the request to that server,
 instead of the regular metadata one.

 Once this is in place, Savanna or Trove can have their web API registered
 with the proxy as the server for the "savanna" or "trove" cnc_type. They
 will be contacted by the clients as they come up, and will be able to make
 web requests to them and get responses back.

 What do you think?

 Thanks,
 Kevin

 
 From: Ian Wells [ijw.ubu...@cack.org.uk]
 Sent: Thursday, December 12, 2013 11:02 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] Unified Guest Agent proposal

 On 12 December 2013 19:48, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from Jay Pipes's message of 2013-12-12 10:15:13 -0800:
  On 12/10/2013 03:49 PM, Ian Wells wrote:
  On 10 December 2013 20:55, Clint Byrum cl...@fewbar.com wrote:
  I've read through this email thread with quite a bit of curiosity, and I
  have to say what Ian says above makes a lot of sense to me. If Neutron
  can handle the creation of a management vNIC that has some

Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Alessandro Pilotti
On 13 Dec 2013, at 18:39, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Alessandro Pilotti's message of 2013-12-13 07:13:01 -0800:
 Hi guys,
 
 This seems to become a pretty long thread with quite a lot of ideas. What do 
 you think about setting up a meeting on IRC to talk about what direction to 
 take?
 IMO this has the potential of becoming a completely separated project to be 
 hosted on stackforge or similar.
 
 Generally speaking, we already use Cloudbase-Init, which, besides being the
 de facto standard Windows "Cloud-Init type feature" (Apache 2 licensed),
 has recently been used as a base to provide the same functionality on
 FreeBSD.
 
 For reference: https://github.com/cloudbase/cloudbase-init and 
 http://www.cloudbase.it/cloud-init-for-windows-instances/
 
 We’re seriously thinking about whether we should transform Cloudbase-Init
 into an agent or keep it in line with the current “init only, let the guest
 do the rest” approach, which fits pretty
 well with the most common deployment approaches (Heat, Puppet / Chef, Salt, 
 etc). Last time I spoke with Scott about this agent stuff for cloud-init, 
 the general intention was
 to keep the init approach as well (please correct me if I missed something 
 in the meantime).
 
 The limitations that we see, independently from which direction and tool 
 will be adopted for the agent, are mainly in the metadata services and the 
 way OpenStack users employ them to 
 communicate with Nova, Heat and the rest of the pack as orchestration 
 requirements complexity increases:
 
 
 Hi, Alessandro. Really interesting thoughts. Most of what you have
 described that is not about agent transport is what we discussed
 at the Icehouse summit under the topic of the hot-software-config
 blueprint. There is definitely a need for better workflow integration
 in Heat, and that work is happening now.
 

This is great news. I was aware of this effort but didn't know that it's
already at such an advanced stage. Looking forward to checking it out in the coming days!

 1) We need a way to post back small amounts of data (e.g. like we already do 
 for the encrypted Windows password) for status updates,
 so that the users know how things are going and can be properly notified in 
 case of post-boot errors. This might be irrelevant as long as you just 
 create a user and deploy some SSH keys,
 but becomes very important for most orchestration templates.
 
 
 Heat already has this via wait conditions. hot-software-config will
 improve upon this. I believe once a unified guest agent protocol is
 agreed upon we will make Heat use that for wait condition signalling.
 
 2) The HTTP metadata service accessible from the guest with its magic number 
 is IMO quite far from an optimal solution. Since every hypervisor commonly 
 used in OpenStack (e.g. KVM, XenServer, Hyper-V, ESXi) provides guest / host 
 communication services, we could define a common abstraction layer which 
 will 
 include a guest side (to be included in cloud-init, cloudbase-init, etc) and 
 a hypervisor side, to be implemented for each hypervisor and included in the 
 related Nova drivers.
 This has already been proposed / implemented in various third party 
 scenarios, but never under the OpenStack umbrella for multiple hypervisors.
 
 Metadata info can be at that point retrieved and posted by the Nova driver 
 in a secure way and proxied to / from the guest without needing to expose
 the metadata 
 service to the guest itself. This would also simplify Neutron, as we could 
 get rid of the complexity of the Neutron metadata proxy. 
 
 
 The neutron metadata proxy is actually relatively simple. Have a look at
 it. The basic way it works in pseudo code is:
 
 port = lookup_requesting_ip_port(remote_ip)
 instance_id = lookup_port_instance_id(port)
 response = forward_and_sign_request_to_nova(REQUEST, instance_id, 
 conf.nova_metadata_ip)
 return response
 

Heh, I’m quite familiar with the Neutron metadata agent, as we had to patch it
to get metadata POST working for the Windows password generation. :-)

IMO, metadata exposed to guests via HTTP suffers from security issues due to
direct exposure to guests (think DoS in the best case) and requires additional
complexity for fault tolerance and high availability, just to name a few
issues. Besides that, folks who embraced ConfigDrive for this or other reasons
are cut off from the metadata POST option, as by definition a CD-ROM drive is
read-only.

I was sure that this was going to be a bit of a hot topic ;). There are IMHO
valid arguments on both sides; I don’t even see it as a mandatory alternative
choice, just one additional option which has been discussed for a while.

The design and implementation IMO would be fairly easy, with the big advantage
that it would remove most of the complexity from the deployers.

 Furthermore, if we have to embrace some complexity, I would rather do so
 inside Neutron than in an agent that users must install and make work
 on every guest 

Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Scott Moser
On Fri, 13 Dec 2013, Fox, Kevin M wrote:

 Hmm.. so if I understand right, the concern you stated is something like:
  * You start up a vm
  * You make it available to your users to ssh into
  * They could grab the machine's metadata

 I hadn't thought about that use case, but that does sound like it would be a 
 problem.

 Ok, so... the problem there is that you need a secret passed to the vm,
 but the network trick isn't secure enough to pass the secret, hence the
 config-drive-like trick, since only root/admin can read the data.

 Now, that does not sound like it excludes the possibility of using the
 metadata server idea in combination with config drive to make things
 secure. You could use config drive to pass a cert, and then have the
 metadata server require that cert in order to ensure only the vm itself
 can pull any additional metadata.

 The unified guest agent could use the same cert/server to establish trust too.

For what it's worth, the same general problem is solved by just putting a
null route to the metadata service. cloud-init has a config option for
doing this.  After route has put such a route in place, you should
effectively be done.

  
http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/doc/examples/cloud-config.txt
  # remove access to the ec2 metadata service early in boot via null route
  #  the null route can be removed (by root) with:
  #route del -host 169.254.169.254 reject
  # default: false (service available)
  disable_ec2_metadata: true

I've also considered before that it might be useful for the instance to
make a request telling the metadata service that it's done and that the data can
now be deleted.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-12 Thread Dmitry Mescheryakov
Clint, Kevin,

Thanks for reassuring me :-) I just wanted to make sure that having direct
access from VMs to a single facility is not a dead end in terms of security
and extensibility. And since it is not, I agree it is much simpler (and
hence better) than hypervisor-dependent design.


Then returning to two major suggestions made:
 * Salt
 * Custom solution specific to our needs

The custom solution could be made on top of oslo.messaging. That gives us
RPC working on different messaging systems. And that is what we really need
- an RPC into the guest supporting various transports. What it lacks at the
moment is security - it has neither authentication nor ACLs.
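
To make that concrete, a bare-bones agent endpoint on top of oslo.messaging 
could look roughly like the sketch below (topic, server name and the action are 
made up; authentication, ACLs and error handling - the missing parts mentioned 
above - are deliberately left out):

    from oslo.config import cfg
    from oslo import messaging

    class AgentEndpoint(object):
        # Only preconfigured actions are exposed as RPC methods.
        def restart_service(self, ctxt, name):
            # ... perform the whitelisted action here ...
            return {'status': 'ok', 'service': name}

    transport = messaging.get_transport(cfg.CONF)
    target = messaging.Target(topic='guest_agent', server='instance-0001')
    server = messaging.get_rpc_server(transport, target, [AgentEndpoint()],
                                      executor='blocking')
    server.start()
    server.wait()

The controller side (Savanna/Trove) would then be a plain RPC client:

    client = messaging.RPCClient(transport,
                                 messaging.Target(topic='guest_agent'))
    client.prepare(server='instance-0001').call({}, 'restart_service',
                                                name='mysql')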

Salt also provides an RPC service, but it has a couple of disadvantages: it is
tightly coupled with ZeroMQ and it needs a server process to run. A single
transport option (ZeroMQ) is a limitation we really want to avoid.
OpenStack could be deployed with various messaging providers, and we can't
limit the choice to a single option in the guest agent. Though it could be
changed in the future, it is an obstacle to consider.

Running yet another server process within OpenStack, as was already
pointed out, is expensive. It means another server to deploy and take care
of, and +1 to overall OpenStack complexity. And it does not look like that
could be fixed any time soon.

For the given reasons I favor an agent based on oslo.messaging.

Thanks,

Dmitry



2013/12/11 Fox, Kevin M kevin@pnnl.gov

 Yeah. Its likely that the metadata server stuff will get more
 scalable/hardened over time. If it isn't enough now, lets fix it rather
 then coming up with a new system to work around it.

 I like the idea of using the network since all the hypervisors have to
 support network drivers already. They also already have to support talking
 to the metadata server. This keeps OpenStack out of the hypervisor driver
 business.

 Kevin

 
 From: Clint Byrum [cl...@fewbar.com]
 Sent: Tuesday, December 10, 2013 1:02 PM
 To: openstack-dev
 Subject: Re: [openstack-dev] Unified Guest Agent proposal

 Excerpts from Dmitry Mescheryakov's message of 2013-12-10 12:37:37 -0800:
   What is the exact scenario you're trying to avoid?
 
  It is DDoS attack on either transport (AMQP / ZeroMQ provider) or server
  (Salt / Our own self-written server). Looking at the design, it doesn't
  look like the attack could be somehow contained within a tenant it is
  coming from.
 

 We can push a tenant-specific route for the metadata server, and a tenant
 specific endpoint for in-agent things. Still simpler than hypervisor-aware
 guests. I haven't seen anybody ask for this yet, though I'm sure if they
 run into these problems it will be the next logical step.

  In the current OpenStack design I see only one similarly vulnerable
  component - metadata server. Keeping that in mind, maybe I just
  overestimate the threat?
 

 Anything you expose to the users is vulnerable. By using the localized
 hypervisor scheme you're now making the compute node itself vulnerable.
 Only now you're asking that an already complicated thing (nova-compute)
 add another job, rate limiting.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-12 Thread Dmitry Mescheryakov
Vladik,

Thanks for the suggestion, but a hypervisor-dependent solution is exactly
what scares people off in the thread :-)

Thanks,

Dmitry



2013/12/11 Vladik Romanovsky vladik.romanov...@enovance.com


 Maybe it will be useful to use Ovirt guest agent as a base.

 http://www.ovirt.org/Guest_Agent
 https://github.com/oVirt/ovirt-guest-agent

 It is already working well on linux and windows and has a lot of
 functionality.
 However, currently it is using virtio-serial for communication, but I
 think it can be extended for other bindings.

 Vladik

 - Original Message -
  From: Clint Byrum cl...@fewbar.com
  To: openstack-dev openstack-dev@lists.openstack.org
  Sent: Tuesday, 10 December, 2013 4:02:41 PM
  Subject: Re: [openstack-dev] Unified Guest Agent proposal
 
  Excerpts from Dmitry Mescheryakov's message of 2013-12-10 12:37:37 -0800:
What is the exact scenario you're trying to avoid?
  
   It is DDoS attack on either transport (AMQP / ZeroMQ provider) or
 server
   (Salt / Our own self-written server). Looking at the design, it doesn't
   look like the attack could be somehow contained within a tenant it is
   coming from.
  
 
  We can push a tenant-specific route for the metadata server, and a tenant
  specific endpoint for in-agent things. Still simpler than
 hypervisor-aware
  guests. I haven't seen anybody ask for this yet, though I'm sure if they
  run into these problems it will be the next logical step.
 
   In the current OpenStack design I see only one similarly vulnerable
   component - metadata server. Keeping that in mind, maybe I just
   overestimate the threat?
  
 
  Anything you expose to the users is vulnerable. By using the localized
  hypervisor scheme you're now making the compute node itself vulnerable.
  Only now you're asking that an already complicated thing (nova-compute)
  add another job, rate limiting.
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-12 Thread Clint Byrum
Excerpts from Dmitry Mescheryakov's message of 2013-12-12 09:24:13 -0800:
 Clint, Kevin,
 
 Thanks for reassuring me :-) I just wanted to make sure that having direct
 access from VMs to a single facility is not a dead end in terms of security
 and extensibility. And since it is not, I agree it is much simpler (and
 hence better) than hypervisor-dependent design.
 
 
 Then returning to two major suggestions made:
  * Salt
  * Custom solution specific to our needs
 
 The custom solution could be made on top of oslo.messaging. That gives us
 RPC working on different messaging systems. And that is what we really need
 - an RPC into guest supporting various transports. What it lacks at the
 moment is security - it has neither authentication nor ACL.
 

I bet salt would be super open to modularizing their RPC. Since
oslo.messaging includes ZeroMQ, and is a library now, I see no reason to
avoid opening that subject with our fine friends in the Salt community.
Perhaps a few of them are even paying attention right here. :)

The benefit there is that we get everything except the plugins we want
to write already done. And we could start now with the ZeroMQ-only
salt agent if we could at least get an agreement on principle that Salt
wouldn't mind using an abstraction layer for RPC.

That does make the 'poke a hole out of private networks' conversation
_slightly_ more complex. It is one thing to just let ZeroMQ out, another
to let all of oslo.messaging's backends out. But I think in general
they'll all share the same thing: you want an address+port to be routed
intelligently out of the private network into something running under
the cloud.

Next steps (all can be done in parallel, as none are interdependent):

* Ask Salt if oslo.messaging is a path they'll walk with us
* Experiment with communicating with salt agents from an existing
  OpenStack service (Savanna, Trove, Heat, etc)
* Deep-dive into Salt to see if it is feasible

As I have no cycles for this, I can't promise to do any, but I will
try to offer assistance if I can.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-12 Thread Jay Pipes

On 12/10/2013 03:49 PM, Ian Wells wrote:

On 10 December 2013 20:55, Clint Byrum cl...@fewbar.com wrote:

If it is just a network API, it works the same for everybody. This
makes it simpler, and thus easier to scale out independently of compute
hosts. It is also something we already support and can very easily
expand
by just adding a tiny bit of functionality to neutron-metadata-agent.

In fact we can even push routes via DHCP to send agent traffic through
a different neutron-metadata-agent, so I don't see any issue where we
are piling anything on top of an overstressed single resource. We can
have neutron route this traffic directly to the Heat API which hosts it,
and that can be load balanced and etc. etc. What is the exact scenario
you're trying to avoid?


You may be making even this harder than it needs to be.  You can create
multiple networks and attach machines to multiple networks.  Every point
so far has been 'why don't we use idea as a backdoor into our VM
without affecting the VM in any other way' - why can't that just be one
more network interface set aside for whatever management  instructions
are appropriate?  And then what needs pushing into Neutron is nothing
more complex than strong port firewalling to prevent the slaves/minions
talking to each other.  If you absolutely must make the communication
come from a system agent and go to a VM, then that can be done by
attaching the system agent to the administrative network - from within
the system agent, which is the thing that needs this, rather than within
Neutron, which doesn't really care how you use its networks.  I prefer
solutions where other tools don't have to make you a special case.


I've read through this email thread with quite a bit of curiosity, and I 
have to say what Ian says above makes a lot of sense to me. If Neutron 
can handle the creation of a management vNIC that has some associated 
iptables rules governing it that provides a level of security for guest 
-> host and guest -> $OpenStackService, then the transport problem 
domain is essentially solved, and Neutron can be happily ignorant (as it 
should be) of any guest agent communication with anything else.
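
(Concretely, the kind of rules meant here might look roughly like the 
following - interface names, addresses and ports are purely illustrative:)

    # guests may reach the agent/metadata endpoint under the cloud ...
    iptables -A FORWARD -i br-mgmt -d 10.254.0.10 -p tcp --dport 5671 -j ACCEPT
    # ... but not each other, and nothing else on the management network
    iptables -A FORWARD -i br-mgmt -o br-mgmt -j DROP
    iptables -A FORWARD -i br-mgmt -j DROP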


Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-12 Thread Clint Byrum
Excerpts from Jay Pipes's message of 2013-12-12 10:15:13 -0800:
 On 12/10/2013 03:49 PM, Ian Wells wrote:
 On 10 December 2013 20:55, Clint Byrum cl...@fewbar.com wrote:
 
  If it is just a network API, it works the same for everybody. This
  makes it simpler, and thus easier to scale out independently of compute
  hosts. It is also something we already support and can very easily
  expand
  by just adding a tiny bit of functionality to neutron-metadata-agent.
 
  In fact we can even push routes via DHCP to send agent traffic through
  a different neutron-metadata-agent, so I don't see any issue where we
  are piling anything on top of an overstressed single resource. We can
  have neutron route this traffic directly to the Heat API which hosts it,
  and that can be load balanced and etc. etc. What is the exact scenario
  you're trying to avoid?
 
 
  You may be making even this harder than it needs to be.  You can create
  multiple networks and attach machines to multiple networks.  Every point
  so far has been 'why don't we use idea as a backdoor into our VM
  without affecting the VM in any other way' - why can't that just be one
  more network interface set aside for whatever management  instructions
  are appropriate?  And then what needs pushing into Neutron is nothing
  more complex than strong port firewalling to prevent the slaves/minions
  talking to each other.  If you absolutely must make the communication
  come from a system agent and go to a VM, then that can be done by
  attaching the system agent to the administrative network - from within
  the system agent, which is the thing that needs this, rather than within
  Neutron, which doesn't really care how you use its networks.  I prefer
  solutions where other tools don't have to make you a special case.
 
 I've read through this email thread with quite a bit of curiosity, and I 
 have to say what Ian says above makes a lot of sense to me. If Neutron 
 can handle the creation of a management vNIC that has some associated 
 iptables rules governing it that provides a level of security for guest 
 - host and guest - $OpenStackService, then the transport problem 
 domain is essentially solved, and Neutron can be happily ignorant (as it 
 should be) of any guest agent communication with anything else.
 

Indeed I think it could work, however I think the NIC is unnecessary.

Seems likely even with a second NIC that said address will be something
like 169.254.169.254 (or the ipv6 equivalent?).

If we want to attach that network as a second NIC instead of pushing a
route to it via DHCP, that is fine. But I don't think it actually gains
much, and the current neutron-metadata-agent already facilitates the
conversation between private guests and 169.254.169.254. We just need to
make sure we can forward more than port 80 through that.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-12 Thread Ian Wells
On 12 December 2013 19:48, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Jay Pipes's message of 2013-12-12 10:15:13 -0800:
  On 12/10/2013 03:49 PM, Ian Wells wrote:
   On 10 December 2013 20:55, Clint Byrum cl...@fewbar.com wrote:
  I've read through this email thread with quite a bit of curiosity, and I
  have to say what Ian says above makes a lot of sense to me. If Neutron
  can handle the creation of a management vNIC that has some associated
  iptables rules governing it that provides a level of security for guest
  - host and guest - $OpenStackService, then the transport problem
  domain is essentially solved, and Neutron can be happily ignorant (as it
  should be) of any guest agent communication with anything else.
 

 Indeed I think it could work, however I think the NIC is unnecessary.

 Seems likely even with a second NIC that said address will be something
 like 169.254.169.254 (or the ipv6 equivalent?).


There *is* no ipv6 equivalent, which is one standing problem.  Another is
that (and admittedly you can quibble about this problem's significance) you
need a router on a network to be able to get to 169.254.169.254 - I raise
that because the obvious use case for multiple networks is to have a net
which is *not* attached to the outside world so that you can layer e.g. a
private DB service behind your app servers.

Neither of these are criticisms of your suggestion as much as they are
standing issues with the current architecture.
-- 
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-12 Thread Vladik Romanovsky
Dmitry,

I understand that :)
The only hypervisor dependency it has is how it communicates with the host, 
and this can be extended and turned into a binding, so people could connect 
to it in multiple ways.

The real value, as I see it, is the set of features this guest agent already 
implements and the fact that this is a mature code base.

Thanks,
Vladik 

- Original Message -
 From: Dmitry Mescheryakov dmescherya...@mirantis.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Thursday, 12 December, 2013 12:27:47 PM
 Subject: Re: [openstack-dev] Unified Guest Agent proposal
 
 Vladik,
 
 Thanks for the suggestion, but hypervisor-dependent solution is exactly what
 scares off people in the thread :-)
 
 Thanks,
 
 Dmitry
 
 
 2013/12/11 Vladik Romanovsky  vladik.romanov...@enovance.com 
 
 
 
 Maybe it will be useful to use Ovirt guest agent as a base.
 
 http://www.ovirt.org/Guest_Agent
 https://github.com/oVirt/ovirt-guest-agent
 
 It is already working well on linux and windows and has a lot of
 functionality.
 However, currently it is using virtio-serial for communication, but I think
 it can be extended for other bindings.
 
 Vladik
 
 - Original Message -
  From: Clint Byrum  cl...@fewbar.com 
  To: openstack-dev  openstack-dev@lists.openstack.org 
  Sent: Tuesday, 10 December, 2013 4:02:41 PM
  Subject: Re: [openstack-dev] Unified Guest Agent proposal
  
  Excerpts from Dmitry Mescheryakov's message of 2013-12-10 12:37:37 -0800:
What is the exact scenario you're trying to avoid?
   
   It is DDoS attack on either transport (AMQP / ZeroMQ provider) or server
   (Salt / Our own self-written server). Looking at the design, it doesn't
   look like the attack could be somehow contained within a tenant it is
   coming from.
   
  
  We can push a tenant-specific route for the metadata server, and a tenant
  specific endpoint for in-agent things. Still simpler than hypervisor-aware
  guests. I haven't seen anybody ask for this yet, though I'm sure if they
  run into these problems it will be the next logical step.
  
   In the current OpenStack design I see only one similarly vulnerable
   component - metadata server. Keeping that in mind, maybe I just
   overestimate the threat?
   
  
  Anything you expose to the users is vulnerable. By using the localized
  hypervisor scheme you're now making the compute node itself vulnerable.
  Only now you're asking that an already complicated thing (nova-compute)
  add another job, rate limiting.
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-12 Thread Steven Dake

On 12/12/2013 10:24 AM, Dmitry Mescheryakov wrote:

Clint, Kevin,

Thanks for reassuring me :-) I just wanted to make sure that having 
direct access from VMs to a single facility is not a dead end in terms 
of security and extensibility. And since it is not, I agree it is much 
simpler (and hence better) than hypervisor-dependent design.



Then returning to two major suggestions made:
 * Salt
 * Custom solution specific to our needs

The custom solution could be made on top of oslo.messaging. That gives 
us RPC working on different messaging systems. And that is what we 
really need - an RPC into guest supporting various transports. What it 
lacks at the moment is security - it has neither authentication nor ACL.


Salt also provides RPC service, but it has a couple of disadvantages: 
it is tightly coupled with ZeroMQ and it needs a server process to 
run. A single transport option (ZeroMQ) is a limitation we really want 
to avoid. OpenStack could be deployed with various messaging 
providers, and we can't limit the choice to a single option in the 
guest agent. Though it could be changed in the future, it is an 
obstacle to consider.


Running yet another server process within OpenStack, as it was already 
pointed out, is expensive. It means another server to deploy and take 
care of, +1 to overall OpenStack complexity. And it does not look it 
could be fixed any time soon.


For given reasons I give favor to an agent based on oslo.messaging.



An agent based on oslo.messaging is a potential security attack vector 
and a possible scalability problem.  We do not want the guest agents 
communicating over the same RPC servers as the rest of OpenStack

Thanks,

Dmitry



2013/12/11 Fox, Kevin M kevin@pnnl.gov

Yeah. Its likely that the metadata server stuff will get more
scalable/hardened over time. If it isn't enough now, lets fix it
rather then coming up with a new system to work around it.

I like the idea of using the network since all the hypervisors
have to support network drivers already. They also already have to
support talking to the metadata server. This keeps OpenStack out
of the hypervisor driver business.

Kevin


From: Clint Byrum [cl...@fewbar.com]
Sent: Tuesday, December 10, 2013 1:02 PM
To: openstack-dev
Subject: Re: [openstack-dev] Unified Guest Agent proposal

Excerpts from Dmitry Mescheryakov's message of 2013-12-10 12:37:37
-0800:
  What is the exact scenario you're trying to avoid?

 It is DDoS attack on either transport (AMQP / ZeroMQ provider) or server
 (Salt / Our own self-written server). Looking at the design, it doesn't
 look like the attack could be somehow contained within a tenant it is
 coming from.


We can push a tenant-specific route for the metadata server, and a tenant
specific endpoint for in-agent things. Still simpler than hypervisor-aware
guests. I haven't seen anybody ask for this yet, though I'm sure if they
run into these problems it will be the next logical step.

 In the current OpenStack design I see only one similarly vulnerable
 component - metadata server. Keeping that in mind, maybe I just
 overestimate the threat?


Anything you expose to the users is vulnerable. By using the localized
hypervisor scheme you're now making the compute node itself vulnerable.
Only now you're asking that an already complicated thing (nova-compute)
add another job, rate limiting.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-12 Thread Clint Byrum
Excerpts from Steven Dake's message of 2013-12-12 12:32:55 -0800:
 On 12/12/2013 10:24 AM, Dmitry Mescheryakov wrote:
  Clint, Kevin,
 
  Thanks for reassuring me :-) I just wanted to make sure that having 
  direct access from VMs to a single facility is not a dead end in terms 
  of security and extensibility. And since it is not, I agree it is much 
  simpler (and hence better) than hypervisor-dependent design.
 
 
  Then returning to two major suggestions made:
   * Salt
   * Custom solution specific to our needs
 
  The custom solution could be made on top of oslo.messaging. That gives 
  us RPC working on different messaging systems. And that is what we 
  really need - an RPC into guest supporting various transports. What it 
  lacks at the moment is security - it has neither authentication nor ACL.
 
  Salt also provides RPC service, but it has a couple of disadvantages: 
  it is tightly coupled with ZeroMQ and it needs a server process to 
  run. A single transport option (ZeroMQ) is a limitation we really want 
  to avoid. OpenStack could be deployed with various messaging 
  providers, and we can't limit the choice to a single option in the 
  guest agent. Though it could be changed in the future, it is an 
  obstacle to consider.
 
  Running yet another server process within OpenStack, as it was already 
  pointed out, is expensive. It means another server to deploy and take 
  care of, +1 to overall OpenStack complexity. And it does not look it 
  could be fixed any time soon.
 
  For given reasons I give favor to an agent based on oslo.messaging.
 
 
 An agent based on oslo.messaging is a potential security attack vector 
 and a possible scalability problem.  We do not want the guest agents 
 communicating over the same RPC servers as the rest of OpenStack

I don't think we're talking about agents talking to the exact same
RabbitMQ/Qpid/etc. bus that things under the cloud are talking to. That
would definitely raise some eyebrows. No doubt it will be in the realm
of possibility if deployers decide to do that, but so is letting your
database server sit on the same flat network as your guests.

I have a hard time seeing how using the same library is a security
risk though.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-12 Thread Steven Dake

On 12/12/2013 02:19 PM, Clint Byrum wrote:

Excerpts from Steven Dake's message of 2013-12-12 12:32:55 -0800:

On 12/12/2013 10:24 AM, Dmitry Mescheryakov wrote:

Clint, Kevin,

Thanks for reassuring me :-) I just wanted to make sure that having
direct access from VMs to a single facility is not a dead end in terms
of security and extensibility. And since it is not, I agree it is much
simpler (and hence better) than hypervisor-dependent design.


Then returning to two major suggestions made:
  * Salt
  * Custom solution specific to our needs

The custom solution could be made on top of oslo.messaging. That gives
us RPC working on different messaging systems. And that is what we
really need - an RPC into guest supporting various transports. What it
lacks at the moment is security - it has neither authentication nor ACL.

Salt also provides RPC service, but it has a couple of disadvantages:
it is tightly coupled with ZeroMQ and it needs a server process to
run. A single transport option (ZeroMQ) is a limitation we really want
to avoid. OpenStack could be deployed with various messaging
providers, and we can't limit the choice to a single option in the
guest agent. Though it could be changed in the future, it is an
obstacle to consider.

Running yet another server process within OpenStack, as it was already
pointed out, is expensive. It means another server to deploy and take
care of, +1 to overall OpenStack complexity. And it does not look it
could be fixed any time soon.

For given reasons I give favor to an agent based on oslo.messaging.


An agent based on oslo.messaging is a potential security attack vector
and a possible scalability problem.  We do not want the guest agents
communicating over the same RPC servers as the rest of OpenStack

I don't think we're talking about agents talking to the exact same
RabbitMQ/Qpid/etc. bus that things under the cloud are talking to. That
would definitely raise some eyebrows. No doubt it will be in the realm
of possibility if deployers decide to do that, but so is letting your
database server sit on the same flat network as your guests.


This is my concern.


I have a hard time seeing how using the same library is a security
risk though.
Yes - unless its use is abused by the deployer, the library is itself 
not a security risk.


Regards
-steve


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-10 Thread Dmitry Mescheryakov
Guys,

I see two major trends in the thread:

 * use Salt
 * write our own solution with architecture similar to Salt or MCollective

There were points raised pro and contra both solutions. But I have a
concern which I believe was not covered yet. Both solutions use either
ZeroMQ or message queues (AMQP/STOMP) as a transport. The thing is there is
going to be a shared facility between all the tenants. And unlike all other
OpenStack services, this facility will be directly accessible from VMs,
which leaves tenants very vulnerable to each other. Harm the facility from
your VM, and the whole Region/Cell/Availability Zone will be left out of
service.

Do you think that is solvable, or maybe I overestimate the threat?

Thanks,

Dmitry




2013/12/9 Dmitry Mescheryakov dmescherya...@mirantis.com




 2013/12/9 Kurt Griffiths kurt.griffi...@rackspace.com

  This list of features makes me *very* nervous from a security
 standpoint. Are we talking about giving an agent an arbitrary shell command
 or file to install, and it goes and does that, or are we simply triggering
 a preconfigured action (at the time the agent itself was installed)?


 I believe the agent must execute only a set of preconfigured actions
 exactly due to security reasons. It should be up to the using project
 (Savanna/Trove) to decide which actions must be exposed by the agent.
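
To be concrete about what I mean by preconfigured actions: the agent would only
dispatch to an explicit whitelist and never evaluate payloads coming over the
wire. A sketch (the action names are just examples):

    def restart_service(name):
        # placeholder for a real, preconfigured implementation
        return {'restarted': name}

    def get_status():
        return {'status': 'ok'}

    # Registered at install time by the consuming project (Savanna/Trove).
    ACTIONS = {
        'restart_service': restart_service,
        'get_status': get_status,
    }

    def dispatch(request):
        action = ACTIONS.get(request.get('action'))
        if action is None:
            return {'error': 'unknown action'}
        return action(**request.get('args', {}))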



   From: Steven Dake sd...@redhat.com
 Reply-To: OpenStack Dev openstack-dev@lists.openstack.org
 Date: Monday, December 9, 2013 at 11:41 AM
 To: OpenStack Dev openstack-dev@lists.openstack.org

 Subject: Re: [openstack-dev] Unified Guest Agent proposal

  In terms of features:
 * run shell commands
 * install files (with selinux properties as well)
 * create users and groups (with selinux properties as well)
 * install packages via yum, apt-get, rpm, pypi
 * start and enable system services for systemd or sysvinit
 * Install and unpack source tarballs
 * run scripts
 * Allow grouping, selection, and ordering of all of the above operations

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-10 Thread Dmitry Mescheryakov
And one more thing,

Sandy Walsh pointed to the client Rackspace developed and uses - [1], [2].
Its design is somewhat different and can be expressed by the following
formula:

App -> Host (XenStore) -> Guest Agent

(taken from the wiki [3])

It has an obvious disadvantage - it is hypervisor dependent and currently
implemented for Xen only. On the other hand, such a design should not have the
shared-facility vulnerability, as the Agent accesses the server not directly but
via XenStore (which AFAIU is compute node based).

Thanks,

Dmitry


[1] https://github.com/rackerlabs/openstack-guest-agents-unix
[2] https://github.com/rackerlabs/openstack-guest-agents-windows-xenserver
[3] https://wiki.openstack.org/wiki/GuestAgent


2013/12/10 Dmitry Mescheryakov dmescherya...@mirantis.com

 Guys,

 I see two major trends in the thread:

  * use Salt
  * write our own solution with architecture similar to Salt or MCollective

 There were points raised pro and contra both solutions. But I have a
 concern which I believe was not covered yet. Both solutions use either
 ZeroMQ or message queues (AMQP/STOMP) as a transport. The thing is there is
 going to be a shared facility between all the tenants. And unlike all other
 OpenStack services, this facility will be directly accessible from VMs,
 which leaves tenants very vulnerable to each other. Harm the facility from
 your VM, and the whole Region/Cell/Availability Zone will be left out of
 service.

 Do you think that is solvable, or maybe I overestimate the threat?

 Thanks,

 Dmitry




 2013/12/9 Dmitry Mescheryakov dmescherya...@mirantis.com




 2013/12/9 Kurt Griffiths kurt.griffi...@rackspace.com

  This list of features makes me *very* nervous from a security
 standpoint. Are we talking about giving an agent an arbitrary shell command
 or file to install, and it goes and does that, or are we simply triggering
 a preconfigured action (at the time the agent itself was installed)?


 I believe the agent must execute only a set of preconfigured actions
 exactly due to security reasons. It should be up to the using project
 (Savanna/Trove) to decide which actions must be exposed by the agent.



   From: Steven Dake sd...@redhat.com
 Reply-To: OpenStack Dev openstack-dev@lists.openstack.org
 Date: Monday, December 9, 2013 at 11:41 AM
 To: OpenStack Dev openstack-dev@lists.openstack.org

 Subject: Re: [openstack-dev] Unified Guest Agent proposal

  In terms of features:
 * run shell commands
 * install files (with selinux properties as well)
 * create users and groups (with selinux properties as well)
 * install packages via yum, apt-get, rpm, pypi
 * start and enable system services for systemd or sysvinit
 * Install and unpack source tarballs
 * run scripts
 * Allow grouping, selection, and ordering of all of the above operations

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-10 Thread Clint Byrum
Excerpts from Dmitry Mescheryakov's message of 2013-12-10 08:15:15 -0800:
 Guys,
 
 I see two major trends in the thread:
 
  * use Salt
  * write our own solution with architecture similar to Salt or MCollective
 
 There were points raised pro and contra both solutions. But I have a
 concern which I believe was not covered yet. Both solutions use either
 ZeroMQ or message queues (AMQP/STOMP) as a transport. The thing is there is
 going to be a shared facility between all the tenants. And unlike all other
 OpenStack services, this facility will be directly accessible from VMs,
 which leaves tenants very vulnerable to each other. Harm the facility from
 your VM, and the whole Region/Cell/Availability Zone will be left out of
 service.
 
 Do you think that is solvable, or maybe I overestimate the threat?
 

I think Salt would be thrilled if we tested and improved its resiliency
to abuse. We're going to have to do that with whatever we expose to VMs.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-10 Thread Clint Byrum
Excerpts from Dmitry Mescheryakov's message of 2013-12-10 08:25:26 -0800:
 And one more thing,
 
 Sandy Walsh pointed to the client Rackspace developed and use - [1], [2].
 Its design is somewhat different and can be expressed by the following
 formulae:
 
 App - Host (XenStore) - Guest Agent
 
 (taken from the wiki [3])
 
 It has an obvious disadvantage - it is hypervisor dependent and currently
 implemented for Xen only. On the other hand such design should not have
 shared facility vulnerability as Agent accesses the server not directly but
 via XenStore (which AFAIU is compute node based).
 

I don't actually see any advantage to this approach. It seems to me that
it would be simpler to expose and manage a single network protocol than
it would be to expose hypervisor level communications for all hypervisors.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-10 Thread Dmitry Mescheryakov
2013/12/10 Clint Byrum cl...@fewbar.com

 Excerpts from Dmitry Mescheryakov's message of 2013-12-10 08:25:26 -0800:
  And one more thing,
 
  Sandy Walsh pointed to the client Rackspace developed and use - [1], [2].
  Its design is somewhat different and can be expressed by the following
  formulae:
 
  App - Host (XenStore) - Guest Agent
 
  (taken from the wiki [3])
 
  It has an obvious disadvantage - it is hypervisor dependent and currently
  implemented for Xen only. On the other hand such design should not have
  shared facility vulnerability as Agent accesses the server not directly
 but
  via XenStore (which AFAIU is compute node based).
 

 I don't actually see any advantage to this approach. It seems to me that
 it would be simpler to expose and manage a single network protocol than
 it would be to expose hypervisor level communications for all hypervisors.


I think the Rackspace agent design could be expanded as follows:

Controller (Savanna/Trove) -> AMQP/ZeroMQ -> Agent on Compute host ->
XenStore -> Guest Agent

That is somewhat speculative, because if I understood it correctly the
open-sourced code covers only the second part of the exchange:

Python API / CMD interface -> XenStore -> Guest Agent

Assuming I got it right:
While more complex, such a design removes pressure from the AMQP/ZeroMQ
providers: on the 'Agent on Compute' you can easily control the number of
messages emitted by the guest with throttling. That is easy since such an agent
runs on a compute host. In the worst case, if it happens to be abused by a
guest, it affects this compute host only and not a whole segment of
OpenStack.
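
A trivial form of that throttling on the compute-host agent could be a sliding
window per instance, something like this (sketch only, the numbers are
arbitrary):

    import time
    from collections import defaultdict

    class GuestThrottle(object):
        """Allow at most `limit` messages per guest per `period` seconds."""

        def __init__(self, limit=10, period=60.0):
            self.limit = limit
            self.period = period
            self.history = defaultdict(list)

        def allow(self, instance_id):
            now = time.time()
            window = [t for t in self.history[instance_id]
                      if now - t < self.period]
            if len(window) >= self.limit:
                self.history[instance_id] = window
                return False  # drop or delay this guest's message
            window.append(now)
            self.history[instance_id] = window
            return True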



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-10 Thread Joe Gordon
On Dec 10, 2013 7:00 PM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Dmitry Mescheryakov's message of 2013-12-10 08:15:15 -0800:
  Guys,
 
  I see two major trends in the thread:
 
   * use Salt
   * write our own solution with architecture similar to Salt or
MCollective
 
  There were points raised pro and contra both solutions. But I have a
  concern which I believe was not covered yet. Both solutions use either
  ZeroMQ or message queues (AMQP/STOMP) as a transport. The thing is
there is
  going to be a shared facility between all the tenants. And unlike all
other
  OpenStack services, this facility will be directly accessible from VMs,
  which leaves tenants very vulnerable to each other. Harm the facility
from
  your VM, and the whole Region/Cell/Availability Zone will be left out of
  service.
 
  Do you think that is solvable, or maybe I overestimate the threat?
 

 I think Salt would be thrilled if we tested and improved its resiliency
 to abuse. We're going to have to do that with whatever we expose to VMs.

+1 to not reinventing the wheel, and using a friendly ecosystem tool that
we can improve as needed.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-10 Thread Clint Byrum
Excerpts from Dmitry Mescheryakov's message of 2013-12-10 11:08:58 -0800:
 2013/12/10 Clint Byrum cl...@fewbar.com
 
  Excerpts from Dmitry Mescheryakov's message of 2013-12-10 08:25:26 -0800:
   And one more thing,
  
   Sandy Walsh pointed to the client Rackspace developed and use - [1], [2].
   Its design is somewhat different and can be expressed by the following
   formulae:
  
   App - Host (XenStore) - Guest Agent
  
   (taken from the wiki [3])
  
   It has an obvious disadvantage - it is hypervisor dependent and currently
   implemented for Xen only. On the other hand such design should not have
   shared facility vulnerability as Agent accesses the server not directly
  but
   via XenStore (which AFAIU is compute node based).
  
 
  I don't actually see any advantage to this approach. It seems to me that
  it would be simpler to expose and manage a single network protocol than
  it would be to expose hypervisor level communications for all hypervisors.
 
 
 I think the Rackspace agent design could be expanded as follows:
 
 Controller (Savanna/Trove) - AMQP/ZeroMQ - Agent on Compute host -
 XenStore - Guest Agent
 
 That is somewhat speculative because if I understood it correctly the
 opened code covers only the second part of exchange:
 
 Python API / CMD interface - XenStore - Guest Agent
 
 Assuming I got it right:
 While more complex, such design removes pressure from AMQP/ZeroMQ
 providers: on the 'Agent on Compute' you can easily control the amount of
 messages emitted by Guest with throttling. It is easy since such agent runs
 on a compute host. In the worst case, if it is happened to be abused by a
 guest, it affect this compute host only and not the whole segment of
 OpenStack.
 

This still requires that we also write a backend to talk to the host
for all virt drivers. It also means that any OS we haven't written an
implementation for needs to be hypervisor-aware. That sounds like a
never ending battle.

If it is just a network API, it works the same for everybody. This
makes it simpler, and thus easier to scale out independently of compute
hosts. It is also something we already support and can very easily expand
by just adding a tiny bit of functionality to neutron-metadata-agent.

In fact we can even push routes via DHCP to send agent traffic through
a different neutron-metadata-agent, so I don't see any issue where we
are piling anything on top of an overstressed single resource. We can
have neutron route this traffic directly to the Heat API which hosts it,
and that can be load balanced and etc. etc. What is the exact scenario
you're trying to avoid?
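
(For the record, pushing such a route is already just a subnet attribute: host
routes set on the subnet are handed out to guests via DHCP. A rough sketch with
python-neutronclient - credentials, subnet id and the nexthop address are
illustrative:)

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://keystone.example.com:5000/v2.0')

    SUBNET_ID = 'SUBNET-UUID'  # illustrative
    # Steer metadata/agent traffic at a chosen next hop on this subnet.
    neutron.update_subnet(SUBNET_ID, {'subnet': {'host_routes': [
        {'destination': '169.254.169.254/32', 'nexthop': '10.0.0.3'}]}})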

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-10 Thread Dmitry Mescheryakov
 What is the exact scenario you're trying to avoid?

It is a DDoS attack on either the transport (AMQP / ZeroMQ provider) or the
server (Salt / our own self-written server). Looking at the design, it doesn't
look like the attack could be somehow contained within the tenant it is
coming from.

In the current OpenStack design I see only one similarly vulnerable
component - metadata server. Keeping that in mind, maybe I just
overestimate the threat?


2013/12/10 Clint Byrum cl...@fewbar.com

 Excerpts from Dmitry Mescheryakov's message of 2013-12-10 11:08:58 -0800:
  2013/12/10 Clint Byrum cl...@fewbar.com
 
   Excerpts from Dmitry Mescheryakov's message of 2013-12-10 08:25:26
 -0800:
And one more thing,
   
Sandy Walsh pointed to the client Rackspace developed and use - [1],
 [2].
Its design is somewhat different and can be expressed by the
 following
formulae:
   
App - Host (XenStore) - Guest Agent
   
(taken from the wiki [3])
   
It has an obvious disadvantage - it is hypervisor dependent and
 currently
implemented for Xen only. On the other hand such design should not
 have
shared facility vulnerability as Agent accesses the server not
 directly
   but
via XenStore (which AFAIU is compute node based).
   
  
   I don't actually see any advantage to this approach. It seems to me
 that
   it would be simpler to expose and manage a single network protocol than
   it would be to expose hypervisor level communications for all
 hypervisors.
  
 
  I think the Rackspace agent design could be expanded as follows:
 
  Controller (Savanna/Trove) - AMQP/ZeroMQ - Agent on Compute host -
  XenStore - Guest Agent
 
  That is somewhat speculative because if I understood it correctly the
  opened code covers only the second part of exchange:
 
  Python API / CMD interface - XenStore - Guest Agent
 
  Assuming I got it right:
  While more complex, such design removes pressure from AMQP/ZeroMQ
  providers: on the 'Agent on Compute' you can easily control the amount of
  messages emitted by Guest with throttling. It is easy since such agent
 runs
  on a compute host. In the worst case, if it is happened to be abused by a
  guest, it affect this compute host only and not the whole segment of
  OpenStack.
 

 This still requires that we also write a backend to talk to the host
 for all virt drivers. It also means that any OS we haven't written an
 implementation for needs to be hypervisor-aware. That sounds like a
 never ending battle.

 If it is just a network API, it works the same for everybody. This
 makes it simpler, and thus easier to scale out independently of compute
 hosts. It is also something we already support and can very easily expand
 by just adding a tiny bit of functionality to neutron-metadata-agent.

 In fact we can even push routes via DHCP to send agent traffic through
 a different neutron-metadata-agent, so I don't see any issue where we
 are piling anything on top of an overstressed single resource. We can
 have neutron route this traffic directly to the Heat API which hosts it,
 and that can be load balanced and etc. etc. What is the exact scenario
 you're trying to avoid?

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-10 Thread Ian Wells
On 10 December 2013 20:55, Clint Byrum cl...@fewbar.com wrote:

 If it is just a network API, it works the same for everybody. This
 makes it simpler, and thus easier to scale out independently of compute
 hosts. It is also something we already support and can very easily expand
 by just adding a tiny bit of functionality to neutron-metadata-agent.

 In fact we can even push routes via DHCP to send agent traffic through
 a different neutron-metadata-agent, so I don't see any issue where we
 are piling anything on top of an overstressed single resource. We can
 have neutron route this traffic directly to the Heat API which hosts it,
 and that can be load balanced and etc. etc. What is the exact scenario
 you're trying to avoid?


You may be making even this harder than it needs to be.  You can create
multiple networks and attach machines to multiple networks.  Every point so
far has been 'why don't we use <idea> as a backdoor into our VM without
affecting the VM in any other way' - why can't that just be one more
network interface set aside for whatever management instructions are
appropriate?  And then what needs pushing into Neutron is nothing more
complex than strong port firewalling to prevent the slaves/minions talking
to each other.  If you absolutely must make the communication come from a
system agent and go to a VM, then that can be done by attaching the system
agent to the administrative network - from within the system agent, which
is the thing that needs this, rather than within Neutron, which doesn't
really care how you use its networks.  I prefer solutions where other tools
don't have to make you a special case.
-- 
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-10 Thread Clint Byrum
Excerpts from Dmitry Mescheryakov's message of 2013-12-10 12:37:37 -0800:
  What is the exact scenario you're trying to avoid?
 
 It is DDoS attack on either transport (AMQP / ZeroMQ provider) or server
 (Salt / Our own self-written server). Looking at the design, it doesn't
 look like the attack could be somehow contained within a tenant it is
 coming from.
 

We can push a tenant-specific route for the metadata server, and a tenant
specific endpoint for in-agent things. Still simpler than hypervisor-aware
guests. I haven't seen anybody ask for this yet, though I'm sure if they
run into these problems it will be the next logical step.

 In the current OpenStack design I see only one similarly vulnerable
 component - metadata server. Keeping that in mind, maybe I just
 overestimate the threat?
 

Anything you expose to the users is vulnerable. By using the localized
hypervisor scheme you're now making the compute node itself vulnerable.
Only now you're asking that an already complicated thing (nova-compute)
add another job, rate limiting.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-10 Thread Vladik Romanovsky

Maybe it would be useful to use the oVirt guest agent as a base.

http://www.ovirt.org/Guest_Agent
https://github.com/oVirt/ovirt-guest-agent

It is already working well on Linux and Windows and has a lot of functionality.
Currently it uses virtio-serial for communication, but I think it
can be extended for other bindings.

Vladik

- Original Message -
 From: Clint Byrum cl...@fewbar.com
 To: openstack-dev openstack-dev@lists.openstack.org
 Sent: Tuesday, 10 December, 2013 4:02:41 PM
 Subject: Re: [openstack-dev] Unified Guest Agent proposal
 
 Excerpts from Dmitry Mescheryakov's message of 2013-12-10 12:37:37 -0800:
   What is the exact scenario you're trying to avoid?
  
  It is DDoS attack on either transport (AMQP / ZeroMQ provider) or server
  (Salt / Our own self-written server). Looking at the design, it doesn't
  look like the attack could be somehow contained within a tenant it is
  coming from.
  
 
 We can push a tenant-specific route for the metadata server, and a tenant
 specific endpoint for in-agent things. Still simpler than hypervisor-aware
 guests. I haven't seen anybody ask for this yet, though I'm sure if they
 run into these problems it will be the next logical step.
 
  In the current OpenStack design I see only one similarly vulnerable
  component - metadata server. Keeping that in mind, maybe I just
  overestimate the threat?
  
 
 Anything you expose to the users is vulnerable. By using the localized
 hypervisor scheme you're now making the compute node itself vulnerable.
 Only now you're asking that an already complicated thing (nova-compute)
 add another job, rate limiting.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-10 Thread Fox, Kevin M
Yeah. It's likely that the metadata server stuff will get more scalable/hardened 
over time. If it isn't enough now, let's fix it rather than coming up with a new 
system to work around it.

I like the idea of using the network since all the hypervisors have to support 
network drivers already. They also already have to support talking to the 
metadata server. This keeps OpenStack out of the hypervisor driver business.

Kevin


From: Clint Byrum [cl...@fewbar.com]
Sent: Tuesday, December 10, 2013 1:02 PM
To: openstack-dev
Subject: Re: [openstack-dev] Unified Guest Agent proposal

Excerpts from Dmitry Mescheryakov's message of 2013-12-10 12:37:37 -0800:
  What is the exact scenario you're trying to avoid?

 It is DDoS attack on either transport (AMQP / ZeroMQ provider) or server
 (Salt / Our own self-written server). Looking at the design, it doesn't
 look like the attack could be somehow contained within a tenant it is
 coming from.


We can push a tenant-specific route for the metadata server, and a tenant
specific endpoint for in-agent things. Still simpler than hypervisor-aware
guests. I haven't seen anybody ask for this yet, though I'm sure if they
run into these problems it will be the next logical step.

 In the current OpenStack design I see only one similarly vulnerable
 component - metadata server. Keeping that in mind, maybe I just
 overestimate the threat?


Anything you expose to the users is vulnerable. By using the localized
hypervisor scheme you're now making the compute node itself vulnerable.
Only now you're asking that an already complicated thing (nova-compute)
add another job, rate limiting.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-09 Thread David Boucha
On Sat, Dec 7, 2013 at 11:09 PM, Monty Taylor mord...@inaugust.com wrote:



 On 12/08/2013 07:36 AM, Robert Collins wrote:
  On 8 December 2013 17:23, Monty Taylor mord...@inaugust.com wrote:
 
 
  I suggested salt because we could very easily make trove and savanna into
  salt masters (if we wanted to) just by having them import salt library
  and run an api call. When they spin up nodes using heat, we could easily
  have that do the cert exchange - and the admins of the site need not
  know _anything_ about salt, puppet or chef - only about trove or savanna.
 
  Are salt masters multi-master / HA safe?
 
  E.g. if I've deployed 5 savanna API servers to handle load, and they
  all do this 'just import', does that work?
 
  If not, and we have to have one special one, what happens when it
  fails / is redeployed?

 Yes. You can have multiple salt masters.

  Can salt minions affect each other? Could one pretend to be a master,
  or snoop requests/responses to another minion?

 Yes and no. By default no - and this is protected by key encryption and
 whatnot. They can affect each other if you choose to explicitly grant
them the ability to. That is - you can give a minion an acl to allow it to
 inject specific command requests back up into the master. We use this in
 the infra systems to let a jenkins slave send a signal to our salt
 system to trigger a puppet run. That's all that slave can do though -
 send the signal that the puppet run needs to happen.

 However - I don't think we'd really want to use that in this case, so I
think the answer you're looking for is no.

  Is salt limited: is it possible to assert that we *cannot* run
  arbitrary code over salt?

 In as much as it is possible to assert that about any piece of software
(bugs, of course, blah blah). But the messages that salt sends to a
minion are "run this thing that you have a local definition for" rather
than "here, have some python and run it".

 Monty



Salt was originally designed to be a unified agent for a system like
OpenStack. In fact, many people use it for this purpose right now.

I discussed this with our team management and this is something SaltStack
wants to support.

Are there any specifics things that the salt minion lacks right now to
support this use case?

-- 
Dave Boucha  |  Sr. Engineer

Join us at SaltConf, Jan. 28-30, 2014 in Salt Lake City. www.saltconf.com


5272 South College Drive, Suite 301 | Murray, UT 84123
*office* 801-305-3563
d...@saltstack.com | www.saltstack.com http://saltstack.com/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-09 Thread Steven Dake

On 12/09/2013 09:41 AM, David Boucha wrote:
On Sat, Dec 7, 2013 at 11:09 PM, Monty Taylor mord...@inaugust.com 
mailto:mord...@inaugust.com wrote:




On 12/08/2013 07:36 AM, Robert Collins wrote:
 On 8 December 2013 17:23, Monty Taylor mord...@inaugust.com
mailto:mord...@inaugust.com wrote:


 I suggested salt because we could very easily make trove and
savana into
 salt masters (if we wanted to) just by having them import salt
library
 and run an api call. When they spin up nodes using heat, we
could easily
 have that to the cert exchange - and the admins of the site
need not
 know _anything_ about salt, puppet or chef - only about trove
or savana.

 Are salt masters multi-master / HA safe?

 E.g. if I've deployed 5 savanna API servers to handle load, and they
 all do this 'just import', does that work?

 If not, and we have to have one special one, what happens when it
 fails / is redeployed?

Yes. You can have multiple salt masters.

 Can salt minions affect each other? Could one pretend to be a
master,
 or snoop requests/responses to another minion?

Yes and no. By default no - and this is protected by key
encryption and
whatnot. They can affect each other if you choose to explicitly grant
them the ability to. That is - you can give a minion an acl to
allow it
inject specific command requests back up into the master. We use
this in
the infra systems to let a jenkins slave send a signal to our salt
system to trigger a puppet run. That's all that slave can do though -
send the signal that the puppet run needs to happen.

However - I don't think we'd really want to use that in this case,
so I
think they answer you're looking for is no.

 Is salt limited: is it possible to assert that we *cannot* run
 arbitrary code over salt?

In as much as it is possible to assert that about any piece of
software
(bugs, of course, blah blah) But the messages that salt sends to a
minion are run this thing that you have a local definition for
rather
than here, have some python and run it

Monty



Salt was originally designed to be a unified agent for a system like 
openstack. In fact, many people use it for this purpose right now.


I discussed this with our team management and this is something 
SaltStack wants to support.


Are there any specifics things that the salt minion lacks right now to 
support this use case?




David,

If I am correct in my parsing of the salt nomenclature, Salt provides a 
Master (e.g. a server) and minions (e.g. agents that connect to the salt 
server).  The salt server tells the minions what to do.


This is not desirable for a unified agent (at least in the case of Heat).

The bar is very very very high for introducing new *mandatory* *server* 
dependencies into OpenStack.  Requiring a salt master (or a puppet 
master, etc) in my view is a non-starter for a unified guest agent 
proposal.  Now if a heat user wants to use puppet, and can provide a 
puppet master in their cloud environment, that is fine, as long as it is 
optional.


A guest agent should have the following properties:
* minimal library dependency chain
* no third-party server dependencies
* packaged in relevant cloudy distributions

In terms of features:
* run shell commands
* install files (with selinux properties as well)
* create users and groups (with selinux properties as well)
* install packages via yum, apt-get, rpm, pypi
* start and enable system services for systemd or sysvinit
* Install and unpack source tarballs
* run scripts
* Allow grouping, selection, and ordering of all of the above operations
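
For illustration, that feature set maps fairly naturally onto a small 
declarative document the agent could consume. The sketch below is only in the 
spirit of the cfn-init metadata that heat-cfntools already handles; the exact 
keys and values are illustrative, not a proposed format:

    # Hypothetical "desired state" document for such an agent, written as a
    # Python dict; grouping and ordering would come from nesting and list order.
    guest_config = {
        'packages': {'yum': ['mysql-server'], 'pypi': ['requests']},
        'users': {'hadoop': {'groups': ['hadoop'], 'selinux_user': 'staff_u'}},
        'files': {'/etc/my.cnf': {'content': '[mysqld]\n', 'mode': '0644'}},
        'sources': {'/opt/app': 'http://example.com/app.tar.gz'},
        'services': {'sysvinit': {'mysqld': {'enabled': True,
                                             'ensureRunning': True}}},
        'commands': {'01_init_db': {'command': 'mysql_install_db'}},
    }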

Agents are a huge pain to maintain and package.  It took a huge amount 
of willpower to get cloud-init standardized across the various 
distributions.  We have managed to get heat-cfntools (the heat agent) 
into every distribution at this point and this was a significant amount 
of work.  We don't want to keep repeating this process for each 
OpenStack project!


Regards,
-steve



--
Dave Boucha  |  Sr. Engineer

Join us at SaltConf, Jan. 28-30, 2014 in Salt Lake City. 
www.saltconf.com http://www.saltconf.com/



5272 South College Drive, Suite 301 | Murray, UT 84123
*office*801-305-3563
d...@saltstack.com mailto:d...@saltstack.com | www.saltstack.com 
http://saltstack.com/



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-09 Thread Clint Byrum
Excerpts from Steven Dake's message of 2013-12-09 09:41:06 -0800:
 On 12/09/2013 09:41 AM, David Boucha wrote:
  On Sat, Dec 7, 2013 at 11:09 PM, Monty Taylor mord...@inaugust.com 
  mailto:mord...@inaugust.com wrote:
 
 
 
  On 12/08/2013 07:36 AM, Robert Collins wrote:
   On 8 December 2013 17:23, Monty Taylor mord...@inaugust.com
  mailto:mord...@inaugust.com wrote:
  
  
   I suggested salt because we could very easily make trove and
  savana into
   salt masters (if we wanted to) just by having them import salt
  library
   and run an api call. When they spin up nodes using heat, we
  could easily
   have that to the cert exchange - and the admins of the site
  need not
   know _anything_ about salt, puppet or chef - only about trove
  or savana.
  
   Are salt masters multi-master / HA safe?
  
   E.g. if I've deployed 5 savanna API servers to handle load, and they
   all do this 'just import', does that work?
  
   If not, and we have to have one special one, what happens when it
   fails / is redeployed?
 
  Yes. You can have multiple salt masters.
 
   Can salt minions affect each other? Could one pretend to be a
  master,
   or snoop requests/responses to another minion?
 
  Yes and no. By default no - and this is protected by key
  encryption and
  whatnot. They can affect each other if you choose to explicitly grant
  them the ability to. That is - you can give a minion an acl to
  allow it
  inject specific command requests back up into the master. We use
  this in
  the infra systems to let a jenkins slave send a signal to our salt
  system to trigger a puppet run. That's all that slave can do though -
  send the signal that the puppet run needs to happen.
 
  However - I don't think we'd really want to use that in this case,
  so I
  think they answer you're looking for is no.
 
   Is salt limited: is it possible to assert that we *cannot* run
   arbitrary code over salt?
 
  In as much as it is possible to assert that about any piece of
  software
  (bugs, of course, blah blah) But the messages that salt sends to a
  minion are run this thing that you have a local definition for
  rather
  than here, have some python and run it
 
  Monty
 
 
 
  Salt was originally designed to be a unified agent for a system like 
  openstack. In fact, many people use it for this purpose right now.
 
  I discussed this with our team management and this is something 
  SaltStack wants to support.
 
  Are there any specifics things that the salt minion lacks right now to 
  support this use case?
 
 
 David,
 
 If I am correct of my parsing of the salt nomenclature, Salt provides a 
 Master (eg a server) and minions (eg agents that connect to the salt 
 server).  The salt server tells the minions what to do.
 
 This is not desirable for a unified agent (atleast in the case of Heat).
 
 The bar is very very very high for introducing new *mandatory* *server* 
 dependencies into OpenStack.  Requiring a salt master (or a puppet 
 master, etc) in my view is a non-starter for a unified guest agent 
 proposal.  Now if a heat user wants to use puppet, and can provide a 
 puppet master in their cloud environment, that is fine, as long as it is 
 optional.
 

What if we taught Heat to speak salt-master-ese? AFAIK it is basically
an RPC system. I think right now it is 0mq, so it would be relatively
straight forward to just have Heat start talking to the agents in 0mq.

 A guest agent should have the following properties:
 * minimal library dependency chain
 * no third-party server dependencies
 * packaged in relevant cloudy distributions
 

That last one only matters if the distributions won't add things like
agents to their images post-release. I am pretty sure working well in
OpenStack is important for server distributions, and thus this is at
least something we don't have to freak out about too much.

 In terms of features:
 * run shell commands
 * install files (with selinux properties as well)
 * create users and groups (with selinux properties as well)
 * install packages via yum, apt-get, rpm, pypi
 * start and enable system services for systemd or sysvinit
 * Install and unpack source tarballs
 * run scripts
 * Allow grouping, selection, and ordering of all of the above operations
 

All of those things are general purpose low level system configuration
features. None of them will be needed for Trove or Savanna. They need
to do higher level things like run a Hadoop job or create a MySQL user.

 Agents are a huge pain to maintain and package.  It took a huge amount 
 of willpower to get cloud-init standardized across the various 
 distributions.  We have managed to get heat-cfntools (the heat agent) 
 into every distribution at this point and this was a significant amount 
 of work.  We don't want to keep repeating this 

Re: [openstack-dev] Unified Guest Agent proposal

2013-12-09 Thread Pitucha, Stanislaw Izaak
 If I am correct of my parsing of the salt nomenclature, Salt provides a
Master (eg a server) and minions (eg agents that connect to the salt
server).  The salt server tells the minions what to do.

Almost - salt can use a master, but it can also use the local filesystem (or
other providers of data). For the basic scenarios, the salt master behaves
almost like a file server for the state files that describe what to do. When
using only the local filesystem, you can run without a master.

 In terms of features:

Not sure about the properties (it is pretty minimal in terms of dependencies
on ubuntu/debian at least), but it can do most of the things from the
feature list. SELinux labels are missing
(https://github.com/saltstack/salt/issues/1349) and unpacking/installing
source tarballs doesn't fit that well with a declarative description style IMO
(but is definitely possible).

Regards,
Stanisław Pitucha
Cloud Services 
Hewlett Packard



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-09 Thread Kurt Griffiths
This list of features makes me very nervous from a security standpoint. Are we 
talking about giving an agent an arbitrary shell command or file to install, 
and it goes and does that, or are we simply triggering a preconfigured action 
(at the time the agent itself was installed)?

From: Steven Dake sd...@redhat.commailto:sd...@redhat.com
Reply-To: OpenStack Dev 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Monday, December 9, 2013 at 11:41 AM
To: OpenStack Dev 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Unified Guest Agent proposal

In terms of features:
* run shell commands
* install files (with selinux properties as well)
* create users and groups (with selinux properties as well)
* install packages via yum, apt-get, rpm, pypi
* start and enable system services for systemd or sysvinit
* Install and unpack source tarballs
* run scripts
* Allow grouping, selection, and ordering of all of the above operations
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-09 Thread David Boucha
On Mon, Dec 9, 2013 at 10:41 AM, Steven Dake sd...@redhat.com wrote:

  On 12/09/2013 09:41 AM, David Boucha wrote:

  On Sat, Dec 7, 2013 at 11:09 PM, Monty Taylor mord...@inaugust.comwrote:



 On 12/08/2013 07:36 AM, Robert Collins wrote:
  On 8 December 2013 17:23, Monty Taylor mord...@inaugust.com wrote:
 
 
  I suggested salt because we could very easily make trove and savana
 into
  salt masters (if we wanted to) just by having them import salt library
  and run an api call. When they spin up nodes using heat, we could
 easily
  have that to the cert exchange - and the admins of the site need not
  know _anything_ about salt, puppet or chef - only about trove or
 savana.
 
  Are salt masters multi-master / HA safe?
 
  E.g. if I've deployed 5 savanna API servers to handle load, and they
  all do this 'just import', does that work?
 
  If not, and we have to have one special one, what happens when it
  fails / is redeployed?

  Yes. You can have multiple salt masters.

  Can salt minions affect each other? Could one pretend to be a master,
  or snoop requests/responses to another minion?

  Yes and no. By default no - and this is protected by key encryption and
 whatnot. They can affect each other if you choose to explicitly grant
 them the ability to. That is - you can give a minion an acl to allow it
 inject specific command requests back up into the master. We use this in
 the infra systems to let a jenkins slave send a signal to our salt
 system to trigger a puppet run. That's all that slave can do though -
 send the signal that the puppet run needs to happen.

 However - I don't think we'd really want to use that in this case, so I
 think they answer you're looking for is no.

  Is salt limited: is it possible to assert that we *cannot* run
  arbitrary code over salt?

  In as much as it is possible to assert that about any piece of software
 (bugs, of course, blah blah) But the messages that salt sends to a
 minion are run this thing that you have a local definition for rather
 than here, have some python and run it

 Monty



  Salt was originally designed to be a unified agent for a system like
 openstack. In fact, many people use it for this purpose right now.

  I discussed this with our team management and this is something
 SaltStack wants to support.

  Are there any specifics things that the salt minion lacks right now to
 support this use case?


 David,

 If I am correct of my parsing of the salt nomenclature, Salt provides a
 Master (eg a server) and minions (eg agents that connect to the salt
 server).  The salt server tells the minions what to do.


That is the default setup.  The salt-minion can also run in standalone mode
without a master.
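
(For reference, the masterless mode is the `salt-call --local` path. A rough
Python sketch of driving it in-process follows; the exact entry point has moved
around between Salt releases, so treat the method name as an assumption:)

    # Masterless sketch: run a salt execution-module function on the local box
    # without any salt-master involved. Caller reads /etc/salt/minion by default.
    import salt.client

    caller = salt.client.Caller()
    result = caller.cmd('test.ping')  # older releases expose this as Caller.function()
    print(result)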


 This is not desirable for a unified agent (atleast in the case of Heat).

 The bar is very very very high for introducing new *mandatory* *server*
 dependencies into OpenStack.  Requiring a salt master (or a puppet master,
 etc) in my view is a non-starter for a unified guest agent proposal.  Now
 if a heat user wants to use puppet, and can provide a puppet master in
 their cloud environment, that is fine, as long as it is optional.

 A guest agent should have the following properties:
 * minimal library dependency chain


Salt only has a few dependencies

 * no third-party server dependencies


As mentioned above, the salt-minion can run without a salt master in
standalone mode

 * packaged in relevant cloudy distributions


The Salt Minion is packaged for all major (and many smaller) distributions.
RHEL/EPEL/Debian/Ubuntu/Gentoo/FreeBSD/Arch/MacOS
There is also a Windows installer.


 In terms of features:
 * run shell commands
 * install files (with selinux properties as well)
 * create users and groups (with selinux properties as well)
 * install packages via yum, apt-get, rpm, pypi
 * start and enable system services for systemd or sysvinit
 * Install and unpack source tarballs
 * run scripts
 * Allow grouping, selection, and ordering of all of the above operations


Salt-Minion excels at all the above



 Agents are a huge pain to maintain and package.  It took a huge amount of
 willpower to get cloud-init standardized across the various distributions.
 We have managed to get heat-cfntools (the heat agent) into every
 distribution at this point and this was a significant amount of work.  We
 don't want to keep repeating this process for each OpenStack project!


I agree. It's a lot of work. The SaltStack organization has already done
the work to package for all these distributions and maintains the packages.



 Regards,
 -steve




Regards,

Dave
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-09 Thread David Boucha
On Mon, Dec 9, 2013 at 11:19 AM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Steven Dake's message of 2013-12-09 09:41:06 -0800:
  On 12/09/2013 09:41 AM, David Boucha wrote:
   On Sat, Dec 7, 2013 at 11:09 PM, Monty Taylor mord...@inaugust.com
   mailto:mord...@inaugust.com wrote:
  
  
  
   On 12/08/2013 07:36 AM, Robert Collins wrote:
On 8 December 2013 17:23, Monty Taylor mord...@inaugust.com
   mailto:mord...@inaugust.com wrote:
   
   
I suggested salt because we could very easily make trove and
   savana into
salt masters (if we wanted to) just by having them import salt
   library
and run an api call. When they spin up nodes using heat, we
   could easily
have that to the cert exchange - and the admins of the site
   need not
know _anything_ about salt, puppet or chef - only about trove
   or savana.
   
Are salt masters multi-master / HA safe?
   
E.g. if I've deployed 5 savanna API servers to handle load, and
 they
all do this 'just import', does that work?
   
If not, and we have to have one special one, what happens when it
fails / is redeployed?
  
   Yes. You can have multiple salt masters.
  
Can salt minions affect each other? Could one pretend to be a
   master,
or snoop requests/responses to another minion?
  
   Yes and no. By default no - and this is protected by key
   encryption and
   whatnot. They can affect each other if you choose to explicitly
 grant
   them the ability to. That is - you can give a minion an acl to
   allow it
   inject specific command requests back up into the master. We use
   this in
   the infra systems to let a jenkins slave send a signal to our salt
   system to trigger a puppet run. That's all that slave can do
 though -
   send the signal that the puppet run needs to happen.
  
   However - I don't think we'd really want to use that in this case,
   so I
   think they answer you're looking for is no.
  
Is salt limited: is it possible to assert that we *cannot* run
arbitrary code over salt?
  
   In as much as it is possible to assert that about any piece of
   software
   (bugs, of course, blah blah) But the messages that salt sends to a
   minion are run this thing that you have a local definition for
   rather
   than here, have some python and run it
  
   Monty
  
  
  
   Salt was originally designed to be a unified agent for a system like
   openstack. In fact, many people use it for this purpose right now.
  
   I discussed this with our team management and this is something
   SaltStack wants to support.
  
   Are there any specifics things that the salt minion lacks right now to
   support this use case?
  
 
  David,
 
  If I am correct of my parsing of the salt nomenclature, Salt provides a
  Master (eg a server) and minions (eg agents that connect to the salt
  server).  The salt server tells the minions what to do.
 
  This is not desirable for a unified agent (atleast in the case of Heat).
 
  The bar is very very very high for introducing new *mandatory* *server*
  dependencies into OpenStack.  Requiring a salt master (or a puppet
  master, etc) in my view is a non-starter for a unified guest agent
  proposal.  Now if a heat user wants to use puppet, and can provide a
  puppet master in their cloud environment, that is fine, as long as it is
  optional.
 

 What if we taught Heat to speak salt-master-ese? AFAIK it is basically
 an RPC system. I think right now it is 0mq, so it would be relatively
 straight forward to just have Heat start talking to the agents in 0mq.

  A guest agent should have the following properties:
  * minimal library dependency chain
  * no third-party server dependencies
  * packaged in relevant cloudy distributions
 

 That last one only matters if the distributions won't add things like
 agents to their images post-release. I am pretty sure work well in
 OpenStack is important for server distributions and thus this is at
 least something we don't have to freak out about too much.

  In terms of features:
  * run shell commands
  * install files (with selinux properties as well)
  * create users and groups (with selinux properties as well)
  * install packages via yum, apt-get, rpm, pypi
  * start and enable system services for systemd or sysvinit
  * Install and unpack source tarballs
  * run scripts
  * Allow grouping, selection, and ordering of all of the above operations
 

 All of those things are general purpose low level system configuration
 features. None of them will be needed for Trove or Savanna. They need
 to do higher level things like run a Hadoop job or create a MySQL user.

  Agents are a huge pain to maintain and package.  It took a huge amount
  of willpower to get cloud-init standardized across the various
  distributions.  We have 

Re: [openstack-dev] Unified Guest Agent proposal

2013-12-09 Thread Dmitry Mescheryakov
2013/12/9 Clint Byrum cl...@fewbar.com

 Excerpts from Steven Dake's message of 2013-12-09 09:41:06 -0800:
  On 12/09/2013 09:41 AM, David Boucha wrote:
   On Sat, Dec 7, 2013 at 11:09 PM, Monty Taylor mord...@inaugust.com
   mailto:mord...@inaugust.com wrote:
  
  
  
   On 12/08/2013 07:36 AM, Robert Collins wrote:
On 8 December 2013 17:23, Monty Taylor mord...@inaugust.com
   mailto:mord...@inaugust.com wrote:
   
   
I suggested salt because we could very easily make trove and
   savana into
salt masters (if we wanted to) just by having them import salt
   library
and run an api call. When they spin up nodes using heat, we
   could easily
have that to the cert exchange - and the admins of the site
   need not
know _anything_ about salt, puppet or chef - only about trove
   or savana.
   
Are salt masters multi-master / HA safe?
   
E.g. if I've deployed 5 savanna API servers to handle load, and
 they
all do this 'just import', does that work?
   
If not, and we have to have one special one, what happens when it
fails / is redeployed?
  
   Yes. You can have multiple salt masters.
  
Can salt minions affect each other? Could one pretend to be a
   master,
or snoop requests/responses to another minion?
  
   Yes and no. By default no - and this is protected by key
   encryption and
   whatnot. They can affect each other if you choose to explicitly
 grant
   them the ability to. That is - you can give a minion an acl to
   allow it
   inject specific command requests back up into the master. We use
   this in
   the infra systems to let a jenkins slave send a signal to our salt
   system to trigger a puppet run. That's all that slave can do
 though -
   send the signal that the puppet run needs to happen.
  
   However - I don't think we'd really want to use that in this case,
   so I
   think they answer you're looking for is no.
  
Is salt limited: is it possible to assert that we *cannot* run
arbitrary code over salt?
  
   In as much as it is possible to assert that about any piece of
   software
   (bugs, of course, blah blah) But the messages that salt sends to a
   minion are run this thing that you have a local definition for
   rather
   than here, have some python and run it
  
   Monty
  
  
  
   Salt was originally designed to be a unified agent for a system like
   openstack. In fact, many people use it for this purpose right now.
  
   I discussed this with our team management and this is something
   SaltStack wants to support.
  
   Are there any specifics things that the salt minion lacks right now to
   support this use case?
  
 
  David,
 
  If I am correct of my parsing of the salt nomenclature, Salt provides a
  Master (eg a server) and minions (eg agents that connect to the salt
  server).  The salt server tells the minions what to do.
 
  This is not desirable for a unified agent (atleast in the case of Heat).
 
  The bar is very very very high for introducing new *mandatory* *server*
  dependencies into OpenStack.  Requiring a salt master (or a puppet
  master, etc) in my view is a non-starter for a unified guest agent
  proposal.  Now if a heat user wants to use puppet, and can provide a
  puppet master in their cloud environment, that is fine, as long as it is
  optional.
 

 What if we taught Heat to speak salt-master-ese? AFAIK it is basically
 an RPC system. I think right now it is 0mq, so it would be relatively
 straight forward to just have Heat start talking to the agents in 0mq.

  A guest agent should have the following properties:
  * minimal library dependency chain
  * no third-party server dependencies
  * packaged in relevant cloudy distributions
 

 That last one only matters if the distributions won't add things like
 agents to their images post-release. I am pretty sure work well in
 OpenStack is important for server distributions and thus this is at
 least something we don't have to freak out about too much.

  In terms of features:
  * run shell commands
  * install files (with selinux properties as well)
  * create users and groups (with selinux properties as well)
  * install packages via yum, apt-get, rpm, pypi
  * start and enable system services for systemd or sysvinit
  * Install and unpack source tarballs
  * run scripts
  * Allow grouping, selection, and ordering of all of the above operations
 

 All of those things are general purpose low level system configuration
 features. None of them will be needed for Trove or Savanna. They need
 to do higher level things like run a Hadoop job or create a MySQL user.


I agree with Clint on this one, Savanna does need high level domain-specific
operations. We could do anything with just a root shell. But security-wise,
as it was already mentioned in the 

Re: [openstack-dev] Unified Guest Agent proposal

2013-12-09 Thread Dmitry Mescheryakov
2013/12/9 Kurt Griffiths kurt.griffi...@rackspace.com

  This list of features makes me *very* nervous from a security
 standpoint. Are we talking about giving an agent an arbitrary shell command
 or file to install, and it goes and does that, or are we simply triggering
 a preconfigured action (at the time the agent itself was installed)?


I believe the agent must execute only a set of preconfigured actions,
precisely for security reasons. It should be up to the consuming project
(Savanna/Trove) to decide which actions must be exposed by the agent.
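
To make that concrete, here is a minimal sketch of the "preconfigured actions
only" idea. Everything in it (the decorator, the action name) is invented for
illustration, not a proposed API:

    # The consuming project (Savanna/Trove) registers the handlers it is willing
    # to expose; anything else coming over the wire is rejected outright.
    import subprocess

    ACTIONS = {}

    def expose(name):
        def register(func):
            ACTIONS[name] = func
            return func
        return register

    @expose('restart_mysql')
    def restart_mysql():
        return subprocess.call(['service', 'mysql', 'restart'])

    def dispatch(message):
        action = message.get('action')
        if action not in ACTIONS:
            raise ValueError('action %r is not exposed by this agent' % (action,))
        return ACTIONS[action](**message.get('params', {}))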



   From: Steven Dake sd...@redhat.com
 Reply-To: OpenStack Dev openstack-dev@lists.openstack.org
 Date: Monday, December 9, 2013 at 11:41 AM
 To: OpenStack Dev openstack-dev@lists.openstack.org

 Subject: Re: [openstack-dev] Unified Guest Agent proposal

  In terms of features:
 * run shell commands
 * install files (with selinux properties as well)
 * create users and groups (with selinux properties as well)
 * install packages via yum, apt-get, rpm, pypi
 * start and enable system services for systemd or sysvinit
 * Install and unpack source tarballs
 * run scripts
 * Allow grouping, selection, and ordering of all of the above operations

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-07 Thread Clint Byrum
Excerpts from Joshua Harlow's message of 2013-12-06 23:53:47 -0800:
 Agreed,
 
 Chatting earlier today on #cloud-init about all of this I think scott 
 convinced me that maybe we (the joint we in the community at large) should 
 think about asking ourselves what do we really want a guest agent for/to do?
 
 If it's for software installation or user management then aren't puppet, 
 chef, juju (lots of others) good enough?
 

Right those are system management agents.

 If it's for tracking what a vm is doing, aren't there many existing tools for 
 this already (sounds like monitoring to me).


Right general purpose system monitoring is a solved problem.

 Is there a good list of what people really want out of a guest agent 
 (something unique that only a guest agent can do/is best at). If there is one 
 and it was already posted, my fault (I am on my iPhone which is not best for 
 emails...)


So what is needed is domain specific command execution and segregation
of capabilities.

General purpose config tools like chef or puppet aren't really meant for
doing MySQL backups or adding MySQL users.

One might say that this is more like what fabric or mcollective do. That
is definitely closer to what is desired. However there is still a
different desire that may not fit well with those tools.

Those tools are meant to give administrators rights to things on the
box. But what Trove wants is to give the trove agent the ability to
add a given MySQL user, but not the ability to, for instance, read the
records and pass them back to the trove service.

Likewise, Hadoop needs to run hadoop jobs, but not have full SSH to
the machine. While the _nova_ admin with root on compute nodes may
have the ability to just peek in on VMs, there is value in keeping the
Trove/Savanna/Heat admins segregated from that.

So basically there is a need for structure that general purpose tools
may not have. I admit though, it seems like that would be something
other tools outside of OpenStack would want as well.
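
As a concrete illustration of that segregation (purely a sketch -- the driver,
the 'trove_agent' DB account and its privileges are assumptions, not anything
Trove or Savanna actually ship):

    # The agent's MySQL account would be granted only user-administration rights
    # (CREATE USER, GRANT OPTION), never SELECT on application data, so it can
    # add users but cannot read records back out.
    import subprocess
    import pymysql  # any DB-API compatible driver would do

    def create_db_user(username, password, host='%'):
        # credentials for the restricted account come from the agent's own config
        conn = pymysql.connect(user='trove_agent', password='...',
                               unix_socket='/var/run/mysqld/mysqld.sock')
        try:
            with conn.cursor() as cur:
                # client-side interpolation turns these into quoted literals
                cur.execute("CREATE USER %s@%s IDENTIFIED BY %s",
                            (username, host, password))
        finally:
            conn.close()

    def run_hadoop_job(jar_path, main_class, *args):
        # a hadoop client on the guest, but no interactive shell exposed
        return subprocess.call(['hadoop', 'jar', jar_path, main_class] + list(args))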

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-07 Thread Robert Collins
On 7 December 2013 21:08, Clint Byrum cl...@fewbar.com wrote:

 So what is needed is domain specific command execution and segregation
 of capabilities.

Sounds rather like mcollective.

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-07 Thread Nicolas Barcet
On Sat, Dec 7, 2013 at 9:08 AM, Clint Byrum cl...@fewbar.com wrote:

 So what is needed is domain specific command execution and segregation
 of capabilities.


To further this, I know that a lot of security minded people consider these
types of agents sorts of backdoors. Having one generic backdoor that can
do everything is something that could be less acceptable, as you would not
have the choice to pinpoint what you'd like to allow it to do, or else the
constraints in terms of fine grained access control become huge.   I did
not realize this until I too spoke with Scott about this.  Cloud-init, or
any such generic tool, should only enable deployment of a domain specific tool,
based on the specific needs of a given use case, not become an agent
(backdoor) itself.

This said, I imagine we could get some benefits out of a generic
framework/library that could be used to create such agents in a manner where
base authentication and access control is done properly.  This would simplify
the security analysis and impact of agents developed using that
framework, but the framework itself should never become a generic binary
that is deployed everywhere by default and allows way too much in itself.
 Binary instances of agents written using the framework would be what could
be eventually deployed via cloud-init on a case by case basis.

Wdyt?

Nick


-- 
Nicolas Barcet nico...@barcet.com
a.k.a. nijaba, nick
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-07 Thread Clint Byrum
Excerpts from Robert Collins's message of 2013-12-07 00:17:19 -0800:
 On 7 December 2013 21:08, Clint Byrum cl...@fewbar.com wrote:
 
  So what is needed is domain specific command execution and segregation
  of capabilities.
 
 Sounds rather like mcollective.
 

It does actually. If it weren't explicitly tied to Ruby for its agent
plugins I'd say it is a good drop-in candidate. There is actually an
attempt to replace the ruby agent with C++ here:

https://github.com/jedi4ever/mcollective-cpp-agents

But it is 2+ years old so it is not clear at all if it was successful.

Anyway, even if we can't use mcollective, I think we can copy its model,
which is basically to have an AMQP (they use STOMP, which is simpler than AMQP)
broker sit between clients (Trove/Savanna/Heat) and agents. Then on the
agents, you just have named plugins which can take inputs and produce
outputs.

So you'd put a 'mysql_db_crud' plugin on Trove managed instances, and a
'hadoop_run_job' plugin on Savanna instances. But the agent itself is
basically the same everywhere.
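
A rough sketch of that "same agent everywhere, different plugins per image"
shape (the plugin names follow the examples above; the loading scheme and
package path are made up):

    # Generic agent core: load only the plugins baked into this particular image.
    import importlib

    def load_plugins(names, package='guest_agent.plugins'):
        plugins = {}
        for name in names:
            plugins[name] = importlib.import_module('%s.%s' % (package, name))
        return plugins

    # A Trove image would ship only ['mysql_db_crud'];
    # a Savanna image only ['hadoop_run_job']. The agent binary is identical.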

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-07 Thread Clint Byrum
Excerpts from Nicolas Barcet's message of 2013-12-07 01:33:01 -0800:
 On Sat, Dec 7, 2013 at 9:08 AM, Clint Byrum cl...@fewbar.com wrote:
 
  So what is needed is domain specific command execution and segregation
  of capabilities.
 
 
 To further this, I know that a lot of security minded people consider this
 types of agent sorts of backdoors. Having one generic backdoor that can
 do everything is something that could be less acceptable as you would not
 have the choice to pinpoint what you'd like to allow it to do, or then the
 constraints in terms of fine grained access control becomes huge.   I did
 not realize this until I too spoke with Scott about this.  Cloud-init, or
 any such generic tool, should only enable deployment domain specific tool,
 based on the specific needs of given use case, not become an agent
 (backdoor) itself.
 

Right, we already have a backdoor agent on most OS's, it is called SSH
and we are used to being _very_ careful about granting SSH access.

 This said, I imagine we could get some benefits out of a generic
 framework/library that could be used create such agents in a manner where
 base authentication and access control is done properly.  This would allow
 to simplify security analysis and impacts of agents developped using that
 framework, but the framework itself should never become a generic binary
 that is deploy everywhere by default and allow way too much in itself.
  Binary instances of agents written using the framework would be what could
 be eventually deployed via cloud-init on a case by case basis.

I think the mcollective model (see previous message about it) has
undergone security review and is one to copy. It is mostly what you say.
The agent is only capable of doing what its plugins can do, and it only
needs to call out to a single broker, so poking holes for the agents to
get out is fairly straight forward.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-07 Thread Monty Taylor


On 12/07/2013 06:48 PM, Clint Byrum wrote:
 Excerpts from Nicolas Barcet's message of 2013-12-07 01:33:01 -0800:
 On Sat, Dec 7, 2013 at 9:08 AM, Clint Byrum cl...@fewbar.com wrote:

 So what is needed is domain specific command execution and segregation
 of capabilities.


 To further this, I know that a lot of security minded people consider this
 types of agent sorts of backdoors. Having one generic backdoor that can
 do everything is something that could be less acceptable as you would not
 have the choice to pinpoint what you'd like to allow it to do, or then the
 constraints in terms of fine grained access control becomes huge.   I did
 not realize this until I too spoke with Scott about this.  Cloud-init, or
 any such generic tool, should only enable deployment domain specific tool,
 based on the specific needs of given use case, not become an agent
 (backdoor) itself.

 
 Right, we already have a backdoor agent on most OS's, it is called SSH
 and we are used to being _very_ careful about granting SSH access.
 
 This said, I imagine we could get some benefits out of a generic
 framework/library that could be used create such agents in a manner where
 base authentication and access control is done properly.  This would allow
 to simplify security analysis and impacts of agents developped using that
 framework, but the framework itself should never become a generic binary
 that is deploy everywhere by default and allow way too much in itself.
  Binary instances of agents written using the framework would be what could
 be eventually deployed via cloud-init on a case by case basis.
 
 I think the mcollective model (see previous message about it) has
 undergone security review and is one to copy. It is mostly what you say.
 The agent is only capable of doing what its plugins can do, and it only
 needs to call out to a single broker, so poking holes for the agents to
 get out is fairly straight forward.

Sake of argument- salt's minion is very similar, and also has a plugin
and acl model - and at least for us doesn't have the ruby issue.

Of course, for _not_ us, it has the python issue. That said- it's
designed to respond to zeromq messages, so writing a salt minion and
plugins in c++ might not be hard to accomplish.

short/medium term - why don't we just actually make use of salt minion
for guest agents? It _is_ python based, which means sending it messages
from trove or savana shouldn't be hard to integrate. It would be
_fascinating_ to see if we could convince them to migrate from direct
zeromq to using it through oslo.messaging. They're also pretty friendly.
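
(For the sake of discussion, the agent side on oslo.messaging might look roughly
like the sketch below -- the topic, server name and method are invented, and the
import spelling depends on the oslo.messaging release:)

    from oslo.config import cfg
    from oslo import messaging

    class AgentEndpoint(object):
        # only a handful of whitelisted, domain-specific methods live here
        def create_db_user(self, ctxt, username):
            return {'created': username}

    transport = messaging.get_transport(cfg.CONF)
    target = messaging.Target(topic='guest_agent', server='instance-0000abcd')
    server = messaging.get_rpc_server(transport, target, [AgentEndpoint()],
                                      executor='blocking')
    server.start()
    server.wait()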

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-07 Thread Joshua Harlow
Sounds like a pretty neat idea.

I like it!

Another idea: instead of pushing for one agent to rule them all, why 
don't we come up with a desired reference spec (maybe including a reference 
implementation?) that salt, chef, mcollective, (others...) can base theirs 
off of. In a way this creates a standard agent that can also be compatible with 
other agents (as long as said agents implement the same spec). Said spec could 
be based off a combination of the salt one and the rackspace one (...), but 
instead of pushing a single agent, openstack would push a spec instead.
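
(To make the idea slightly more tangible, such a reference spec could boil down
to a very small required interface -- everything below is invented purely to
illustrate the shape, not an actual proposal:)

    class GuestAgentPlugin(object):
        """What a spec-compliant agent might require every plugin to provide."""

        name = None  # e.g. 'mysql_db_crud'

        def commands(self):
            """Return the list of command names this plugin exposes."""
            raise NotImplementedError

        def run(self, command, params):
            """Execute one whitelisted command; return a JSON-serializable result."""
            raise NotImplementedError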

Sent from my really tiny device...

 On Dec 7, 2013, at 11:17 AM, Monty Taylor mord...@inaugust.com wrote:
 
 
 
 On 12/07/2013 06:48 PM, Clint Byrum wrote:
 Excerpts from Nicolas Barcet's message of 2013-12-07 01:33:01 -0800:
 On Sat, Dec 7, 2013 at 9:08 AM, Clint Byrum cl...@fewbar.com wrote:
 
 So what is needed is domain specific command execution and segregation
 of capabilities.
 
 To further this, I know that a lot of security minded people consider this
 types of agent sorts of backdoors. Having one generic backdoor that can
 do everything is something that could be less acceptable as you would not
 have the choice to pinpoint what you'd like to allow it to do, or then the
 constraints in terms of fine grained access control becomes huge.   I did
 not realize this until I too spoke with Scott about this.  Cloud-init, or
 any such generic tool, should only enable deployment domain specific tool,
 based on the specific needs of given use case, not become an agent
 (backdoor) itself.
 
 Right, we already have a backdoor agent on most OS's, it is called SSH
 and we are used to being _very_ careful about granting SSH access.
 
 This said, I imagine we could get some benefits out of a generic
 framework/library that could be used create such agents in a manner where
 base authentication and access control is done properly.  This would allow
 to simplify security analysis and impacts of agents developped using that
 framework, but the framework itself should never become a generic binary
 that is deploy everywhere by default and allow way too much in itself.
 Binary instances of agents written using the framework would be what could
 be eventually deployed via cloud-init on a case by case basis.
 
 I think the mcollective model (see previous message about it) has
 undergone security review and is one to copy. It is mostly what you say.
 The agent is only capable of doing what its plugins can do, and it only
 needs to call out to a single broker, so poking holes for the agents to
 get out is fairly straight forward.
 
 Sake of argument- salt's minion is very similar, and also has a plugin
 and acl model - and at least for us doesn't have the ruby issue.
 
 Of course, for _not_ us, it has the python issue. That said- it's
 designed to respond to zeromq messages, so writing a salt minion and
 plugins in c++ might not be hard to accomplish.
 
 short/medium term - why don't we just actually make use of salt minion
 for guest agents? It _is_ python based, which means sending it messages
 from trove or savana shouldn't be hard to integrate. It would be
 _fascinating_ to see if we could convince them to migrate from direct
 zeromq to using it through oslo.messaging. They're also pretty friendly.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-07 Thread Monty Taylor


On 12/07/2013 11:09 PM, Joshua Harlow wrote:
 Sounds like a pretty neat idea.
 
 I like it!
 
 Another idea: instead of saying pushing for one agent to rule them
 all why don't we come up with a desired reference spec (maybe
 including a reference implementation?) that the salt, chef,
 mcollective, (other...) can base theres off of. In a way this creates
 a standard agent that can also be compatible with other agents (as
 long as said agents implement the same spec). Said spec could be
 based off a combination of the salt one and the rackspace one (...)
 but instead of pushing a single agent, openstack would push a
 spec instead.

I think that's overreaching and unlikely to be helpful longterm
(although, in a perfect world...)

Let me be clear about what I'm suggesting - on a topic such as this, I
think it's very easy to quickly get confusing.

The problem isn't that the world needs a unified guest agent - we'll
NEVER get that to happen - it's that we need one for our services to use
to do guest agent things. It's not important that our guest agent is the
same as the guest agent systems that might be running at a given
location. For instance, if someone uses puppet and mcollective at a site
to deploy nova and trove, it's not important that the guest agent that
trove uses to perform tasks in guests it spawns be the same as the
orchestration system that the admins of that cloud use to perform config
management tasks on their servers. So - it's not important that their
personal deployment preferences be honored.

I suggested salt because we could very easily make trove and savana into
salt masters (if we wanted to) just by having them import the salt library
and run an api call. When they spin up nodes using heat, we could easily
have that do the cert exchange - and the admins of the site need not
know _anything_ about salt, puppet or chef - only about trove or savana.
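
(Roughly the shape that "import the salt library and run an api call" takes --
the target glob and function below are made up, and the calling service would
need access to the master keys/config:)

    # From inside trove/savana, acting as a salt master:
    import salt.client

    local = salt.client.LocalClient()
    # run a predefined execution-module function on matching minions
    result = local.cmd('savanna-node-*', 'test.ping')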


 Sent from my really tiny device...
 
 On Dec 7, 2013, at 11:17 AM, Monty Taylor mord...@inaugust.com
 wrote:
 
 
 
 On 12/07/2013 06:48 PM, Clint Byrum wrote: Excerpts from Nicolas
 Barcet's message of 2013-12-07 01:33:01 -0800:
 On Sat, Dec 7, 2013 at 9:08 AM, Clint Byrum
 cl...@fewbar.com wrote:
 
 So what is needed is domain specific command execution and
 segregation of capabilities.
 
 To further this, I know that a lot of security minded people
 consider this types of agent sorts of backdoors. Having one
 generic backdoor that can do everything is something that
 could be less acceptable as you would not have the choice to
 pinpoint what you'd like to allow it to do, or then the 
 constraints in terms of fine grained access control becomes
 huge.   I did not realize this until I too spoke with Scott
 about this.  Cloud-init, or any such generic tool, should only
 enable deployment domain specific tool, based on the specific
 needs of given use case, not become an agent (backdoor)
 itself.
 
 Right, we already have a backdoor agent on most OS's, it is
 called SSH and we are used to being _very_ careful about granting
 SSH access.
 
 This said, I imagine we could get some benefits out of a
 generic framework/library that could be used create such agents
 in a manner where base authentication and access control is
 done properly.  This would allow to simplify security analysis
 and impacts of agents developped using that framework, but the
 framework itself should never become a generic binary that is
 deploy everywhere by default and allow way too much in itself. 
 Binary instances of agents written using the framework would be
 what could be eventually deployed via cloud-init on a case by
 case basis.
 
 I think the mcollective model (see previous message about it)
 has undergone security review and is one to copy. It is mostly
 what you say. The agent is only capable of doing what its plugins
 can do, and it only needs to call out to a single broker, so
 poking holes for the agents to get out is fairly straight
 forward.
 
 Sake of argument- salt's minion is very similar, and also has a
 plugin and acl model - and at least for us doesn't have the ruby
 issue.
 
 Of course, for _not_ us, it has the python issue. That said- it's 
 designed to respond to zeromq messages, so writing a salt minion
 and plugins in c++ might not be hard to accomplish.
 
 short/medium term - why don't we just actually make use of salt
 minion for guest agents? It _is_ python based, which means sending
 it messages from trove or savana shouldn't be hard to integrate. It
 would be _fascinating_ to see if we could convince them to migrate
 from direct zeromq to using it through oslo.messaging. They're also
 pretty friendly.
 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___ OpenStack-dev mailing
 list OpenStack-dev@lists.openstack.org 
 

Re: [openstack-dev] Unified Guest Agent proposal

2013-12-07 Thread Robert Collins
On 8 December 2013 17:23, Monty Taylor mord...@inaugust.com wrote:


 I suggested salt because we could very easily make trove and savana into
 salt masters (if we wanted to) just by having them import salt library
 and run an api call. When they spin up nodes using heat, we could easily
 have that to the cert exchange - and the admins of the site need not
 know _anything_ about salt, puppet or chef - only about trove or savana.

Are salt masters multi-master / HA safe?

E.g. if I've deployed 5 savanna API servers to handle load, and they
all do this 'just import', does that work?

If not, and we have to have one special one, what happens when it
fails / is redeployed?

Can salt minions affect each other? Could one pretend to be a master,
or snoop requests/responses to another minion?

Is salt limited: is it possible to assert that we *cannot* run
arbitrary code over salt?

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Unified Guest Agent proposal

2013-12-06 Thread Dmitry Mescheryakov
Hello all,

We would like to push further the discussion on unified guest agent. You
may find the details of our proposal at [1].

Also let me clarify why we started this conversation. Savanna currently
utilizes SSH to install/configure Hadoop on VMs. We were happy with that
approach until recently, when we realized that in many OpenStack deployments VMs
are not accessible from the controller. That brought us to the idea of using a guest
agent for VM configuration instead. That approach is already used by Trove,
Murano and Heat, and we can do the same.

Uniting the efforts on a single guest agent brings a couple of advantages:
1. Code reuse across several projects.
2. Simplified deployment of OpenStack. A guest agent requires additional
facilities for transport, like a message queue or something similar. Sharing
the agent means projects can share transport/config and hence ease the life of
deployers.

We see it as a library and we think that Oslo is a good place for it.

Naturally, since this is going to be a _unified_ agent we seek input from
all interested parties.

[1] https://wiki.openstack.org/wiki/UnifiedGuestAgent

Thanks,

Dmitry
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-06 Thread Sandy Walsh


On 12/06/2013 03:45 PM, Dmitry Mescheryakov wrote:
 Hello all,
 
 We would like to push further the discussion on unified guest agent. You
 may find the details of our proposal at [1].
 
 Also let me clarify why we started this conversation. Savanna currently
 utilizes SSH to install/configure Hadoop on VMs. We were happy with that
 approach until recently we realized that in many OpenStack deployments
 VMs are not accessible from controller. That brought us to idea to use
 guest agent for VM configuration instead. That approach is already used
 by Trove, Murano and Heat and we can do the same.
 
 Uniting the efforts on a single guest agent brings a couple advantages:
 1. Code reuse across several projects.
 2. Simplified deployment of OpenStack. Guest agent requires additional
 facilities for transport like message queue or something similar.
 Sharing agent means projects can share transport/config and hence ease
 life of deployers.
 
 We see it is a library and we think that Oslo is a good place for it.
 
 Naturally, since this is going to be a _unified_ agent we seek input
 from all interested parties.

It might be worthwhile to consider building from the Rackspace guest
agents for Linux [2] and Windows [3]. Perhaps get them moved over to
stackforge and scrubbed?

These are geared towards Xen, but that would be a good first step in
making the HV-Guest pipe configurable.

[2] https://github.com/rackerlabs/openstack-guest-agents-unix
[3] https://github.com/rackerlabs/openstack-guest-agents-windows-xenserver

-S


 [1] https://wiki.openstack.org/wiki/UnifiedGuestAgent
 
 Thanks,
 
 Dmitry
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-06 Thread Sergey Lukjanov
In addition, here are several related links:

etherpad with some collected requirements:
https://etherpad.openstack.org/p/UnifiedAgents
initial thread about unified agents:
http://lists.openstack.org/pipermail/openstack-dev/2013-November/thread.html#18276

Thanks.


On Fri, Dec 6, 2013 at 11:45 PM, Dmitry Mescheryakov 
dmescherya...@mirantis.com wrote:

 Hello all,

 We would like to push further the discussion on unified guest agent. You
 may find the details of our proposal at [1].

 Also let me clarify why we started this conversation. Savanna currently
 utilizes SSH to install/configure Hadoop on VMs. We were happy with that
 approach until recently we realized that in many OpenStack deployments VMs
 are not accessible from controller. That brought us to idea to use guest
 agent for VM configuration instead. That approach is already used by Trove,
 Murano and Heat and we can do the same.

 Uniting the efforts on a single guest agent brings a couple advantages:
 1. Code reuse across several projects.
 2. Simplified deployment of OpenStack. Guest agent requires additional
 facilities for transport like message queue or something similar. Sharing
 agent means projects can share transport/config and hence ease life of
 deployers.

 We see it is a library and we think that Oslo is a good place for it.

 Naturally, since this is going to be a _unified_ agent we seek input from
 all interested parties.

 [1] https://wiki.openstack.org/wiki/UnifiedGuestAgent

 Thanks,

 Dmitry

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-06 Thread Joshua Harlow
Another idea that I'll put up for consideration (since I work with the
cloud-init codebase also).

Cloud-init[1], which currently does lots of little useful initialization
types of activities (similar to the racker agents' activities), has been
going through some of the same questions[2] as to whether it should be an agent
(or respond to some type of system signal on certain activities, like new
network metadata becoming available). So this could be another way to go.

Including (ccing) scott who probably has more ideas around this too :-)

[1] https://launchpad.net/cloud-init
[2] https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1153626

On 12/6/13 12:12 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:



On 12/06/2013 03:45 PM, Dmitry Mescheryakov wrote:
 Hello all,
 
 We would like to push further the discussion on unified guest agent. You
 may find the details of our proposal at [1].
 
 Also let me clarify why we started this conversation. Savanna currently
 utilizes SSH to install/configure Hadoop on VMs. We were happy with that
 approach until recently we realized that in many OpenStack deployments
 VMs are not accessible from controller. That brought us to idea to use
 guest agent for VM configuration instead. That approach is already used
 by Trove, Murano and Heat and we can do the same.
 
 Uniting the efforts on a single guest agent brings a couple advantages:
 1. Code reuse across several projects.
 2. Simplified deployment of OpenStack. Guest agent requires additional
 facilities for transport like message queue or something similar.
 Sharing agent means projects can share transport/config and hence ease
 life of deployers.
 
 We see it is a library and we think that Oslo is a good place for it.
 
 Naturally, since this is going to be a _unified_ agent we seek input
 from all interested parties.

It might be worth while to consider building from the Rackspace guest
agents for linux [2] and windows [3]. Perhaps get them moved over to
stackforge and scrubbed?

These are geared towards Xen, but that would be a good first step in
making the HV-Guest pipe configurable.

[2] https://github.com/rackerlabs/openstack-guest-agents-unix
[3] https://github.com/rackerlabs/openstack-guest-agents-windows-xenserver

-S


 [1] https://wiki.openstack.org/wiki/UnifiedGuestAgent
 
 Thanks,
 
 Dmitry
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-06 Thread Sergey Lukjanov
Using cloud-init is an interesting idea, but it looks like such an agent
will be unable to provide feedback, such as the results of running commands.
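
That feedback gap is what a two-way RPC transport would cover: a call() blocks
until the agent's endpoint method returns, so command output comes back to the
caller. A minimal controller-side sketch, assuming an agent like the earlier
oslo.messaging sketch in this thread (topic, server name and method are
illustrative only):

# Controller-side sketch only: how a service could get command results back
# from a guest agent over RPC. All names here are made-up examples.
from oslo.config import cfg
from oslo import messaging


def run_on_guest(instance_id, command):
    transport = messaging.get_transport(
        cfg.CONF, url='rabbit://guest:guest@10.0.0.1:5672/')
    target = messaging.Target(topic='unified_agent', server=instance_id)
    client = messaging.RPCClient(transport, target)
    # call() waits for the endpoint's return value, giving us the command
    # output; cast() would be fire-and-forget instead.
    return client.call({}, 'run_command', command=command)


# e.g. print(run_on_guest('instance-0000abcd', 'jps'))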


On Sat, Dec 7, 2013 at 12:27 AM, Joshua Harlow harlo...@yahoo-inc.comwrote:

 Another idea that I'll put up for consideration (since I work with the
 cloud-init codebase also).

 Cloud-init[1], which currently does lots of little useful initialization-type
 activities (similar to the Rackspace agents' activities), has been going
 through some of the same questions[2] as to whether it should be an agent
 (or respond to some type of system signal on certain activities, like new
 network metadata becoming available). So this could be another way to go.

 Including (ccing) Scott, who probably has more ideas around this too :-)

 [1] https://launchpad.net/cloud-init
 [2] https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1153626

 On 12/6/13 12:12 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:

 
 
 On 12/06/2013 03:45 PM, Dmitry Mescheryakov wrote:
  Hello all,
 
  We would like to push further the discussion on unified guest agent. You
  may find the details of our proposal at [1].
 
  Also let me clarify why we started this conversation. Savanna currently
  utilizes SSH to install/configure Hadoop on VMs. We were happy with that
  approach until recently, when we realized that in many OpenStack
  deployments VMs are not accessible from the controller. That brought us to
  the idea of using a guest agent for VM configuration instead. That approach
  is already used by Trove, Murano and Heat, and we can do the same.
 
  Uniting the efforts on a single guest agent brings a couple of advantages:
  1. Code reuse across several projects.
  2. Simplified deployment of OpenStack. A guest agent requires additional
  facilities for transport, like a message queue or something similar.
  Sharing an agent means projects can share transport/config and hence ease
  the life of deployers.
 
  We see it as a library and we think that Oslo is a good place for it.
 
  Naturally, since this is going to be a _unified_ agent we seek input
  from all interested parties.
 
 It might be worthwhile to consider building from the Rackspace guest
 agents for Linux [2] and Windows [3]. Perhaps get them moved over to
 stackforge and scrubbed?
 
 These are geared towards Xen, but that would be a good first step in
 making the HV-Guest pipe configurable.
 
 [2] https://github.com/rackerlabs/openstack-guest-agents-unix
 [3]
 https://github.com/rackerlabs/openstack-guest-agents-windows-xenserver
 
 -S
 
 
  [1] https://wiki.openstack.org/wiki/UnifiedGuestAgent
 
  Thanks,
 
  Dmitry
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-06 Thread Fox, Kevin M
Another option is this:
https://github.com/cloudbase/cloudbase-init

It is Python-based on Windows, rather than .NET.

Thanks,
Kevin

From: Sandy Walsh [sandy.wa...@rackspace.com]
Sent: Friday, December 06, 2013 12:12 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Unified Guest Agent proposal

On 12/06/2013 03:45 PM, Dmitry Mescheryakov wrote:
 Hello all,

 We would like to push further the discussion on unified guest agent. You
 may find the details of our proposal at [1].

 Also let me clarify why we started this conversation. Savanna currently
 utilizes SSH to install/configure Hadoop on VMs. We were happy with that
 approach until recently, when we realized that in many OpenStack
 deployments VMs are not accessible from the controller. That brought us to
 the idea of using a guest agent for VM configuration instead. That approach
 is already used by Trove, Murano and Heat, and we can do the same.

 Uniting the efforts on a single guest agent brings a couple of advantages:
 1. Code reuse across several projects.
 2. Simplified deployment of OpenStack. A guest agent requires additional
 facilities for transport, like a message queue or something similar.
 Sharing an agent means projects can share transport/config and hence ease
 the life of deployers.

 We see it as a library and we think that Oslo is a good place for it.

 Naturally, since this is going to be a _unified_ agent we seek input
 from all interested parties.

It might be worthwhile to consider building from the Rackspace guest
agents for Linux [2] and Windows [3]. Perhaps get them moved over to
stackforge and scrubbed?

These are geared towards Xen, but that would be a good first step in
making the HV-Guest pipe configurable.

[2] https://github.com/rackerlabs/openstack-guest-agents-unix
[3] https://github.com/rackerlabs/openstack-guest-agents-windows-xenserver

-S


 [1] https://wiki.openstack.org/wiki/UnifiedGuestAgent

 Thanks,

 Dmitry


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-06 Thread Clint Byrum
Excerpts from Joshua Harlow's message of 2013-12-06 12:27:10 -0800:
 Another idea that I'll put up for consideration (since I work with the
 cloud-init codebase also).
 
 Cloud-init[1], which currently does lots of little useful initialization-type
 activities (similar to the Rackspace agents' activities), has been going
 through some of the same questions[2] as to whether it should be an agent
 (or respond to some type of system signal on certain activities, like new
 network metadata becoming available). So this could be another way to go.
 
 Including (ccing) Scott, who probably has more ideas around this too :-)
 
 [1] https://launchpad.net/cloud-init
 [2] https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1153626
 

I have a theory that cloud-init is so popular and useful precisely
because it does _not_ expand into ongoing-management territory. It is
really amazing at setting the table for instances when you choose
not to do image-based software deployment. Even if you do image-based
deployment, it is really great for abstracting away all the cloud details
and customizing an instance.

The problem with conflating those two tasks is that early boot
configuration carries quite a different set of constraints and assumptions
when compared to what an agent will be tasked with.

I would be interested in seeing the cloud-config piece pulled out into
its own library. That syntax is pretty popular, and in fact was mostly
mimicked by the CloudFormation tool 'cfn-init'. I suspect that they did
not just do this work in cloud-init because it is GPLv3, or because of the
Canonical CLA, two things that scare off IP-hungry companies like Amazon.
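
For anyone who hasn't seen it, the cloud-config syntax in question is a small
declarative document handed to the instance as user data. A made-up fragment,
shown here as the kind of Python user-data string a provisioning tool might
build (the package, path and command are examples only):

# The "packages", "write_files" and "runcmd" directives are standard
# cloud-config keys; everything else below is an invented example.
EXAMPLE_USER_DATA = """\
#cloud-config
packages:
  - ntp
write_files:
  - path: /etc/example-agent/agent.conf
    content: |
      transport_url = rabbit://guest:guest@10.0.0.1:5672/
runcmd:
  - [sh, -c, "echo example node bootstrapped"]
"""

print(EXAMPLE_USER_DATA)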

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-06 Thread Clint Byrum
Excerpts from Sandy Walsh's message of 2013-12-06 12:12:16 -0800:
 
 On 12/06/2013 03:45 PM, Dmitry Mescheryakov wrote:
  Hello all,
  
  We would like to push further the discussion on unified guest agent. You
  may find the details of our proposal at [1].
  
  Also let me clarify why we started this conversation. Savanna currently
  utilizes SSH to install/configure Hadoop on VMs. We were happy with that
  approach until recently, when we realized that in many OpenStack
  deployments VMs are not accessible from the controller. That brought us to
  the idea of using a guest agent for VM configuration instead. That approach
  is already used by Trove, Murano and Heat, and we can do the same.
  
  Uniting the efforts on a single guest agent brings a couple of advantages:
  1. Code reuse across several projects.
  2. Simplified deployment of OpenStack. A guest agent requires additional
  facilities for transport, like a message queue or something similar.
  Sharing an agent means projects can share transport/config and hence ease
  the life of deployers.
  
  We see it as a library and we think that Oslo is a good place for it.
  
  Naturally, since this is going to be a _unified_ agent we seek input
  from all interested parties.
 
 It might be worthwhile to consider building from the Rackspace guest
 agents for Linux [2] and Windows [3]. Perhaps get them moved over to
 stackforge and scrubbed?
 
 These are geared towards Xen, but that would be a good first step in
 making the HV-Guest pipe configurable.
 
 [2] https://github.com/rackerlabs/openstack-guest-agents-unix
 [3] https://github.com/rackerlabs/openstack-guest-agents-windows-xenserver

This looks promising. The exchange plugin system would, I think, allow
for new exchanges that are not Xen-specific to be created fairly easily.
The XenStore thing seems too low-level to use in the general case.
However, the program itself seems to have the right kind of
flexible/lightweight structure.
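
To make the "exchange" idea concrete, here is a generic sketch (not the
actual plugin API of the Rackspace agents): each exchange wraps one transport
(XenStore, AMQP, an HTTP poll, ...) behind the same receive/respond pair, so
the command-dispatch core stays transport-agnostic:

# Generic illustration only; all names are invented for this sketch.
import json


class CannedExchange(object):
    """Stand-in exchange that feeds pre-baked requests, for illustration."""

    def __init__(self, requests):
        self._requests = list(requests)
        self.responses = []

    def receive(self):
        # A real exchange would block on its transport here.
        return self._requests.pop(0) if self._requests else None

    def respond(self, message):
        self.responses.append(message)


def serve(exchange, handlers):
    """Pull commands off an exchange, run them, and send results back."""
    while True:
        request = exchange.receive()
        if request is None:
            break
        handler = handlers.get(request.get('name'))
        if handler is None:
            exchange.respond({'error': 'unknown command %s' % request.get('name')})
        else:
            exchange.respond({'result': handler(request.get('args', {}))})


if __name__ == '__main__':
    xchg = CannedExchange([{'name': 'version', 'args': {}}])
    serve(xchg, {'version': lambda args: '0.0.1'})
    print(json.dumps(xchg.responses))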

I think we'll end up having to add some things to Neutron to allow
poking holes from VPCs into the cloud infrastructure safely. Once that
is done we can use any communication protocol that we can pass through
that.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-06 Thread Joshua Harlow
Agreed,

Chatting earlier today on #cloud-init about all of this, I think Scott convinced
me that maybe we (the joint we in the community at large) should think about
asking ourselves what we really want a guest agent for, and what we want it to do.

If it's for software installation or user management, then aren't Puppet, Chef,
Juju (and lots of others) good enough?

If it's for tracking what a VM is doing, aren't there many existing tools for
this already (sounds like monitoring to me)?

Is there a good list of what people really want out of a guest agent (something
unique that only a guest agent can do, or is best at)? If there is one and it was
already posted, my fault (I am on my iPhone, which is not the best for emails...).

Sent from my really tiny device...

 On Dec 6, 2013, at 11:37 PM, Clint Byrum cl...@fewbar.com wrote:
 
 Excerpts from Joshua Harlow's message of 2013-12-06 12:27:10 -0800:
 Another idea that I'll put up for consideration (since I work with the
 cloud-init codebase also).
 
 Cloud-init[1], which currently does lots of little useful initialization-type
 activities (similar to the Rackspace agents' activities), has been going
 through some of the same questions[2] as to whether it should be an agent
 (or respond to some type of system signal on certain activities, like new
 network metadata becoming available). So this could be another way to go.
 
 Including (ccing) Scott, who probably has more ideas around this too :-)
 
 [1] https://launchpad.net/cloud-init
 [2] https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1153626
 
 I have a theory that cloud-init is so popular and useful precisely
 because it does _not_ expand into ongoing-management territory. It is
 really amazing at setting the table for instances when you choose
 not to do image-based software deployment. Even if you do image-based
 deployment, it is really great for abstracting away all the cloud details
 and customizing an instance.
 
 The problem with conflating those two tasks is that early boot
 configuration carries quite a different set of constraints and assumptions
 when compared to what an agent will be tasked with.
 
 I would be interested in seeing the cloud-config piece pulled out into
 its own library. That syntax is pretty popular, and in fact was mostly
 mimicked by the CloudFormation tool 'cfn-init'. I suspect that they did
 not just do this work in cloud-init because it is GPLv3, or because of the
 Canonical CLA, two things that scare off IP-hungry companies like Amazon.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev