> On Mar 21, 2016, at 9:43 PM, Adam Young <ayo...@redhat.com> wrote:
> 
> I had a good discussion with the Nova folks in IRC today.
> 
> My goal was to understand what could talk to what, and the short answer, 
> according to dansmith, is:
> 
> " any node in nova land has to be able to talk to the queue for any other one 
> for the most part: compute->compute, compute->conductor, conductor->compute, 
> api->everything. There might be a few exceptions, but not worth it, IMHO, in 
> the current architecture."
> 
> Longer conversation is here:
> http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2016-03-21.log.html#t2016-03-21T17:54:27
> 
> Right now, the message queue is a nightmare.  All sorts of sensitive 
> information flows over it: tokens (including admin tokens) are the most 
> obvious, but every piece of audit data, every notification, and every 
> control message travels over it as well.
> 
> Before we continue down the path of "anything can talk to anything", can we 
> please map out what needs to talk to what, and why?  Many of the use cases 
> seem to be based on something that should be kicked off by the conductor, 
> such as "migrate, resize, live-migrate" and it sounds like there are plans to 
> make that happen.
> 
> So, let's assume we can get to the point where, if node 1 needs to talk to 
> node 2, it will do so only via the conductor.  With that in place, we can put 
> access control rules in place (sketched below):

Shouldn't we be trying to remove central bottlenecks by decentralizing 
communications where we can?

Doug


> 
> 1.  Compute nodes can only read from the queue 
> compute.<name>-novacompute-<index>.localdomain
> 2.  Compute nodes can only write to response queues in the RPC vhost
> 3.  Compute nodes can only write to notification queues in the notification 
> vhost.
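
As a rough illustration only, here is a minimal sketch of what those three
rules might look like when expressed as RabbitMQ per-user permissions through
the management HTTP API (using python-requests). The endpoint, credentials,
node name, vhost names ("rpc", "notifications") and queue patterns are all
placeholders for illustration, not what oslo.messaging actually configures:

    import requests

    # Hypothetical management endpoint, admin credentials and node name.
    MGMT = "http://broker.example.com:15672/api"
    ADMIN = ("admin", "admin-password")
    NODE = "overcloud-novacompute-0.localdomain"

    # One broker user per compute node, so the writer of a message can be
    # identified and its permissions scoped individually.
    requests.put("%s/users/%s" % (MGMT, NODE), auth=ADMIN,
                 json={"password": "per-node-secret", "tags": ""})

    # Rules 1 and 2, on the RPC vhost: read only its own compute queue,
    # write only to reply queues.  (Real oslo.messaging traffic goes through
    # topic exchanges, so the write pattern would have to cover those too.)
    requests.put("%s/permissions/rpc/%s" % (MGMT, NODE), auth=ADMIN,
                 json={"configure": "^$",
                       "write": r"^reply_.*$",
                       "read": r"^compute\.overcloud-novacompute-0\.localdomain$"})

    # Rule 3, on the notification vhost: write-only access to notification queues.
    requests.put("%s/permissions/notifications/%s" % (MGMT, NODE), auth=ADMIN,
                 json={"configure": "^$",
                       "write": r"^notifications\..*$",
                       "read": "^$"})
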
> 
> I know that with AMQP, we should be able to identify the writer of a message. 
>  This means that each compute node should have its own user.  I have 
> identified how to do that for Rabbit and QPid.  I assume for 0mq it would 
> make sense to use ZAP (http://rfc.zeromq.org/spec:27) but I'd rather the 0mq 
> maintainers chime in here.
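
For the 0mq side, a minimal sketch of what per-node authentication through a
ZAP handler could look like with pyzmq's built-in ThreadAuthenticator. The
PLAIN credentials and endpoints are placeholders to keep the sketch short; a
real driver would more likely use CURVE keys and its own credential store:

    import zmq
    from zmq.auth.thread import ThreadAuthenticator

    ctx = zmq.Context()

    # ZAP handler running in a background thread; PLAIN is used here only to
    # keep the example small.
    auth = ThreadAuthenticator(ctx)
    auth.start()
    auth.configure_plain(domain="*", passwords={"compute-0": "per-node-secret"})

    # Listening side: require authentication on the socket.
    server = ctx.socket(zmq.ROUTER)
    server.plain_server = True
    server.bind("tcp://*:5555")

    # Compute-node side: present its own per-node identity.
    client = ctx.socket(zmq.DEALER)
    client.plain_username = b"compute-0"
    client.plain_password = b"per-node-secret"
    client.connect("tcp://127.0.0.1:5555")  # conductor/broker endpoint in reality

    # auth.stop() and ctx.term() on shutdown.
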
> 
> I think it is safe (and sane) to have the same user on the compute node 
> communicate with Neutron, Nova, and Ceilometer.  This will avoid a false 
> sense of security: if one is compromised, they are all going to be 
> compromised.  Plan accordingly.
> 
> Beyond that, we should have message broker users for each of the components 
> that are clients of the broker.
> 
> Applications that run on top of the cloud, and that do not have a presence on 
> the compute nodes, should have their own vhost.  I see Sahara on my TripleO 
> deploy, but I assume there are others.  Either they each get their own vhost, 
> or the apps should share one that is separate from the RPC/notification vhosts 
> we currently have.  Even Heat might fall into this category.
> 
> Note that those application users can be allowed to read from the 
> notification queues if necessary.  They just should not be using the same 
> vhost for their own traffic.
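
Continuing the same hypothetical management-API sketch from above, giving an
application like Sahara its own vhost, plus read-only access to the
notification vhost, might look roughly like this (all names are placeholders):

    import requests

    MGMT = "http://broker.example.com:15672/api"   # same hypothetical endpoint
    ADMIN = ("admin", "admin-password")

    # A vhost of its own for the application's traffic, with full permissions
    # for the application's broker user inside that vhost only.
    requests.put("%s/vhosts/sahara" % MGMT, auth=ADMIN, json={})
    requests.put("%s/users/sahara" % MGMT, auth=ADMIN,
                 json={"password": "app-secret", "tags": ""})
    requests.put("%s/permissions/sahara/sahara" % MGMT, auth=ADMIN,
                 json={"configure": ".*", "write": ".*", "read": ".*"})

    # Read-only access to the notification vhost, nothing more.
    requests.put("%s/permissions/notifications/sahara" % MGMT, auth=ADMIN,
                 json={"configure": "^$", "write": "^$",
                       "read": r"^notifications\..*$"})
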
> 
> Please tell me if/where I am blindingly wrong in my analysis.
> 
> 
> 
> 
> 