Re: [Openstack-operators] Murano in Production

2016-09-26 Thread Joe Topjian
Hi Serg,

We were indeed hitting that bug, but the cert wasn't self-signed. It was
easier for us to manually patch the Ubuntu Cloud package of Murano with the
stable/mitaka fix linked in that bug report than to debug where
OpenSSL/python/requests/etc was going awry.

We might redeploy Murano strictly using virtualenvs and pip so we stay on
the latest stable patches.
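
If we go that route, it would be something like the following (a rough
sketch only; the paths and the branch pin are illustrative, not a vetted
deployment layout):

# create an isolated environment for the Murano services
virtualenv /opt/murano
/opt/murano/bin/pip install -U pip
# install straight from the stable branch so backported fixes land quickly
/opt/murano/bin/pip install "git+https://git.openstack.org/openstack/murano@stable/mitaka#egg=murano"
/opt/murano/bin/pip install python-muranoclient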

Thanks,
Joe

On Mon, Sep 26, 2016 at 11:03 PM, Serg Melikyan 
wrote:

> Hi Joe,
>
> >Also, is it safe to say that communication between the agent and engine
> >only happens, and will only happen, during app deployment?
>
> murano-agent & murano-engine keep an active connection to the RabbitMQ
> broker, but message exchange happens only during deployment of the app.
>
> >One thing we just ran into, though, was getting the agent/engine rmq
> config to work with SSL
>
> We had a related bug fixed in Newton; can you confirm that you are *not*
> hitting bug #1578421 [0]?
>
> References:
> [0] https://bugs.launchpad.net/murano/+bug/1578421
>
>
>
>
> On Mon, Sep 26, 2016 at 1:43 PM, Andrew Woodward  wrote:
> > In Fuel we deploy haproxy to all of the nodes that are part of the
> > VIP/endpoint service (this is usually part of the controller role). The
> > VIPs (internal or public) can then be active on any member of the group.
> > Corosync/Pacemaker is used to move the VIP address (as opposed to
> > keepalived). In our case both haproxy and the VIP live in a namespace, and
> > haproxy is always running on all of these nodes, bound to 0/0.
> >
> > In the case of murano-rabbit we take the same approach as we do for
> > galera: all of the members are listed in the balancer, but with the others
> > as backups. This makes them inactive until the first node is down, which
> > allows the VIP to move to any of the proxies in the cluster and continue
> > to direct traffic to the same node until that rabbit instance is also
> > unavailable.
> >
> > listen mysqld
> >   bind 192.168.0.2:3306
> >   mode  tcp
> >   option  httpchk
> >   option  tcplog
> >   option  clitcpka
> >   option  srvtcpka
> >   stick on  dst
> >   stick-table  type ip size 1
> >   timeout client  28801s
> >   timeout server  28801s
> >   server node-1 192.168.0.4:3307  check port 49000 inter 20s fastinter 2s downinter 2s rise 3 fall 3
> >   server node-3 192.168.0.6:3307 backup check port 49000 inter 20s fastinter 2s downinter 2s rise 3 fall 3
> >   server node-4 192.168.0.5:3307 backup check port 49000 inter 20s fastinter 2s downinter 2s rise 3 fall 3
> >
> > listen murano_rabbitmq
> >   bind 10.110.3.3:55572
> >   balance  roundrobin
> >   mode  tcp
> >   option  tcpka
> >   timeout client  48h
> >   timeout server  48h
> >   server node-1 192.168.0.4:55572  check inter 5000 rise 2 fall 3
> >   server node-3 192.168.0.6:55572 backup check inter 5000 rise 2 fall 3
> >   server node-4 192.168.0.5:55572 backup check inter 5000 rise 2 fall 3
> >
> >
> > On Fri, Sep 23, 2016 at 7:30 AM Mike Lowe  wrote:
> >>
> >> Would you mind sharing an example snippet from HA proxy config?  I had
> >> struggled in the past with getting this part to work.
> >>
> >>
> >> > On Sep 23, 2016, at 12:13 AM, Serg Melikyan 
> >> > wrote:
> >> >
> >> > Hi Joe,
> >> >
> >> > I can share some details on how Murano is configured as part of the
> >> > default Mirantis OpenStack configuration and try to explain why it's
> >> > done that way; I hope it helps in your case.
> >> >
> >> > As part of Mirantis OpenStack, a second RabbitMQ instance is deployed
> >> > specifically for Murano, but its configuration is different from that
> >> > of the RabbitMQ instance used by the other OpenStack components.
> >> >
> >> > Why use a separate RabbitMQ instance?
> >> > 1. To prevent access to the RabbitMQ supporting the whole cloud
> >> > infrastructure by limiting access at the networking level rather than
> >> > relying on authentication/authorization
> >> > 2. To prevent DDoS of the infrastructure RabbitMQ by limiting access
> >> > at the networking level
> >> >
> >> > Given that the second RabbitMQ instance is used only for murano-agent
> >> > <-> murano-engine communication and murano-agent runs on the VMs, we
> >> > had to make a couple of changes in the deployment of RabbitMQ (below,
> >> > RabbitMQ refers to the instance used by Murano for m-agent <->
> >> > m-engine communication):
> >> >
> >> > 1. RabbitMQ is not clustered; a separate instance runs on each
> >> > controller node
> >> > 2. RabbitMQ is exposed on the Public VIP where all OpenStack APIs are
> >> > exposed
> >> > 3. It has a different port number than the default
> >> > 4. HAProxy is used; RabbitMQ is hidden behind it, and HAProxy always
> >> > points to the RabbitMQ on the current primary controller
> >> >
> >> > Note: how does murano-agent work? Murano-engine creates a queue 

Re: [Openstack-operators] Murano in Production

2016-09-26 Thread Serg Melikyan
Hi Joe,

>Also, is it safe to say that communication between the agent and engine only
>happens, and will only happen, during app deployment?

murano-agent & murano-engine keep an active connection to the RabbitMQ
broker, but message exchange happens only during deployment of the app.

>One thing we just ran into, though, was getting the agent/engine rmq config to 
>work with SSL

We had a related bug fixed in Newton; can you confirm that you are *not*
hitting bug #1578421 [0]?

References:
[0] https://bugs.launchpad.net/murano/+bug/1578421




On Mon, Sep 26, 2016 at 1:43 PM, Andrew Woodward  wrote:
> In Fuel we deploy haproxy to all of the nodes that are part of the
> VIP/endpoint service (this is usually part of the controller role). The
> VIPs (internal or public) can then be active on any member of the group.
> Corosync/Pacemaker is used to move the VIP address (as opposed to
> keepalived). In our case both haproxy and the VIP live in a namespace, and
> haproxy is always running on all of these nodes, bound to 0/0.
>
> In the case of murano-rabbit we take the same approach as we do for
> galera: all of the members are listed in the balancer, but with the others
> as backups. This makes them inactive until the first node is down, which
> allows the VIP to move to any of the proxies in the cluster and continue
> to direct traffic to the same node until that rabbit instance is also
> unavailable.
>
> listen mysqld
>   bind 192.168.0.2:3306
>   mode  tcp
>   option  httpchk
>   option  tcplog
>   option  clitcpka
>   option  srvtcpka
>   stick on  dst
>   stick-table  type ip size 1
>   timeout client  28801s
>   timeout server  28801s
>   server node-1 192.168.0.4:3307  check port 49000 inter 20s fastinter 2s downinter 2s rise 3 fall 3
>   server node-3 192.168.0.6:3307 backup check port 49000 inter 20s fastinter 2s downinter 2s rise 3 fall 3
>   server node-4 192.168.0.5:3307 backup check port 49000 inter 20s fastinter 2s downinter 2s rise 3 fall 3
>
> listen murano_rabbitmq
>   bind 10.110.3.3:55572
>   balance  roundrobin
>   mode  tcp
>   option  tcpka
>   timeout client  48h
>   timeout server  48h
>   server node-1 192.168.0.4:55572  check inter 5000 rise 2 fall 3
>   server node-3 192.168.0.6:55572 backup check inter 5000 rise 2 fall 3
>   server node-4 192.168.0.5:55572 backup check inter 5000 rise 2 fall 3
>
>
> On Fri, Sep 23, 2016 at 7:30 AM Mike Lowe  wrote:
>>
>> Would you mind sharing an example snippet from HA proxy config?  I had
>> struggled in the past with getting this part to work.
>>
>>
>> > On Sep 23, 2016, at 12:13 AM, Serg Melikyan 
>> > wrote:
>> >
>> > Hi Joe,
>> >
>> > I can share some details on how Murano is configured as part of the
>> > default Mirantis OpenStack configuration and try to explain why it's
>> > done that way; I hope it helps in your case.
>> >
>> > As part of Mirantis OpenStack, a second RabbitMQ instance is deployed
>> > specifically for Murano, but its configuration is different from that
>> > of the RabbitMQ instance used by the other OpenStack components.
>> >
>> > Why use a separate RabbitMQ instance?
>> > 1. To prevent access to the RabbitMQ supporting the whole cloud
>> > infrastructure by limiting access at the networking level rather than
>> > relying on authentication/authorization
>> > 2. To prevent DDoS of the infrastructure RabbitMQ by limiting access
>> > at the networking level
>> >
>> > Given that the second RabbitMQ instance is used only for murano-agent
>> > <-> murano-engine communication and murano-agent runs on the VMs, we
>> > had to make a couple of changes in the deployment of RabbitMQ (below,
>> > RabbitMQ refers to the instance used by Murano for m-agent <->
>> > m-engine communication):
>> >
>> > 1. RabbitMQ is not clustered; a separate instance runs on each
>> > controller node
>> > 2. RabbitMQ is exposed on the Public VIP where all OpenStack APIs are
>> > exposed
>> > 3. It has a different port number than the default
>> > 4. HAProxy is used; RabbitMQ is hidden behind it, and HAProxy always
>> > points to the RabbitMQ on the current primary controller
>> >
>> > Note: how does murano-agent work? Murano-engine creates a queue with
>> > a unique name and puts configuration tasks in that queue; these are
>> > later picked up by murano-agent when the VM is booted, and murano-agent
>> > is configured to use the created queue through cloud-init.
>> >
>> > #1 Clustering
>> >
>> > * Per app deployment we create 1-N VMs and send 1-M configuration
>> > tasks, where in most cases N and M are less than 3.
>> > * Even if an app deployment fails due to a cluster failover, it can
>> > always be re-deployed by the user.
>> > * A controller-node failover will most probably lead to limited
>> > accessibility of the Heat, Nova & Neutron APIs, and the application
>> > deployment will fail 

Re: [Openstack-operators] Murano in Production

2016-09-23 Thread Joe Topjian
Hi Serg,

Thank you for sharing this information :)

If I'm understanding correctly, the main reason you're using a
non-clustered / corosync setup is because that's how most other components
in Mirantis OpenStack are configured? Is there anything to be aware of in
how Murano communicates over the agent/engine rmq in a clustered rmq setup?

Also, is it safe to say that communication between the agent and engine only
happens, and will only happen, during app deployment? Meaning, if the rmq
server goes down (let's even say it goes away permanently, for exaggeration),
short of some errors in the agent log, nothing else bad will come out of it?

With regard to a different port and a publicly accessible address, I agree
and we'll be deploying this same way.

One thing we just ran into, though, was getting the agent/engine rmq config
to work with SSL. For some reason the murano/openstack configuration (done
via oslo) had no problems recognizing our SSL cert, but the agent/engine
did not like it at all. The Ubuntu Cloud packages have not been updated for
a bit so we ended up patching for the "insecure" option both in engine and
agent templates (btw: very nice that the agent can be installed via
cloud-init -- I really didn't want to manage a second set of images just to
have the agent pre-installed).
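
For reference, the agent/engine broker settings live in their own [rabbitmq]
section of murano.conf (separate from the oslo.messaging options), at least
in our build; what we ended up with looks roughly like this, with host,
port, and credentials below being placeholders:

[rabbitmq]
host = public-vip.example.com
port = 55572
login = murano
password = ***
virtual_host = murano
ssl = True
# the backported option mentioned above; skips certificate verification
insecure = True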

Thank you again,
Joe

On Thu, Sep 22, 2016 at 10:13 PM, Serg Melikyan 
wrote:

> Hi Joe,
>
> I can share some details on how Murano is configured as part of the
> default Mirantis OpenStack configuration and try to explain why it's
> done that way; I hope it helps in your case.
>
> As part of Mirantis OpenStack, a second RabbitMQ instance is deployed
> specifically for Murano, but its configuration is different from that
> of the RabbitMQ instance used by the other OpenStack components.
>
> Why use a separate RabbitMQ instance?
>  1. To prevent access to the RabbitMQ supporting the whole cloud
> infrastructure by limiting access at the networking level rather than
> relying on authentication/authorization
>  2. To prevent DDoS of the infrastructure RabbitMQ by limiting access
> at the networking level
>
> Given that the second RabbitMQ instance is used only for murano-agent
> <-> murano-engine communication and murano-agent runs on the VMs, we
> had to make a couple of changes in the deployment of RabbitMQ (below,
> RabbitMQ refers to the instance used by Murano for m-agent <->
> m-engine communication):
>
> 1. RabbitMQ is not clustered; a separate instance runs on each
> controller node
> 2. RabbitMQ is exposed on the Public VIP where all OpenStack APIs are
> exposed
> 3. It has a different port number than the default
> 4. HAProxy is used; RabbitMQ is hidden behind it, and HAProxy always
> points to the RabbitMQ on the current primary controller
>
> Note: how does murano-agent work? Murano-engine creates a queue with a
> unique name and puts configuration tasks in that queue; these are later
> picked up by murano-agent when the VM is booted, and murano-agent is
> configured to use the created queue through cloud-init.
>
> #1 Clustering
>
> * Per app deployment we create 1-N VMs and send 1-M configuration
> tasks, where in most cases N and M are less than 3.
> * Even if an app deployment fails due to a cluster failover, it can
> always be re-deployed by the user.
> * A controller-node failover will most probably lead to limited
> accessibility of the Heat, Nova & Neutron APIs, and the application
> deployment will fail regardless of the configuration task not
> executing on the VM.
>
> #2 Exposure on the Public VIP
>
> One of the reasons behind choosing RabbitMQ as the transport for
> murano-agent communication was connectivity from the VM - it's much
> easier to implement connectivity *from* the VM than *to* the VM.
>
> But even when you are connecting to the broker from the VM you need
> connectivity, and the public interface where all other OpenStack APIs
> are exposed is the most natural way to provide it.
>
> #3 A port number different from the default
>
> Just to avoid confusion with the RabbitMQ used for the
> infrastructure, even given that they are on different networks.
>
> #4 HAProxy
>
> In the default Mirantis OpenStack configuration it is used mostly to
> support the non-clustered RabbitMQ setup and exposure on the Public
> VIP, but it is also helpful in more complicated setups.
>
> P.S. I hope my answers helped; let me know if I can cover something in
> more detail.
> --
> Serg Melikyan, Development Manager at Mirantis, Inc.
> http://mirantis.com | smelik...@mirantis.com
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Murano in Production

2016-09-23 Thread Matt Fischer
Other than #1, that's exactly the same design we used for Trove. Glad to see
someone else using it too, for validation. Thanks.

On Sep 22, 2016 11:39 PM, "Serg Melikyan"  wrote:

> Hi Joe,
>
> I can share some details on how Murano is configured as part of the
> default Mirantis OpenStack configuration and try to explain why it's
> done that way; I hope it helps in your case.
>
> As part of Mirantis OpenStack, a second RabbitMQ instance is deployed
> specifically for Murano, but its configuration is different from that
> of the RabbitMQ instance used by the other OpenStack components.
>
> Why use a separate RabbitMQ instance?
>  1. To prevent access to the RabbitMQ supporting the whole cloud
> infrastructure by limiting access at the networking level rather than
> relying on authentication/authorization
>  2. To prevent DDoS of the infrastructure RabbitMQ by limiting access
> at the networking level
>
> Given that the second RabbitMQ instance is used only for murano-agent
> <-> murano-engine communication and murano-agent runs on the VMs, we
> had to make a couple of changes in the deployment of RabbitMQ (below,
> RabbitMQ refers to the instance used by Murano for m-agent <->
> m-engine communication):
>
> 1. RabbitMQ is not clustered; a separate instance runs on each
> controller node
> 2. RabbitMQ is exposed on the Public VIP where all OpenStack APIs are
> exposed
> 3. It has a different port number than the default
> 4. HAProxy is used; RabbitMQ is hidden behind it, and HAProxy always
> points to the RabbitMQ on the current primary controller
>
> Note: how does murano-agent work? Murano-engine creates a queue with a
> unique name and puts configuration tasks in that queue; these are later
> picked up by murano-agent when the VM is booted, and murano-agent is
> configured to use the created queue through cloud-init.
>
> #1 Clustering
>
> * Per app deployment we create 1-N VMs and send 1-M configuration
> tasks, where in most cases N and M are less than 3.
> * Even if an app deployment fails due to a cluster failover, it can
> always be re-deployed by the user.
> * A controller-node failover will most probably lead to limited
> accessibility of the Heat, Nova & Neutron APIs, and the application
> deployment will fail regardless of the configuration task not
> executing on the VM.
>
> #2 Exposure on the Public VIP
>
> One of the reasons behind choosing RabbitMQ as the transport for
> murano-agent communication was connectivity from the VM - it's much
> easier to implement connectivity *from* the VM than *to* the VM.
>
> But even when you are connecting to the broker from the VM you need
> connectivity, and the public interface where all other OpenStack APIs
> are exposed is the most natural way to provide it.
>
> #3 A port number different from the default
>
> Just to avoid confusion with the RabbitMQ used for the
> infrastructure, even given that they are on different networks.
>
> #4 HAProxy
>
> In the default Mirantis OpenStack configuration it is used mostly to
> support the non-clustered RabbitMQ setup and exposure on the Public
> VIP, but it is also helpful in more complicated setups.
>
> P.S. I hope my answers helped; let me know if I can cover something in
> more detail.
> --
> Serg Melikyan, Development Manager at Mirantis, Inc.
> http://mirantis.com | smelik...@mirantis.com
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Murano in Production

2016-09-23 Thread Serg Melikyan
Kris,

If I understand correctly, we use pacemaker/corosync to manage our
cluster. When the primary controller is detected as failed, pacemaker
updates the HAProxy configuration to point to the new primary controller.

I don't know all the details regarding HAProxy and how HA is implemented in
that case; I've added Andrew, who might share more details.
Andrew, can you help here?
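
For anyone unfamiliar with that pattern, the moving parts are roughly a
Pacemaker VIP resource plus a colocation constraint with haproxy; a generic
sketch only (resource names and addresses are illustrative, not Fuel's
actual resource definitions):

pcs resource create public_vip ocf:heartbeat:IPaddr2 \
    ip=10.110.3.3 cidr_netmask=24 op monitor interval=5s
# keep the VIP wherever haproxy is actually running; "p_haproxy" is a
# placeholder for whatever the haproxy resource is called
pcs constraint colocation add public_vip with p_haproxy INFINITY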


On Fri, Sep 23, 2016 at 12:11 AM, Kris G. Lindgren
 wrote:
> How do you have HAProxy pointing to the current primary controller? Is this
> done automatically, or are you manually setting a server as the master?
>
> Sent from my iPad
>
>> On Sep 23, 2016, at 5:17 AM, Serg Melikyan  wrote:
>>
>> Hi Joe,
>>
>> I can share some details on how Murano is configured as part of the
>> default Mirantis OpenStack configuration and try to explain why it's
>> done that way; I hope it helps in your case.
>>
>> As part of Mirantis OpenStack, a second RabbitMQ instance is deployed
>> specifically for Murano, but its configuration is different from that
>> of the RabbitMQ instance used by the other OpenStack components.
>>
>> Why use a separate RabbitMQ instance?
>> 1. To prevent access to the RabbitMQ supporting the whole cloud
>> infrastructure by limiting access at the networking level rather than
>> relying on authentication/authorization
>> 2. To prevent DDoS of the infrastructure RabbitMQ by limiting access
>> at the networking level
>>
>> Given that the second RabbitMQ instance is used only for murano-agent
>> <-> murano-engine communication and murano-agent runs on the VMs, we
>> had to make a couple of changes in the deployment of RabbitMQ (below,
>> RabbitMQ refers to the instance used by Murano for m-agent <->
>> m-engine communication):
>>
>> 1. RabbitMQ is not clustered; a separate instance runs on each
>> controller node
>> 2. RabbitMQ is exposed on the Public VIP where all OpenStack APIs are
>> exposed
>> 3. It has a different port number than the default
>> 4. HAProxy is used; RabbitMQ is hidden behind it, and HAProxy always
>> points to the RabbitMQ on the current primary controller
>>
>> Note: how does murano-agent work? Murano-engine creates a queue with a
>> unique name and puts configuration tasks in that queue; these are later
>> picked up by murano-agent when the VM is booted, and murano-agent is
>> configured to use the created queue through cloud-init.
>>
>> #1 Clustering
>>
>> * Per app deployment we create 1-N VMs and send 1-M configuration
>> tasks, where in most cases N and M are less than 3.
>> * Even if an app deployment fails due to a cluster failover, it can
>> always be re-deployed by the user.
>> * A controller-node failover will most probably lead to limited
>> accessibility of the Heat, Nova & Neutron APIs, and the application
>> deployment will fail regardless of the configuration task not
>> executing on the VM.
>>
>> #2 Exposure on the Public VIP
>>
>> One of the reasons behind choosing RabbitMQ as the transport for
>> murano-agent communication was connectivity from the VM - it's much
>> easier to implement connectivity *from* the VM than *to* the VM.
>>
>> But even when you are connecting to the broker from the VM you need
>> connectivity, and the public interface where all other OpenStack APIs
>> are exposed is the most natural way to provide it.
>>
>> #3 A port number different from the default
>>
>> Just to avoid confusion with the RabbitMQ used for the
>> infrastructure, even given that they are on different networks.
>>
>> #4 HAProxy
>>
>> In the default Mirantis OpenStack configuration it is used mostly to
>> support the non-clustered RabbitMQ setup and exposure on the Public
>> VIP, but it is also helpful in more complicated setups.
>>
>> P.S. I hope my answers helped; let me know if I can cover something in
>> more detail.
>> --
>> Serg Melikyan, Development Manager at Mirantis, Inc.
>> http://mirantis.com | smelik...@mirantis.com
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



-- 
Serg Melikyan, Development Manager at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Murano in Production

2016-09-23 Thread Kris G. Lindgren
How do you have HAProxy pointing to the current primary controller? Is this
done automatically, or are you manually setting a server as the master?

Sent from my iPad

> On Sep 23, 2016, at 5:17 AM, Serg Melikyan  wrote:
> 
> Hi Joe,
> 
> I can share some details on how Murano is configured as part of the
> default Mirantis OpenStack configuration and try to explain why it's
> done that way; I hope it helps in your case.
>
> As part of Mirantis OpenStack, a second RabbitMQ instance is deployed
> specifically for Murano, but its configuration is different from that
> of the RabbitMQ instance used by the other OpenStack components.
>
> Why use a separate RabbitMQ instance?
> 1. To prevent access to the RabbitMQ supporting the whole cloud
> infrastructure by limiting access at the networking level rather than
> relying on authentication/authorization
> 2. To prevent DDoS of the infrastructure RabbitMQ by limiting access
> at the networking level
>
> Given that the second RabbitMQ instance is used only for murano-agent
> <-> murano-engine communication and murano-agent runs on the VMs, we
> had to make a couple of changes in the deployment of RabbitMQ (below,
> RabbitMQ refers to the instance used by Murano for m-agent <->
> m-engine communication):
>
> 1. RabbitMQ is not clustered; a separate instance runs on each
> controller node
> 2. RabbitMQ is exposed on the Public VIP where all OpenStack APIs are
> exposed
> 3. It has a different port number than the default
> 4. HAProxy is used; RabbitMQ is hidden behind it, and HAProxy always
> points to the RabbitMQ on the current primary controller
>
> Note: how does murano-agent work? Murano-engine creates a queue with a
> unique name and puts configuration tasks in that queue; these are later
> picked up by murano-agent when the VM is booted, and murano-agent is
> configured to use the created queue through cloud-init.
> 
> #1 Clustering
> 
> * Per app deployment we create 1-N VMs and send 1-M configuration
> tasks, where in most cases N and M are less than 3.
> * Even if an app deployment fails due to a cluster failover, it can
> always be re-deployed by the user.
> * A controller-node failover will most probably lead to limited
> accessibility of the Heat, Nova & Neutron APIs, and the application
> deployment will fail regardless of the configuration task not
> executing on the VM.
>
> #2 Exposure on the Public VIP
>
> One of the reasons behind choosing RabbitMQ as the transport for
> murano-agent communication was connectivity from the VM - it's much
> easier to implement connectivity *from* the VM than *to* the VM.
>
> But even when you are connecting to the broker from the VM you need
> connectivity, and the public interface where all other OpenStack APIs
> are exposed is the most natural way to provide it.
>
> #3 A port number different from the default
>
> Just to avoid confusion with the RabbitMQ used for the
> infrastructure, even given that they are on different networks.
>
> #4 HAProxy
>
> In the default Mirantis OpenStack configuration it is used mostly to
> support the non-clustered RabbitMQ setup and exposure on the Public
> VIP, but it is also helpful in more complicated setups.
>
> P.S. I hope my answers helped; let me know if I can cover something in
> more detail.
> -- 
> Serg Melikyan, Development Manager at Mirantis, Inc.
> http://mirantis.com | smelik...@mirantis.com
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Murano in Production

2016-09-22 Thread Serg Melikyan
Hi Joe,

I can share some details on how Murano is configured as part of the
default Mirantis OpenStack configuration and try to explain why it's
done that way; I hope it helps in your case.

As part of Mirantis OpenStack, a second RabbitMQ instance is deployed
specifically for Murano, but its configuration is different from that
of the RabbitMQ instance used by the other OpenStack components.

Why use a separate RabbitMQ instance?
 1. To prevent access to the RabbitMQ supporting the whole cloud
infrastructure by limiting access at the networking level rather than
relying on authentication/authorization
 2. To prevent DDoS of the infrastructure RabbitMQ by limiting access
at the networking level

Given that the second RabbitMQ instance is used only for murano-agent
<-> murano-engine communication and murano-agent runs on the VMs, we
had to make a couple of changes in the deployment of RabbitMQ (below,
RabbitMQ refers to the instance used by Murano for m-agent <->
m-engine communication):

1. RabbitMQ is not clustered; a separate instance runs on each
controller node
2. RabbitMQ is exposed on the Public VIP where all OpenStack APIs are exposed
3. It has a different port number than the default
4. HAProxy is used; RabbitMQ is hidden behind it, and HAProxy always
points to the RabbitMQ on the current primary controller

Note: how does murano-agent work? Murano-engine creates a queue with a
unique name and puts configuration tasks in that queue; these are later
picked up by murano-agent when the VM is booted, and murano-agent is
configured to use the created queue through cloud-init.
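
Conceptually the exchange looks like the following. This is a stand-alone
illustration using the pika client against a plain broker, not Murano's
actual code, and the message payload is made up for the example:

import json
import uuid

import pika

params = pika.ConnectionParameters(host="public-vip.example.com", port=55572)

# engine side: declare a per-deployment queue and publish a task to it
conn = pika.BlockingConnection(params)
channel = conn.channel()
queue_name = "murano-agent-%s" % uuid.uuid4().hex  # unique per deployment
channel.queue_declare(queue=queue_name, durable=True)
channel.basic_publish(exchange="",
                      routing_key=queue_name,
                      body=json.dumps({"action": "execute",
                                       "script": "deploy.sh"}))
conn.close()

# agent side (runs on the VM; the queue name is handed over via cloud-init)
conn = pika.BlockingConnection(params)
channel = conn.channel()
method, header, body = channel.basic_get(queue=queue_name, auto_ack=True)
if body is not None:
    print("got task:", json.loads(body))
conn.close()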

#1 Clustering

* Per app deployment we create 1-N VMs and send 1-M configuration
tasks, where in most cases N and M are less than 3.
* Even if an app deployment fails due to a cluster failover, it can
always be re-deployed by the user.
* A controller-node failover will most probably lead to limited
accessibility of the Heat, Nova & Neutron APIs, and the application
deployment will fail regardless of the configuration task not
executing on the VM.

#2 Exposure on the Public VIP

One of the reasons behind choosing RabbitMQ as the transport for
murano-agent communication was connectivity from the VM - it's much
easier to implement connectivity *from* the VM than *to* the VM.

But even when you are connecting to the broker from the VM you need
connectivity, and the public interface where all other OpenStack APIs
are exposed is the most natural way to provide it.

#3 A port number different from the default

Just to avoid confusion with the RabbitMQ used for the
infrastructure, even given that they are on different networks.

#4 HAProxy

In the default Mirantis OpenStack configuration it is used mostly to
support the non-clustered RabbitMQ setup and exposure on the Public
VIP, but it is also helpful in more complicated setups.

P.S. I hope my answers helped; let me know if I can cover something in
more detail.
-- 
Serg Melikyan, Development Manager at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Murano in Production

2016-09-18 Thread Joe Topjian
Good call.

I think Matt bringing up Trove is worthwhile, too. If we were to consider
deploying Trove in the future, and now that I've learned it also has an
agent/rabbit setup, there's definitely more weight behind a second
agent-only Rabbit cluster.
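
Sam's max-length suggestion (quoted below) also looks easy to apply per
vhost; something along these lines, if I'm reading the docs right (the
vhost, policy name, and limit are placeholders):

rabbitmqctl set_policy -p murano --apply-to queues \
    agent-max-length ".*" '{"max-length": 100000}'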

On Sun, Sep 18, 2016 at 9:15 PM, Sam Morrison  wrote:

> You could also use https://www.rabbitmq.com/maxlength.html to mitigate
> overflowing on the trove vhost side.
>
>
> Sam
>
>
> On 19 Sep 2016, at 1:07 PM, Joe Topjian  wrote:
>
> Thanks for everyone's input. I think I'm going to go with a single Rabbit
> cluster and separate by vhosts. Our environment is nowhere as large as
> NeCTAR or TWC, so I can definitely understand concern about Rabbit blowing
> the cloud up. We can be a little bit more flexible.
>
> As a precaution, though, I'm going to route everything through a new
> HAProxy frontend. At first, it'll just point to the same Rabbit cluster,
> but if we need to create a separate cluster, we'll swap the backend out.
> That should enable existing Murano agents to continue working.
>
> If this crashes and burns on us, I'll be more than happy to report
> failure. :)
>
> On Sun, Sep 18, 2016 at 7:38 PM, Silence Dogood 
> wrote:
>
>> I'd love to see your results on this. Very interesting stuff.
>>
>> On Sep 17, 2016 1:37 AM, "Joe Topjian"  wrote:
>>
>>> Hi all,
>>>
>>> We're planning to deploy Murano to one of our OpenStack clouds and I'm
>>> debating the RabbitMQ setup.
>>>
>>> For background: the Murano agent that runs on instances requires access
>>> to RabbitMQ. Murano is able to be configured with two RabbitMQ services:
>>> one for traditional OpenStack communication and one for the Murano/Agent
>>> communication.
>>>
>>> From a security/segregation point of view, would vhost separation on our
>>> existing RabbitMQ cluster be sufficient? Or is it recommended to have an
>>> entirely separate cluster?
>>>
>>> As you can imagine, I'd like to avoid having to manage *two* RabbitMQ
>>> clusters. :)
>>>
>>> Thanks,
>>> Joe
>>>
>>> ___
>>> OpenStack-operators mailing list
>>> OpenStack-operators@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>>
>>>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Murano in Production

2016-09-18 Thread Joe Topjian
Thanks for everyone's input. I think I'm going to go with a single Rabbit
cluster and separate by vhosts. Our environment is nowhere as large as
NeCTAR or TWC, so I can definitely understand concern about Rabbit blowing
the cloud up. We can be a little bit more flexible.

As a precaution, though, I'm going to route everything through a new
HAProxy frontend. At first, it'll just point to the same Rabbit cluster,
but if we need to create a separate cluster, we'll swap the backend out.
That should enable existing Murano agents to continue working.
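
In HAProxy terms that's just a frontend whose default_backend we can repoint
later; a minimal sketch (addresses and names are placeholders, not our
actual config):

frontend murano_agent_rmq
  bind 203.0.113.10:55572
  mode tcp
  default_backend rmq_shared

# today this is the shared cluster; if we ever split Murano off, only
# this backend changes and the agents keep the same endpoint
backend rmq_shared
  mode tcp
  balance roundrobin
  server rabbit-1 192.168.0.4:5672 check
  server rabbit-2 192.168.0.5:5672 check backup
  server rabbit-3 192.168.0.6:5672 check backup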

If this crashes and burns on us, I'll be more than happy to report failure.
:)

On Sun, Sep 18, 2016 at 7:38 PM, Silence Dogood 
wrote:

> I'd love to see your results on this. Very interesting stuff.
>
> On Sep 17, 2016 1:37 AM, "Joe Topjian"  wrote:
>
>> Hi all,
>>
>> We're planning to deploy Murano to one of our OpenStack clouds and I'm
>> debating the RabbitMQ setup.
>>
>> For background: the Murano agent that runs on instances requires access
>> to RabbitMQ. Murano is able to be configured with two RabbitMQ services:
>> one for traditional OpenStack communication and one for the Murano/Agent
>> communication.
>>
>> From a security/segregation point of view, would vhost separation on our
>> existing RabbitMQ cluster be sufficient? Or is it recommended to have an
>> entirely separate cluster?
>>
>> As you can imagine, I'd like to avoid having to manage *two* RabbitMQ
>> clusters. :)
>>
>> Thanks,
>> Joe
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Murano in Production

2016-09-18 Thread Silence Dogood
I'd love to see your results on this. Very interesting stuff.

On Sep 17, 2016 1:37 AM, "Joe Topjian"  wrote:

> Hi all,
>
> We're planning to deploy Murano to one of our OpenStack clouds and I'm
> debating the RabbitMQ setup.
>
> For background: the Murano agent that runs on instances requires access to
> RabbitMQ. Murano is able to be configured with two RabbitMQ services: one
> for traditional OpenStack communication and one for the Murano/Agent
> communication.
>
> From a security/segregation point of view, would vhost separation on our
> existing RabbitMQ cluster be sufficient? Or is it recommended to have an
> entirely separate cluster?
>
> As you can imagine, I'd like to avoid having to manage *two* RabbitMQ
> clusters. :)
>
> Thanks,
> Joe
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Murano in Production

2016-09-18 Thread Matt Fischer
+1. This was our concern with Trove also. If a tenant DoSes Trove, we
probably don't all get fired; the rest of rabbit is just too important to
risk sharing.

On Sun, Sep 18, 2016 at 6:53 PM, Sam Morrison  wrote:

> We run completely separate clusters. I’m sure vhosts give you acceptable
> security, but it also means sharing disk and RAM, which means that if
> something went awry and generated lots of messages, it could take your
> whole rabbit cluster down.
>
> Sam
>
>
> > On 17 Sep 2016, at 3:34 PM, Joe Topjian  wrote:
> >
> > Hi all,
> >
> > We're planning to deploy Murano to one of our OpenStack clouds and I'm
> debating the RabbitMQ setup.
> >
> > For background: the Murano agent that runs on instances requires access
> to RabbitMQ. Murano is able to be configured with two RabbitMQ services:
> one for traditional OpenStack communication and one for the Murano/Agent
> communication.
> >
> > From a security/segregation point of view, would vhost separation on our
> existing RabbitMQ cluster be sufficient? Or is it recommended to have an
> entirely separate cluster?
> >
> > As you can imagine, I'd like to avoid having to manage *two* RabbitMQ
> clusters. :)
> >
> > Thanks,
> > Joe
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Murano in Production

2016-09-18 Thread Sam Morrison
We run completely separate clusters. I’m sure vhosts give you acceptable 
security, but it also means sharing disk and RAM, which means that if 
something went awry and generated lots of messages, it could take your whole 
rabbit cluster down.

Sam


> On 17 Sep 2016, at 3:34 PM, Joe Topjian  wrote:
> 
> Hi all,
> 
> We're planning to deploy Murano to one of our OpenStack clouds and I'm 
> debating the RabbitMQ setup.
> 
> For background: the Murano agent that runs on instances requires access to 
> RabbitMQ. Murano is able to be configured with two RabbitMQ services: one for 
> traditional OpenStack communication and one for the Murano/Agent 
> communication.
> 
> From a security/segregation point of view, would vhost separation on our 
> existing RabbitMQ cluster be sufficient? Or is it recommended to have an 
> entirely separate cluster?
> 
> As you can imagine, I'd like to avoid having to manage *two* RabbitMQ 
> clusters. :)
> 
> Thanks,
> Joe
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Murano in Production

2016-09-17 Thread Abel Lopez
I want to imagine that separate vhosts with different usernames and
appropriate permissions would be sufficient. Just as we don't run
separate MySQL instances for different databases, we just create users with
the appropriate permissions.

But, I haven't played with Murano at all, what do I know.
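
For what it's worth, the isolation I have in mind is just the usual vhost
plumbing (vhost, user, and password below are placeholders):

rabbitmqctl add_vhost murano
rabbitmqctl add_user murano_agent s3cret
# conf / write / read permissions limited to the murano vhost only
rabbitmqctl set_permissions -p murano murano_agent ".*" ".*" ".*"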

On Friday, September 16, 2016, Joe Topjian  wrote:

> Hi all,
>
> We're planning to deploy Murano to one of our OpenStack clouds and I'm
> debating the RabbitMQ setup.
>
> For background: the Murano agent that runs on instances requires access to
> RabbitMQ. Murano is able to be configured with two RabbitMQ services: one
> for traditional OpenStack communication and one for the Murano/Agent
> communication.
>
> From a security/segregation point of view, would vhost separation on our
> existing RabbitMQ cluster be sufficient? Or is it recommended to have an
> entirely separate cluster?
>
> As you can imagine, I'd like to avoid having to manage *two* RabbitMQ
> clusters. :)
>
> Thanks,
> Joe
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Murano in Production

2016-09-16 Thread Joe Topjian
Hi all,

We're planning to deploy Murano to one of our OpenStack clouds and I'm
debating the RabbitMQ setup.

For background: the Murano agent that runs on instances requires access to
RabbitMQ. Murano is able to be configured with two RabbitMQ services: one
for traditional OpenStack communication and one for the Murano/Agent
communication.

From a security/segregation point of view, would vhost separation on our
existing RabbitMQ cluster be sufficient? Or is it recommended to have an
entirely separate cluster?

As you can imagine, I'd like to avoid having to manage *two* RabbitMQ
clusters. :)

Thanks,
Joe
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators