Re: [Openstack-operators] Murano in Production

2016-09-23 Thread Joe Topjian
Hi Serg,

Thank you for sharing this information :)

If I'm understanding correctly, the main reason you're using a
non-clustered / corosync setup is because that's how most other components
in Mirantis OpenStack are configured? Is there anything to be aware of in
how Murano communicates over the agent/engine rmq in a clustered rmq setup?

Also, is it safe to say that communication between the agent and engine only
happens, and will only ever happen, during app deployment? Meaning, if the rmq
server goes down (let's even say it goes away permanently, for the sake of
exaggeration), nothing bad will come of it beyond some errors in the agent log?

With regard to a different port and a publicly accessible address, I agree,
and we'll be deploying the same way.

One thing we just ran into, though, was getting the agent/engine rmq config
to work with SSL. For some reason the murano/openstack configuration (done
via oslo) had no problems recognizing our SSL cert, but the agent/engine side
did not like it at all. The Ubuntu Cloud packages haven't been updated for a
while, so we ended up patching in the "insecure" option in both the engine and
agent templates (btw: very nice that the agent can be installed via
cloud-init -- I really didn't want to manage a second set of images just to
have the agent pre-installed).
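
In case it helps anyone else, after patching, the relevant bit of murano.conf
ended up looking roughly like the snippet below. The [rabbitmq] section is
Murano's dedicated agent/engine broker config (separate from the oslo.messaging
settings); the host and port values are placeholders, and the exact option
names should be treated as illustrative rather than authoritative:

[rabbitmq]
# dedicated agent/engine broker, separate from the oslo.messaging settings
host = cloud.example.com
port = 55572
login = murano
password = SECRET
virtual_host = /
ssl = True
# skip certificate verification; this is the option we had to patch in
insecure = True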

Thank you again,
Joe

On Thu, Sep 22, 2016 at 10:13 PM, Serg Melikyan wrote:

> Hi Joe,
>
> I can share some details on how murano is configured as part of the
> default Mirantis OpenStack configuration and try to explain why it's
> done the way it is; I hope it helps in your case.
>
> As part of Mirantis OpenStack, a second RabbitMQ instance is deployed
> specifically for Murano, but its configuration is different from that
> of the RabbitMQ instance used by the other OpenStack components.
>
> Why use a separate RabbitMQ instance?
>  1. To prevent access to the RabbitMQ supporting the whole cloud
> infrastructure by limiting access at the network level rather than
> relying on authentication/authorization
>  2. To prevent DDoS against the infrastructure RabbitMQ by limiting
> access at the network level
>
> Given that this second RabbitMQ instance is used only for murano-agent
> <-> murano-engine communications, and that murano-agent runs on the
> VMs, we had to make a couple of changes to how this RabbitMQ is
> deployed (below, "RabbitMQ" refers to the instance used by Murano for
> m-agent <-> m-engine communications):
>
> 1. RabbitMQ is not clustered; a separate instance runs on each
> controller node
> 2. RabbitMQ is exposed on the public VIP where all OpenStack APIs are
> exposed
> 3. It uses a different port number than the default
> 4. RabbitMQ sits behind HAProxy, and HAProxy always points to the
> RabbitMQ on the current primary controller
>
> Note: how does murano-agent work? Murano-engine creates a queue with a
> unique name and puts configuration tasks into that queue; they are
> later picked up by murano-agent once the VM boots, with murano-agent
> configured through cloud-init to use the created queue.
>
> #1 Clustering
>
> * Per one app deployment we create 1-N VMs and send 1-M configuration
> tasks, where in most cases N and M are less than 3.
> * Even if an app deployment fails due to a cluster failover, it can
> always be re-deployed by the user.
> * A controller-node failover will most likely cause limited
> availability of the Heat, Nova & Neutron APIs, so the application
> deployment would fail anyway, regardless of the configuration task not
> executing on the VM.
>
> #2 Exposure on the Public VIP
>
> One of the reasons behind choosing RabbitMQ as the transport for
> murano-agent communications was connectivity from the VM - it's much
> easier to implement connectivity *from* the VM than *to* the VM.
>
> But even when connecting to the broker from the VM you still need
> connectivity, and the public interface where all other OpenStack APIs
> are exposed is the most natural way to provide it.
>
> #3 Non-default port number
>
> Just to avoid confusion with the RabbitMQ used for the infrastructure,
> even though they are on different networks.
>
> #4 HAProxy
>
> In the default Mirantis OpenStack configuration it is used mostly to
> support the non-clustered RabbitMQ setup and the exposure on the public
> VIP, but it is also helpful in more complicated setups.
>
> P.S. I hope my answers helped; let me know if I can cover anything in
> more detail.
> --
> Serg Melikyan, Development Manager at Mirantis, Inc.
> http://mirantis.com | smelik...@mirantis.com
>
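
For reference, the cloud-init part mentioned above can be as small as a
user-data snippet along these lines. This is only a rough sketch of the
mechanism Serg describes: the package name, config path and option names are
assumptions, and in practice murano-engine generates this configuration
(including the unique queue name) for you:

#cloud-config
packages:
  - murano-agent
write_files:
  - path: /etc/murano/agent.conf
    content: |
      [rabbitmq]
      host = cloud.example.com
      port = 55572
      login = murano
      password = SECRET
      virtual_host = /
      ssl = True
      # name of the per-deployment queue created by murano-engine (illustrative)
      input_queue = QUEUE_NAME
runcmd:
  - service murano-agent restart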
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Public cloud operators group in

2016-09-23 Thread Rochelle Grober
Hi Matt,



At considerable risk of heading down a rabbit hole... how are you defining 
"public" cloud for these purposes?



Cheers,

Blair

Any cloud that provides cloud services to a third party in exchange for money. So:
rent a VM, rent a collection of VMs, lease a fully operational cloud spec'ed to
your requirements, or lease a team and hardware with your cloud on them.

So, any cloud that provides offsite IaaS to lessees.

--Rockyy
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Evacuate host with host ephemeral storage

2016-09-23 Thread Marcin Iwinski
Hi,
double-check whether the "nova migrate" option will work for you - it uses a
different mechanism than live migration, and judging by [1], depending on the
OpenStack release you use it might work with LVM-backed ephemeral storage. And
I second Kostiantyn's email - we seem to be mixing up evacuation and migration.


[1] https://bugs.launchpad.net/nova/+bug/1270305
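
For clarity, the cold-migration path being suggested is the two-step flow
below (the instance UUID is a placeholder); whether the LVM-backed ephemeral
disk actually gets copied is exactly the open question, so treat it as the
thing to test rather than a guarantee:

nova migrate INSTANCE_UUID
# wait for the instance to reach VERIFY_RESIZE on the target host, then either:
nova resize-confirm INSTANCE_UUID
# or roll back with:
nova resize-revert INSTANCE_UUID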

Regards,
Marcin


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Evacuate host with host ephemeral storage

2016-09-23 Thread Davíð Örn Jóhannsson
Thank you for the clarification. My digging around has thus far only revealed
https://bugs.launchpad.net/nova/+bug/1499405 (live migration is not implemented
for LVM-backed storage).

If anyone has any more info on this subject, it would be much appreciated.


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Evacuate host with host ephemeral storage

2016-09-23 Thread Kostiantyn.Volenbovskyi
Hi,

Migration and evacuation are getting mixed up here.
In the migration case you can access the ephemeral storage of your VM, and thus
you copy that disk (= that file), either offline (aka ‘cold’ migration) or via
live migration.
In the evacuation case your compute host is either unavailable (or assumed to be
unavailable), and thus you can’t access (or must assume you can’t access)
whatever is stored on that compute host.
So if your ephemeral disk (= the VM's root disk) is actually on that compute
host, you can’t access it, and evacuation will result in a rebuild
(= taking the original image from Glance, so you typically lose whatever
happened after the initial boot).

But if you have something shared underneath (like NFS), then use
--on-shared-storage:
nova evacuate EVACUATED_SERVER_NAME HOST_B --on-shared-storage
(I guess it may even detect that automatically in some cases.)
But LVM on top of an NFS share – that sounds like something not very
straightforward (not sure whether it works well with OpenStack).

See [1] and [2]
BR,
Konstantin
[1] 
http://docs.openstack.org/admin-guide/compute-configuring-migrations.html#section-configuring-compute-migrations
[2] http://docs.openstack.org/admin-guide/cli-nova-evacuate.html
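
To make the distinction concrete, the two outcomes described above boil down
to something like this (server and host names are placeholders):

# ephemeral disk lives only on the failed host: the instance is rebuilt on the
# target host from its original Glance image, and local data is lost
nova evacuate EVACUATED_SERVER_NAME HOST_B

# ephemeral disk is on shared storage (e.g. NFS): the existing disk is reused
nova evacuate EVACUATED_SERVER_NAME HOST_B --on-shared-storage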

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Murano in Production

2016-09-23 Thread Matt Fischer
Other than #1, that's exactly the same design we used for Trove. Glad to see
someone else using it too - good validation. Thanks.

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Evacuate host with host ephemeral storage

2016-09-23 Thread Davíð Örn Jóhannsson
No I have not; I guess there is nothing else to do but just give it a go :)

Thanks for the pointer


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [Massively Distributed] Working sessions in Barcelona.

2016-09-23 Thread lebre . adrien
Dear all

The Massively Distributed Cloud Working Group [1] will meet on Thursday morning
at the Barcelona Summit.
We have three slots. The current proposal is to discuss distributed
clouds/Fog/Edge use cases and requirements during the first two, and to use the
last one to define concrete actions for the Ocata cycle.

If you are interested in taking part in the exchanges, please have a look at
[2] and do not hesitate to complete/amend the agenda proposal.

Thanks,  
Ad_rien_ 

[1] https://wiki.openstack.org/wiki/Massively_Distributed_Clouds
[2] 
https://etherpad.openstack.org/p/massively_distribute-barcelona_working_sessions

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Evacuate host with host ephemeral storage

2016-09-23 Thread Marcin Iwinski
On 23 Sep 2016 at 13:25:39, Davíð Örn Jóhannsson (davi...@siminn.is) wrote:

OpenStack Liberty
Ubuntu 14.04

I know that using block storage like Cinder you can evacuate instances from
hosts, but in my case we are not yet using Cinder or other block storage
solutions; we rely on local ephemeral storage, configured using LVM.

Nova.conf
[libvirt]
images_volume_group=vg_ephemeral
images_type=lvm

Is it possible to evacuate (migrate) ephemeral instances from compute hosts,
and if so, does anyone have any experience with that?


Hi Davíð

Have you actually tried the regular "nova migrate UUID" option? It does
copy the entire disk to a different compute node - but I'm not sure whether it
works with LVM. I've also used [1] ("nova live-migrate --block-migrate
UUID") in the past - but unfortunately that wasn't LVM-backed ephemeral
storage either.

[1]
http://www.tcpcloud.eu/en/blog/2014/11/20/block-live-migration-openstack-environment/

BR
Marcin
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Murano in Production

2016-09-23 Thread Serg Melikyan
Kris,

if I understand correctly, we use pacemaker/corosync to manage our
cluster. When the primary controller is detected as failed, pacemaker
updates the HAProxy configuration to point to the new primary controller.

I don't know all the details of how HAProxy and HA are set up in that
case; I've added Andrew, who might be able to share more details.
Andrew, can you help here?
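
For context, the effect described above looks roughly like the HAProxy snippet
below at any given moment, with pacemaker rewriting which server is active
when the primary controller changes. The listener name, port and addresses are
placeholders rather than the exact generated config:

listen murano_rabbitmq
  bind PUBLIC_VIP:55572
  mode tcp
  option tcpka
  # only the current primary controller is active; the others are backups
  server node-1 10.0.0.2:55572 check
  server node-2 10.0.0.3:55572 check backup
  server node-3 10.0.0.4:55572 check backup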


On Fri, Sep 23, 2016 at 12:11 AM, Kris G. Lindgren wrote:
> How are you having ha proxy pointing to the current primary controller?  Is 
> this done automatically or are you manually setting a server as the master?
>
> Sent from my iPad
>



-- 
Serg Melikyan, Development Manager at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Murano in Production

2016-09-23 Thread Kris G. Lindgren
How do you have HAProxy pointing to the current primary controller? Is
this done automatically, or are you manually setting a server as the master?

Sent from my iPad


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators