Hi Serg,
We were indeed hitting that bug, but the cert wasn't self-signed. It was
easier for us to manually patch the Ubuntu Cloud package of Murano with the
stable/mitaka fix linked in that bug report than trying to debug where
OpenSSL/python/requests/etc was going awry.
We might redeploy Murano
Hi Joe,
>Also, is it safe to say that communication between agent/engine only, and will
>only, happen during app deployment?
murano-agent & murano-engine keep an active connection to the RabbitMQ
broker, but message exchange happens only during deployment of the app.
>One thing we just ran into, t
In Fuel we deploy HAProxy to all of the nodes that are part of the
VIP/endpoint service (this is usually part of the controller role). Then the
VIPs (internal or public) can be active on any member of the group.
Corosync/Pacemaker is used to move the VIP address (as opposed to
keepalived) in our cas
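For illustration, a minimal sketch of the kind of HAProxy stanza this setup
typically uses for RabbitMQ, with one active server and the rest as backups
(the VIP, node names, and addresses below are made up, not taken from an
actual Fuel deployment):

```
listen rabbitmq
  bind 10.0.0.100:5672          # internal VIP, moved by pacemaker/corosync
  balance roundrobin
  option tcpka                  # keep long-lived AMQP connections alive
  timeout client 48h
  timeout server 48h
  server node-1 10.0.0.11:5672 check inter 5000 rise 2 fall 3
  server node-2 10.0.0.12:5672 backup check inter 5000 rise 2 fall 3
  server node-3 10.0.0.13:5672 backup check inter 5000 rise 2 fall 3
```

Marking all but one server as `backup` keeps all AMQP traffic on a single
node at a time, which avoids cross-node queue mirroring traffic becoming the
bottleneck.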
Hi Serg,
Thank you for sharing this information :)
If I'm understanding correctly, the main reason you're using a
non-clustered / corosync setup is because that's how most other components
in Mirantis OpenStack are configured? Is there anything to be aware of in
how Murano communicates over the a
Would you mind sharing an example snippet from the HAProxy config? I had
struggled in the past with getting this part to work.
> On Sep 23, 2016, at 12:13 AM, Serg Melikyan wrote:
>
> Hi Joe,
>
> I can share some details on how murano is configured as part of the
> default Mirantis OpenStack co
Other than #1, that's exactly the same design we used for Trove. Glad to see
someone else using it too for validation. Thanks.
On Sep 22, 2016 11:39 PM, "Serg Melikyan" wrote:
> Hi Joe,
>
> I can share some details on how murano is configured as part of the
> default Mirantis OpenStack configurat
Kris,
if I understand correctly, we use pacemaker/corosync to manage our
cluster. When the primary controller is detected as failed, pacemaker
updates the HAProxy configuration to point to the new primary controller.
I don't know all the details regarding HAProxy and how HA is done in that
case, I've added
How are you getting HAProxy to point to the current primary controller? Is
this done automatically or are you manually setting a server as the master?
Sent from my iPad
> On Sep 23, 2016, at 5:17 AM, Serg Melikyan wrote:
>
> Hi Joe,
>
> I can share some details on how murano is configured as
Hi Joe,
I can share some details on how murano is configured as part of the
default Mirantis OpenStack configuration and try to explain why it's
done that way; I hope it helps you in your case.
As part of Mirantis OpenStack, a second instance of RabbitMQ is
getting deployed speci
Good call.
I think Matt bringing up Trove is worthwhile, too. If we were to consider
deploying Trove in the future, and now that I've learned it also has an
agent/rabbit setup, there's definitely more weight behind a second
agent-only Rabbit cluster.
On Sun, Sep 18, 2016 at 9:15 PM, Sam Morrison
You could also use https://www.rabbitmq.com/maxlength.html to mitigate
overflowing on the trove vhost side.
Sam
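For reference, the max-length cap Sam links to can be applied as a policy on
the agent-facing vhost; a sketch, where the vhost name and the limit are
illustrative, not values from anyone's deployment:

```shell
# Cap queue length on the trove vhost so a runaway publisher can't
# exhaust disk/RAM for the whole broker; once the limit is hit, the
# oldest messages are dropped (RabbitMQ's default overflow behaviour).
rabbitmqctl set_policy -p trove max_len ".*" \
  '{"max-length": 100000}' --apply-to queues
```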
> On 19 Sep 2016, at 1:07 PM, Joe Topjian wrote:
>
> Thanks for everyone's input. I think I'm going to go with a single Rabbit
> cluster and separate by vhosts. Our environment is
Thanks for everyone's input. I think I'm going to go with a single Rabbit
cluster and separate by vhosts. Our environment is nowhere as large as
NeCTAR or TWC, so I can definitely understand concern about Rabbit blowing
the cloud up. We can be a little bit more flexible.
As a precaution, though, I
I'd love to see your results on this . Very interesting stuff.
On Sep 17, 2016 1:37 AM, "Joe Topjian" wrote:
> Hi all,
>
> We're planning to deploy Murano to one of our OpenStack clouds and I'm
> debating the RabbitMQ setup.
>
> For background: the Murano agent that runs on instances requires a
+1 This was our concern also with Trove. If a tenant DoSes Trove we
probably don't all get fired. The rest of rabbit is just too important to
risk sharing.
On Sun, Sep 18, 2016 at 6:53 PM, Sam Morrison wrote:
> We run completely separate clusters. I’m sure vhosts give you acceptable
> security b
We run completely separate clusters. I’m sure vhosts give you acceptable
security, but it also means sharing disk and RAM, so if something went
awry and generated lots of messages it could take your whole rabbit
cluster down.
Sam
> On 17 Sep 2016, at 3:34 PM, Joe Topjian wrote:
>
I want to imagine that separate vhosts with different usernames and
appropriate permissions would be sufficient. Just like we don't run
separate MySQL instances for different databases, we just make users with
permissions.
But, I haven't played with Murano at all, what do I know.
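In that spirit, the vhost-per-service setup amounts to a few broker-side
commands; a sketch with placeholder names and password:

```shell
# Isolated vhost for the Murano agents, with a dedicated user that is
# confined to it and has no access to the main OpenStack vhost:
rabbitmqctl add_vhost murano
rabbitmqctl add_user murano_agent CHANGEME
rabbitmqctl set_permissions -p murano murano_agent ".*" ".*" ".*"
```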
On Friday, Sept
Hi all,
We're planning to deploy Murano to one of our OpenStack clouds and I'm
debating the RabbitMQ setup.
For background: the Murano agent that runs on instances requires access to
RabbitMQ. Murano is able to be configured with two RabbitMQ services: one
for traditional OpenStack communication