Hi,
> Message written by Matt Riedemann on 03.06.2018 at 16:54:
>
> On 6/2/2018 1:37 AM, Chris Apsey wrote:
>> This is great. I would even go so far as to say the install docs should be
>> updated to capture this as the default; as far as I know there is no
>> negative impact when running in daemon mode, even on very small
>> deployments. I would imagine that there are operators out there who have
>> run into
On 5/30/2018 9:30 AM, Matt Riedemann wrote:
I can start pushing some docs patches and report back here for review help.
Here are the docs patches in both nova and neutron:
https://review.openstack.org/#/q/topic:bug/1774217+(status:open+OR+status:merged)
--
Thanks,
Matt
On 5/29/2018 8:23 PM, Chris Apsey wrote:
I want to echo the effectiveness of this change - we had vif failures
when launching more than 50 or so cirros instances simultaneously, but
moving to daemon mode made this issue disappear and we've tested 5x that
amount. This has been the single biggest scalability improvement to date.
Hi,
just to let you know. Problem is now gone. Instances boot up with working
network interface.
Thanks a lot,
Radu
On Tue, 2018-05-29 at 21:23 -0400, Chris Apsey wrote:
I want to echo the effectiveness of this change - we had vif failures when
launching more than 50 or so cirros instances simultaneously, but moving to
daemon mode made this issue disappear and we've tested 5x that amount.
This has been the single biggest scalability improvement to date.
Glad to hear it!
Always monitor RabbitMQ queues to identify bottlenecks!! :)
Cheers
Saverio
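Queue depth is easy to check with `rabbitmqctl list_queues name messages` on a broker node. As an illustrative sketch (the helper function and sample output below are hypothetical, not from this thread), a few lines of Python can flag backlogged queues from that output:

```python
def backlogged_queues(listing: str, threshold: int = 1000):
    """Return (queue, depth) pairs whose message count exceeds threshold.

    Expects text shaped like `rabbitmqctl list_queues name messages` output:
    one queue per line, name first, message count last.
    """
    flagged = []
    for line in listing.strip().splitlines():
        parts = line.split()
        # Skip header/footer lines such as "Listing queues ..."
        if len(parts) < 2 or not parts[-1].isdigit():
            continue
        name, depth = parts[0], int(parts[-1])
        if depth > threshold:
            flagged.append((name, depth))
    return flagged

sample = """Listing queues ...
q-plugin        12
q-agent-notifier-port-update_fanout_abc 4321
notifications.info      0
"""
print(backlogged_queues(sample))  # → [('q-agent-notifier-port-update_fanout_abc', 4321)]
```

A deep, growing queue (neutron agent fanouts are a common culprit during mass boots) usually points at the consumer side being too slow, which is exactly what rootwrap daemon mode helps with.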
On Thu, 24 May 2018 at 11:07, Radu Popescu | eMAG, Technology <radu.pope...@emag.ro> wrote:
Hi,
did the change yesterday. Had no issue this morning with neutron not being able
to move fast enough. Still, we had some storage issues, but that's another
thing.
Anyway, I'll leave it like this for the next few days and report back in case I
get the same slow neutron errors.
Thanks a lot!
Hi,
actually, I didn't know about that option. I'll enable it right now.
Testing is done every morning at about 4:00 AM, so I'll know tomorrow morning
if it changed anything.
Thanks,
Radu
On Tue, 2018-05-22 at 15:30 +0200, Saverio Proto wrote:
Sorry email went out incomplete.
Read this:
https://cloudblog.switch.ch/2017/08/28/starting-1000-instances-on-switchengines/
make sure that OpenStack rootwrap is configured to work in daemon mode
Thank you
Saverio
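For reference, a minimal sketch of what enabling daemon mode looks like (option names as in Ocata-era nova and neutron; verify against the configuration reference for your release):

```ini
# /etc/nova/nova.conf (compute nodes)
[DEFAULT]
use_rootwrap_daemon = True

# /etc/neutron/neutron.conf (network/agent nodes)
[agent]
root_helper_daemon = sudo neutron-rootwrap-daemon /etc/neutron/rootwrap.conf
```

Instead of forking a new rootwrap process (and re-parsing the filter files) for every privileged command, the agents talk to one long-lived daemon, which cuts per-command overhead dramatically during mass instance boots.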
2018-05-22 15:29 GMT+02:00 Saverio Proto:
Hello Radu,
do you have the OpenStack rootwrap configured to work in daemon mode?
Please read this article:
2018-05-18 10:21 GMT+02:00 Radu Popescu | eMAG, Technology:
Hi,
so, nova says the VM is ACTIVE and actually boots with no network. We are
setting some metadata that we use later on and have cloud-init for different
tasks.
So, the VM is up and the OS is running, but the network only comes up after
a random amount of time that can reach around 45 minutes. Thing is,
We have other scheduled tests that perform end-to-end (assign floating IP,
ssh, ping outside) and never had an issue.
I think we turned it off because the callback code was initially buggy and
nova would wait forever while things were in fact ok, but I'll change
"vif_plugging_is_fatal = True" and
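These are real nova.conf options; as a sketch, the stricter setting being proposed here looks like this (the 300-second default is per nova's configuration reference, worth double-checking for your release):

```ini
# /etc/nova/nova.conf on compute nodes
[DEFAULT]
# Fail the boot instead of marking the guest ACTIVE with no network
# when Neutron never sends the network-vif-plugged event.
vif_plugging_is_fatal = True
# Seconds to wait for the network-vif-plugged event (default: 300).
vif_plugging_timeout = 300
```

With `vif_plugging_is_fatal = False` and `vif_plugging_timeout = 0`, nova does not wait at all, which is how instances end up ACTIVE but unreachable.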
On 5/17/2018 9:46 AM, George Mihaiescu wrote:
and large rally tests of 500 instances complete with no issues.
Sure, except you can't ssh into the guests.
The whole reason the vif plugging fatal/timeout and callback code exists is
that the upstream CI was unstable without it. The server
We use "vif_plugging_is_fatal = False" and "vif_plugging_timeout = 0" as
well as "no-ping" in the dnsmasq-neutron.conf, and large rally tests of 500
instances complete with no issues.
These are some good blogposts about Neutron performance:
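The dnsmasq tweak mentioned above can be wired in as follows (a sketch; the file paths are the conventional ones, adjust to your deployment):

```ini
# /etc/neutron/dhcp_agent.ini - point the DHCP agent at a custom dnsmasq config
[DEFAULT]
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

# /etc/neutron/dnsmasq-neutron.conf
# no-ping makes dnsmasq skip its ICMP probe of an address before offering
# a lease, which speeds up DHCP under heavy parallel boots.
no-ping
```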
Hi,
unfortunately, didn't get the reply in my inbox, so I'm answering from the link
here:
http://lists.openstack.org/pipermail/openstack-operators/2018-May/015270.html
(hopefully, my reply will go to the same thread)
Anyway, I can see the neutron openvswitch agent logs processing the interface
On 5/16/2018 10:30 AM, Radu Popescu | eMAG, Technology wrote:
but I can see nova attaching the interface after a huge amount of time.
What specifically are you looking for in the logs when you see this?
Are you passing pre-created ports to attach to nova or are you passing a
network ID so
Hi all,
we have the following setup:
- OpenStack Ocata deployed with OpenStack-Ansible (v15.1.7)
- 66 compute nodes, each having between 50 and 150 VMs, depending on their
hardware configuration
- we don't use Ceilometer (so no extra load on the RabbitMQ cluster)
- using Open vSwitch HA with