As restarts of OVS are service-impacting, I was holding off on uploading
this fix until we had point releases for OVS - which we now do.
** Changed in: openvswitch (Ubuntu Xenial)
Status: Triaged => In Progress
** Also affects: cloud-archive
Importance: Undecided
Status: New
** Also affects: cloud-archive/mitaka
Importance: Undecided
Status: New
** Also affects: cloud-archive/ocata
Importance: Undecided
Status: New
** Also affects: cloud-archive/pike
Importance: Undecided
Status: New
** Changed in: cloud-archive
Status: New => Fix Released
** Changed in: cloud-archive/mitaka
Status: New => In Progress
** Changed in: cloud-archive/ocata
Status: New => In Progress
** Changed in: cloud-archive/pike
Status: New => In Progress
** Changed in: cloud-archive/mitaka
Importance: Undecided => Medium
** Changed in: cloud-archive/ocata
Importance: Undecided => Medium
** Changed in: cloud-archive/pike
Importance: Undecided => Medium
** Description changed:
- When there are a large number of routers and dhcp agents on a host, we
- see a syslog error repeated:
+ [Impact]
+ OpenStack environments running large numbers of routers and dhcp agents on a single host can hit the NOFILE limit in OVS, resulting in broken operation of virtual networking.
+
+ [Test Case]
+ Deploy an OpenStack environment; create a large number of virtual networks and routers.
+ OVS will start to error with 'Too many open files'.
+
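A quick way to confirm the failure mode is to compare the daemon's open file descriptor count against its limit; a rough sketch (the pidfile path is the Ubuntu default and may differ on other deployments):

  pid=$(cat /var/run/openvswitch/ovs-vswitchd.pid)   # default pidfile location
  ls /proc/$pid/fd | wc -l                           # current fd count
  grep 'Max open files' /proc/$pid/limits            # effective soft/hard limit
  grep -c 'Too many open files' /var/log/syslog      # occurrences of the error
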
+ [Regression Potential]
+ Minimal - we're just increasing the NOFILE limit via the systemd service definition.
+
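The actual change is carried in the package itself; purely for illustration, the same effect can be achieved with a systemd drop-in along these lines (the unit name and the 1048576 value are assumptions - on some releases the daemons are managed via openvswitch-switch.service rather than ovs-vswitchd.service, and the value used by the upload may differ):

  mkdir -p /etc/systemd/system/ovs-vswitchd.service.d
  printf '[Service]\nLimitNOFILE=1048576\n' \
      > /etc/systemd/system/ovs-vswitchd.service.d/nofile.conf
  systemctl daemon-reload
  systemctl restart ovs-vswitchd    # note: restarting OVS is service impacting
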
+ [Original Bug Report]
+ When there are a large number of routers and dhcp agents on a host, we see a syslog error repeated:
"hostname ovs-vswitchd: ovs|1762125|netlink_socket|ERR|fcntl: Too many open files"
If I check the number of file handles owned by the pid for "ovs-vswitchd unix:/var/run/openvswitch/db.sock", I see close to or at 65535 files.
If I then run the following, the limit is doubled and (in our case) the count rose to over 80000:
  prlimit -p $pid --nofile=131070
We need to be able to:
- monitor via NRPE whether the process is running short on file handles (see the sketch below)
- configure the limit so we have the option to not run out.
Currently, if I restart the process, we'll lose this setting.
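For the monitoring side, a minimal NRPE-style check could compare the fd count against the soft limit; a sketch only (the script name, pidfile path and 75%/90% thresholds are illustrative, and it needs enough privilege to read /proc/<pid>/fd):

  #!/bin/sh
  # check_ovs_fds (sketch): warn/crit as ovs-vswitchd approaches its fd limit
  pid=$(cat /var/run/openvswitch/ovs-vswitchd.pid) || exit 3   # UNKNOWN if absent
  used=$(ls /proc/$pid/fd 2>/dev/null | wc -l)
  limit=$(awk '/Max open files/ {print $4}' /proc/$pid/limits)
  pct=$((used * 100 / limit))
  echo "ovs-vswitchd using $used of $limit file descriptors ($pct%)"
  [ "$pct" -ge 90 ] && exit 2   # CRITICAL
  [ "$pct" -ge 75 ] && exit 1   # WARNING
  exit 0
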
Needless to say, openvswitch running out of filehandles causes all
manner of problems for services which use it.
** Changed in: openvswitch (Ubuntu Xenial)
Assignee: (unassigned) => James Page (james-page)
** Changed in: cloud-archive/pike
Assignee: (unassigned) => James Page (james-page)
** Changed in: cloud-archive/ocata
Assignee: (unassigned) => James Page (james-page)
** Changed in: cloud-archive/mitaka
Assignee: (unassigned) => James Page (james-page)
https://bugs.launchpad.net/bugs/1737866
Title:
Too many open files when large number of routers on a host