Hi list,

Last week we upgraded our Mitaka cloud (with Ceph backend) to Ocata, via Newton of course, and also upgraded the cloud nodes from openSUSE Leap 42.1 to Leap 42.3. There were some issues, as expected, but luckily no showstoppers, so the cloud is up and working again.

However, our monitoring now shows a high CPU load for the cinder-volume service on the control node. Since all the clients are on the compute nodes, we are wondering what cinder actually does on the control node, apart from initializing connections, of course. I captured a tcpdump on the control node and saw a lot of connections to the Ceph nodes; the payload contains all these rbd_header objects, e.g. rb.0.24d5b04[...]. I would expect this kind of traffic on the compute nodes, but why does the control node also establish so many connections?
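For anyone wanting to reproduce the observation without wading through a full tcpdump, here is a minimal sketch. It assumes the default Ceph monitor port 6789 and that the cinder pool is named "volumes" (adjust both to your deployment); the rbd part is commented out because it needs Ceph admin credentials:

```shell
# On the control node: count established TCP connections to the default
# Ceph monitor port (6789). A surprisingly high number on a node that
# should only be doing management traffic matches the tcpdump finding.
ss -tn state established '( dport = :6789 )' | tail -n +2 | wc -l

# To map an observed header prefix (e.g. rb.0.24d5b04...) back to a
# volume image, compare it against each image's block_name_prefix.
# Assumes the cinder pool is named "volumes"; requires Ceph credentials:
# for img in $(rbd -p volumes ls); do
#     echo "$img: $(rbd -p volumes info "$img" | awk '/block_name_prefix/ {print $2}')"
# done
```

The `block_name_prefix` reported by `rbd info` is the string the per-object names are built from, so matching it against the names seen on the wire tells you which volume the traffic belongs to.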

I'd appreciate any insight!

Regards,
Eugen

--
Eugen Block                             voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG      fax     : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg                         e-mail  : ebl...@nde.ag

        Vorsitzende des Aufsichtsrates: Angelika Mozdzen
          Sitz und Registergericht: Hamburg, HRB 90934
                  Vorstand: Jens-U. Mozdzen
                   USt-IdNr. DE 814 013 983


_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack