tuanhoangth1603 commented on issue #12096:
URL: https://github.com/apache/cloudstack/issues/12096#issuecomment-3582038701

   @weizhouapache Today I ran into a strange issue that seems related to this one.
   I had to correct a wrong slave interface name inside a bond. After editing 
the netplan YAML and running `netplan apply`, the bond came up correctly and 
network was fully restored.
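   For context, the bond section of my netplan YAML looked roughly like this (interface and bond names here are placeholders, not my actual config):
   ```yaml
   network:
     version: 2
     ethernets:
       eno1: {}
       eno2: {}
     bonds:
       bond0:
         # the wrong slave interface name was corrected in this list
         interfaces: [eno1, eno2]
         parameters:
           mode: 802.3ad
   ```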
   However, immediately after that, CloudStack permanently lost the Ceph RBD 
primary storage mount with this repeated agent error:
   ```
   Failed to create RBD storage pool: org.libvirt.LibvirtException: failed to 
create the RBD IoCTX. Does the pool 'cloudstack-prod' exist?: No such file or 
directory
   ```
   Meanwhile, the other agents stayed connected normally, so I don't think the issue is on the Ceph side.
   Interestingly, if I reboot the entire host, the problem disappears and the 
RBD pool is mounted normally again.
   Could a simple `netplan apply` (which only briefly interrupts the network) permanently break libvirt's ability to create the RBD pool, while a full host reboot fixes it?
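   In case it helps narrow this down, these are the checks I would run on the affected host before resorting to a reboot (the pool name `cloudstack-prod` is taken from the error above; the libvirt pool name on the host may differ):
   ```shell
   # Check whether libvirt still has the RBD storage pool defined/active
   virsh pool-list --all

   # Verify the Ceph side directly from the host
   ceph -s
   rbd ls cloudstack-prod

   # Try restarting just libvirtd and the agent instead of a full reboot
   systemctl restart libvirtd
   systemctl restart cloudstack-agent
   ```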


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
