Hi guys,
I am using Ceph as my CloudStack cluster storage, but I cannot add the Ceph RBD pool to my CloudStack cluster. The related info is as follows.

The RADOS monitor (the label.rados.monitor field) is 10.29.44.1:6789. Both the KVM node and the v-xxx-VM / s-xxx-VM system VMs can reach the Ceph node:

kvm001:~# telnet 10.29.44.1 6789
Trying 10.29.44.1...
Connected to 10.29.44.1.
Escape character is '^]'.
ceph v027

root@s-252-VM:~# telnet 10.29.44.1 6789
Trying 10.29.44.1...
Connected to 10.29.44.1.
Escape character is '^]'.
ceph v027

The RBD pool 'cloudstack' exists, and the user 'cloudstack' has access to it. The secret is 'AQDLxqlhIdOLJRAABPqps8O6eSGbFnyR7aSJwQ==', which contains no / (slash). The KVM node has the Ceph configuration and keyrings:

# ls /etc/ceph/
ceph.client.admin.keyring  ceph.client.cloudstack.keyring  ceph.conf  rbdmap

The libvirt secret list and /etc/libvirt/secrets/ are both empty:

# virsh pool-list
 Name                                   State    Autostart
------------------------------------------------------------
 60b59087-7c53-3058-a50c-f50737e556bc   active   no
 c4355ed4-8833-381f-b3f7-2981782ee3fa   active   no
 c8e9ca6a-c004-3851-a074-19f4948b28ff   active   no
 d8dabcb0-1a57-4e13-8a82-339b2052dec1   active   no

# virsh secret-list
 UUID   Usage
---------------

# ls -a /etc/libvirt/secrets/
.  ..

I cannot find the storage pool d8dabcb0-1a57-4e13-8a82-339b2052dec1 in the CloudStack UI, and the pool UUID changes every time I click the "Add primary storage" button again. After checking all the configuration, I restarted the management-server and cloudstack-agent services. The error is still the same:

org.libvirt.LibvirtException: failed to create the RBD IoCTX.
Does the pool 'cloudstack' exist?: No such file or directory

The agent log:

2021-12-06 17:54:17,921 DEBUG [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-4:null) (logid:96eddfd2)
<pool type='rbd'>
  <name>b90eae9d-973c-362c-8afc-af88f0743892</name>
  <uuid>b90eae9d-973c-362c-8afc-af88f0743892</uuid>
  <source>
    <host name='10.29.44.1' port='6789'/>
    <name>cloudstack</name>
    <auth username='cloudstack' type='ceph'>
      <secret uuid='b90eae9d-973c-362c-8afc-af88f0743892'/>
    </auth>
  </source>
</pool>

2021-12-06 17:54:39,461 DEBUG [kvm.resource.LibvirtComputingResource] (UgentTask-5:null) (logid:) Executing: /usr/share/cloudstack-common/scripts/vm/network/security_group.py get_rule_logs_for_vms
2021-12-06 17:54:39,463 DEBUG [kvm.resource.LibvirtComputingResource] (UgentTask-5:null) (logid:) Executing while with timeout : 1800000
2021-12-06 17:54:39,534 DEBUG [kvm.resource.LibvirtComputingResource] (UgentTask-5:null) (logid:) Execution is successful.
2021-12-06 17:54:39,535 DEBUG [kvm.resource.LibvirtConnection] (UgentTask-5:null) (logid:) Looking for libvirtd connection at: qemu:///system
2021-12-06 17:54:39,551 DEBUG [cloud.agent.Agent] (UgentTask-5:null) (logid:) Sending ping: Seq 15-6: { Cmd , MgmtId: -1, via: 15, Ver: v1, Flags: 11, [{"com.cloud.agent.api.PingRoutingWithNwGroupsCommand":{"newGroupStates":{},"_hostVmStateReport":{"v-255-VM":{"state":"PowerOn","host":"whdckvm002.cn.prod"},"v-249-VM":{"state":"PowerOn","host":"whdckvm002.cn.prod"},"s-250-VM":{"state":"PowerOn","host":"whdckvm002.cn.prod"},"r-254-VM":{"state":"PowerOn","host":"whdckvm002.cn.prod"}},"_gatewayAccessible":"true","_vnetAccessible":"true","hostType":"Routing","hostId":"15","wait":"0","bypassHostMaintenance":"false"}}] }
2021-12-06 17:54:39,620 DEBUG [cloud.agent.Agent] (Agent-Handler-2:null) (logid:) Received response: Seq 15-6: { Ans: , MgmtId: 345052215515, via: 15, Ver: v1, Flags: 100010, [{"com.cloud.agent.api.PingAnswer":{"_command":{"hostType":"Routing","hostId":"15","wait":"0","bypassHostMaintenance":"false"},"result":"true","wait":"0","bypassHostMaintenance":"false"}}] }
2021-12-06 17:54:47,958 ERROR [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-4:null) (logid:96eddfd2) Failed to create RBD storage pool: org.libvirt.LibvirtException: failed to create the RBD IoCTX. Does the pool 'cloudstack' exist?: No such file or directory
2021-12-06 17:54:47,959 ERROR [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-4:null) (logid:96eddfd2) Failed to create the RBD storage pool, cleaning up the libvirt secret
2021-12-06 17:54:47,961 WARN [cloud.agent.Agent] (agentRequest-Handler-4:null) (logid:96eddfd2) Caught: com.cloud.utils.exception.CloudRuntimeException: Failed to create storage pool: b90eae9d-973c-362c-8afc-af88f0743892
    at com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:645)
    at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:329)
    at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:323)
    at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:42)
    at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:35)
    at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
    at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1648)
    at com.cloud.agent.Agent.processRequest(Agent.java:661)
    at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1079)
    at com.cloud.utils.nio.Task.call(Task.java:83)
    at com.cloud.utils.nio.Task.call(Task.java:29)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
2021-12-06 17:54:47,966 DEBUG [cloud.agent.Agent] (agentRequest-Handler-4:null) (logid:96eddfd2) Seq 15-6627046851675684885: { Ans: , MgmtId: 345052215515, via: 15, Ver: v1, Flags: 10, [{"com.cloud.agent.api.Answer":{"result":"false","details":"com.cloud.utils.exception.CloudRuntimeException: Failed to create storage pool: b90eae9d-973c-362c-8afc-af88f0743892 (same stack trace as above)","wait":"0","bypassHostMaintenance":"false"}}] }
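As a basic sanity test of the same credentials from the KVM host, I think something like this sketch could be run (it assumes the ceph/rbd CLI tools are installed on the KVM node; the monitor, user and keyring values are the ones from above):

```shell
#!/bin/sh
# Sketch: check the 'cloudstack' pool and credentials from the KVM host,
# using the same monitor, user and keyring the agent would use.
# Assumes the ceph/rbd CLI tools are installed; prints a notice otherwise.
MON="10.29.44.1:6789"
POOL="cloudstack"
ID="cloudstack"
KEYRING="/etc/ceph/ceph.client.cloudstack.keyring"

if command -v rbd >/dev/null 2>&1; then
    # List all pools visible to client.cloudstack; 'cloudstack' must appear.
    ceph -m "$MON" --id "$ID" --keyring "$KEYRING" osd lspools
    # List images in the pool with the same credentials the agent would use.
    rbd -m "$MON" --id "$ID" --keyring "$KEYRING" ls "$POOL"
else
    echo "ceph/rbd CLI tools are not installed on this host"
fi
```

My understanding is that if both commands succeed as client.cloudstack, the pool and the Ceph credentials themselves should be fine, and the problem is more likely on the libvirt/CloudStack side.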
2021-12-06 17:54:52,709 DEBUG [kvm.resource.LibvirtConnection] (Thread-6:null) (logid:) Looking for libvirtd connection at: qemu:///system
2021-12-06 17:54:52,725 DEBUG [kvm.resource.KVMHAMonitor] (Thread-6:null) (logid:) Found NFS storage pool c8e9ca6a-c004-3851-a074-19f4948b28ff in libvirt, continuing
2021-12-06 17:54:52,725 DEBUG [kvm.resource.KVMHAMonitor] (Thread-6:null) (logid:) Executing: /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/kvmheartbeat.sh -i 10.26.246.6 -p /kvm-data -m /mnt/c8e9ca6a-c004-3851-a074-19f4948b28ff -h 10.26.246.6
2021-12-06 17:54:52,726 DEBUG [kvm.resource.KVMHAMonitor] (Thread-6:null) (logid:) Executing while with timeout : 60000
2021-12-06 17:54:52,737 DEBUG [kvm.resource.KVMHAMonitor] (Thread-6:null) (logid:) Execution is successful.

My questions:

1. Can you give me some test scripts or methods to verify that the Ceph RBD storage is working from the CloudStack side?
2. Could you please give some advice on how to handle this problem?

If you need more info, please contact me.

--
缘来是你。
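P.S. To take CloudStack out of the picture, a hand-built libvirt pool with the same parameters could be tried directly with virsh. This is only a sketch (the pool name 'rbdtest' and the /tmp file paths are my own choices; virsh and uuidgen are assumed to be available, and it skips gracefully if virsh is missing):

```shell
#!/bin/sh
# Sketch: reproduce the agent's RBD pool creation by hand with virsh,
# using the monitor, pool, user and base64 secret from this thread.
MON_HOST="10.29.44.1"
MON_PORT="6789"
CEPH_POOL="cloudstack"
CEPH_USER="cloudstack"
CEPH_KEY="AQDLxqlhIdOLJRAABPqps8O6eSGbFnyR7aSJwQ=="

if command -v virsh >/dev/null 2>&1; then
    UUID=$(uuidgen)
    # Define a ceph-type libvirt secret and load the base64 key into it.
    cat > /tmp/ceph-secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>$UUID</uuid>
  <usage type='ceph'>
    <name>client.$CEPH_USER secret</name>
  </usage>
</secret>
EOF
    virsh secret-define /tmp/ceph-secret.xml
    virsh secret-set-value --secret "$UUID" --base64 "$CEPH_KEY"
    # Define an RBD pool pointing at the same monitor/pool/user.
    cat > /tmp/ceph-pool.xml <<EOF
<pool type='rbd'>
  <name>rbdtest</name>
  <source>
    <host name='$MON_HOST' port='$MON_PORT'/>
    <name>$CEPH_POOL</name>
    <auth username='$CEPH_USER' type='ceph'>
      <secret uuid='$UUID'/>
    </auth>
  </source>
</pool>
EOF
    virsh pool-define /tmp/ceph-pool.xml
    virsh pool-start rbdtest   # should fail here with the same IoCTX error if libvirt is the problem
else
    echo "virsh is not installed on this host"
fi
```

If this manual pool also fails with "failed to create the RBD IoCTX", the issue is between libvirt/librbd and Ceph rather than in CloudStack itself.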