Hi Wido,

Thank you for the quick response! The output of $ ceph df on the admin node is:

[root@cephnode1 ~]# ceph df
--- RAW STORAGE ---
CLASS    SIZE   AVAIL     USED  RAW USED  %RAW USED
hdd    72 GiB  72 GiB  118 MiB   118 MiB       0.16
TOTAL  72 GiB  72 GiB  118 MiB   118 MiB       0.16

--- POOLS ---
POOL                   ID  PGS   STORED  OBJECTS    USED  %USED  MAX AVAIL
device_health_metrics   1    1   31 KiB       12  94 KiB      0     23 GiB
cloudstack              2   32  2.9 KiB        5  43 KiB      0     23 GiB
MeinPool                3   32      0 B        0     0 B      0     23 GiB

Before setting up round-robin DNS, I tried using the IPs of the other two Ceph 
monitors (192.168.1.5 and 192.168.1.6), but I get the same error.
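
As a sanity check from the KVM host, the pool can also be listed directly with 
rbd. A sketch, assuming the keyring was copied to 
/etc/ceph/ceph.client.cloudstack.keyring on the KVM host:

# run on the KVM host; any monitor IP should behave the same
rbd -m 192.168.1.4 --id cloudstack \
    --keyring /etc/ceph/ceph.client.cloudstack.keyring \
    ls cloudstack

If this returns without error, basic connectivity and the client caps are fine.
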
Furthermore, I changed the log level on my KVM node to debug. Output:

2021-09-01 10:35:54,698 DEBUG [cloud.agent.Agent] (agentRequest-Handler-2:null) 
(logid:3c2b5d3a) Processing command: 
com.cloud.agent.api.ModifyStoragePoolCommand
2021-09-01 10:35:54,698 INFO  [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-2:null) (logid:3c2b5d3a) Attempting to create storage 
pool fc6d0942-21ac-3cd1-b9f3-9e158cf4d75d (RBD) in libvirt
2021-09-01 10:35:54,698 DEBUG [kvm.resource.LibvirtConnection] 
(agentRequest-Handler-2:null) (logid:3c2b5d3a) Looking for libvirtd connection 
at: qemu:///system
2021-09-01 10:35:54,699 WARN  [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-2:null) (logid:3c2b5d3a) Storage pool 
fc6d0942-21ac-3cd1-b9f3-9e158cf4d75d was not found running in libvirt. Need to 
create it.
2021-09-01 10:35:54,699 INFO  [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-2:null) (logid:3c2b5d3a) Didn't find an existing storage 
pool fc6d0942-21ac-3cd1-b9f3-9e158cf4d75d by UUID, checking for pools with 
duplicate paths
2021-09-01 10:35:54,699 DEBUG [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-2:null) (logid:3c2b5d3a) Checking path of existing pool 
84aa6a27-0413-39ad-87ca-5e08078b9b84 against pool we want to create
2021-09-01 10:35:54,701 DEBUG [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-2:null) (logid:3c2b5d3a) Checking path of existing pool 
3f5b0819-232c-45cf-b533-4780f4e0f540 against pool we want to create
2021-09-01 10:35:54,705 DEBUG [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-2:null) (logid:3c2b5d3a) Attempting to create storage 
pool fc6d0942-21ac-3cd1-b9f3-9e158cf4d75d
2021-09-01 10:35:54,705 DEBUG [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-2:null) (logid:3c2b5d3a) <secret ephemeral='no' 
private='no'>
<uuid>fc6d0942-21ac-3cd1-b9f3-9e158cf4d75d</uuid>
<usage type='ceph'>
<name>cloudstack@192.168.1.4:6789/cloudstack</name>
</usage>
</secret>

2021-09-01 10:35:54,709 DEBUG [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-2:null) (logid:3c2b5d3a) <pool type='rbd'>
<name>fc6d0942-21ac-3cd1-b9f3-9e158cf4d75d</name>
<uuid>fc6d0942-21ac-3cd1-b9f3-9e158cf4d75d</uuid>
<source>
<host name='192.168.1.4' port='6789'/>
<name>cloudstack</name>
<auth username='cloudstack' type='ceph'>
<secret uuid='fc6d0942-21ac-3cd1-b9f3-9e158cf4d75d'/>
</auth>
</source>
</pool>

2021-09-01 10:36:05,821 DEBUG [kvm.resource.LibvirtConnection] (Thread-58:null) 
(logid:) Looking for libvirtd connection at: qemu:///system
2021-09-01 10:36:05,824 DEBUG [kvm.resource.KVMHAMonitor] (Thread-58:null) 
(logid:) Found NFS storage pool 84aa6a27-0413-39ad-87ca-5e08078b9b84 in 
libvirt, continuing
2021-09-01 10:36:05,824 DEBUG [kvm.resource.KVMHAMonitor] (Thread-58:null) 
(logid:) Executing: 
/usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/kvmheartbeat.sh -i 
192.168.1.149 -p /export/primary -m /mnt/84aa6a27-0413-39ad-87ca-5e08078b9b84 
-h 192.168.1.106
2021-09-01 10:36:05,825 DEBUG [kvm.resource.KVMHAMonitor] (Thread-58:null) 
(logid:) Executing while with timeout : 60000
2021-09-01 10:36:05,837 DEBUG [kvm.resource.KVMHAMonitor] (Thread-58:null) 
(logid:) Execution is successful.
2021-09-01 10:36:06,115 DEBUG [kvm.resource.LibvirtComputingResource] 
(UgentTask-5:null) (logid:) Executing: 
/usr/share/cloudstack-common/scripts/vm/network/security_group.py 
get_rule_logs_for_vms
2021-09-01 10:36:06,116 DEBUG [kvm.resource.LibvirtComputingResource] 
(UgentTask-5:null) (logid:) Executing while with timeout : 1800000
2021-09-01 10:36:06,277 DEBUG [kvm.resource.LibvirtComputingResource] 
(UgentTask-5:null) (logid:) Execution is successful.
2021-09-01 10:36:06,278 DEBUG [kvm.resource.LibvirtConnection] 
(UgentTask-5:null) (logid:) Looking for libvirtd connection at: qemu:///system
2021-09-01 10:36:06,300 DEBUG [cloud.agent.Agent] (UgentTask-5:null) (logid:) 
Sending ping: Seq 1-57:  { Cmd , MgmtId: -1, via: 1, Ver: v1, Flags: 11, 
[{"com.cloud.agent.api.PingRoutingWithNwGroupsCommand":{"newGroupStates":{},"_hostVmStateReport":{"s-54-VM":{"state":"PowerOn","host":"virthost2"},"v-1-VM":{"state":"PowerOn","host":"virthost2"}},"_gatewayAccessible":"true","_vnetAccessible":"true","hostType":"Routing","hostId":"1","wait":"0","bypassHostMaintenance":"false"}}]
 }
2021-09-01 10:36:06,321 DEBUG [cloud.agent.Agent] (Agent-Handler-5:null) 
(logid:6b2e7694) Received response: Seq 1-57:  { Ans: , MgmtId: 8796751976908, 
via: 1, Ver: v1, Flags: 100010, 
[{"com.cloud.agent.api.PingAnswer":{"_command":{"hostType":"Routing","hostId":"1","wait":"0","bypassHostMaintenance":"false"},"result":"true","wait":"0","bypassHostMaintenance":"false"}}]
 }
2021-09-01 10:36:07,367 DEBUG [cloud.agent.Agent] (agentRequest-Handler-5:null) 
(logid:53534b87) Processing command: com.cloud.agent.api.GetHostStatsCommand
2021-09-01 10:36:09,300 DEBUG [cloud.agent.Agent] (agentRequest-Handler-1:null) 
(logid:2d85c92e) Processing command: com.cloud.agent.api.GetStorageStatsCommand
2021-09-01 10:36:09,300 INFO  [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-1:null) (logid:2d85c92e) Trying to fetch storage pool 
84aa6a27-0413-39ad-87ca-5e08078b9b84 from libvirt
2021-09-01 10:36:09,300 DEBUG [kvm.resource.LibvirtConnection] 
(agentRequest-Handler-1:null) (logid:2d85c92e) Looking for libvirtd connection 
at: qemu:///system
2021-09-01 10:36:09,303 INFO  [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-1:null) (logid:2d85c92e) Asking libvirt to refresh 
storage pool 84aa6a27-0413-39ad-87ca-5e08078b9b84
2021-09-01 10:36:09,321 DEBUG [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-1:null) (logid:2d85c92e) Succesfully refreshed pool 
84aa6a27-0413-39ad-87ca-5e08078b9b84 Capacity: (26.84 GB) 28818669568 Used: 
(3.49 GB) 3750887424 Available: (23.35 GB) 25067782144
2021-09-01 10:36:24,771 ERROR [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-2:null) (logid:3c2b5d3a) Failed to create RBD storage 
pool: org.libvirt.LibvirtException: failed to create the RBD IoCTX. Does the 
pool 'cloudstack' exist?: No such file or directory
2021-09-01 10:36:24,771 ERROR [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-2:null) (logid:3c2b5d3a) Failed to create the RBD storage 
pool, cleaning up the libvirt secret
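
Since ceph df above shows that the 'cloudstack' pool exists, the secret and 
pool from the XML dumps above could also be defined by hand with virsh to take 
CloudStack out of the picture. A sketch, assuming the two XML snippets are 
saved as secret.xml and pool.xml on the KVM host:

virsh secret-define secret.xml
# ceph auth get-key prints the (already base64-encoded) key; run it where the
# admin keyring is available, or paste the key in by hand
virsh secret-set-value --secret fc6d0942-21ac-3cd1-b9f3-9e158cf4d75d \
    --base64 "$(ceph auth get-key client.cloudstack)"
virsh pool-define pool.xml
virsh pool-start fc6d0942-21ac-3cd1-b9f3-9e158cf4d75d

If pool-start fails with the same IoCTX error, the problem sits between libvirt 
and Ceph rather than in CloudStack.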

I get the same output when I use the IPs of the other monitor nodes. My test 
environment consists of VMs running CentOS 8.
Are there any other software requirements on the Ceph admin/monitor nodes for 
connecting to CloudStack (e.g. QEMU)? librbd and librados are installed on the 
Ceph admin node:

[root@cephnode1 ceph]# dnf list installed | grep librbd
librbd1.x86_64                        2:16.2.5-0.el8                            
@Ceph

[root@cephnode1 ceph]# dnf list installed | grep librados
librados2.x86_64                      2:16.2.5-0.el8                            
@Ceph
libradosstriper1.x86_64               2:16.2.5-0.el8                            
@Ceph
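
Note that the RBD libraries presumably matter on the KVM host as well, since 
it is libvirt on the hypervisor that opens the IoCTX, not the admin node. A 
quick check there, with package names assumed for CentOS 8 (the 
advanced-virtualization module may be needed for some of them):

dnf list installed | grep librbd              # librbd1 on the KVM host
rpm -q libvirt-daemon-driver-storage-rbd      # libvirt's RBD storage backend
rpm -q qemu-kvm-block-rbd                     # QEMU RBD block driver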

Many thanks

Mevludin

On 2021/08/31 11:02:20, Mevludin Blazevic <[email protected]> wrote: 
> Hi all,
> 
> I am trying to add Ceph RBD (Pacific) as a new primary storage for my fresh 
> CloudStack 4.15.1 installation. I currently have an NFS server running as 
> primary storage; after connecting CloudStack with Ceph, I would then remove 
> the NFS server. Unfortunately, I run into the same problem no matter whether 
> I add the Ceph storage with cluster-wide or zone-wide scope. The output of 
> /cloudstack/agent/agent.log is as follows:
> 
> 2021-08-31 12:43:44,247 INFO  [kvm.storage.LibvirtStorageAdaptor] 
> (agentRequest-Handler-4:null) (logid:cb99bb9f) Asking libvirt to refresh 
> storage pool 84aa6a27-0413-39ad-87ca-5e08078b9b84
> 2021-08-31 12:44:40,699 INFO  [kvm.storage.LibvirtStorageAdaptor] 
> (agentRequest-Handler-5:null) (logid:cae1fff8) Attempting to create storage 
> pool fc6d0942-21ac-3cd1-b9f3-9e158cf4d75d (RBD) in libvirt
> 2021-08-31 12:44:40,701 WARN  [kvm.storage.LibvirtStorageAdaptor] 
> (agentRequest-Handler-5:null) (logid:cae1fff8) Storage pool 
> fc6d0942-21ac-3cd1-b9f3-9e158cf4d75d was not found running in libvirt. Need 
> to create it.
> 2021-08-31 12:44:40,701 INFO  [kvm.storage.LibvirtStorageAdaptor] 
> (agentRequest-Handler-5:null) (logid:cae1fff8) Didn't find an existing 
> storage pool fc6d0942-21ac-3cd1-b9f3-9e158cf4d75d by UUID, checking for pools 
> with duplicate paths
> 2021-08-31 12:44:44,286 INFO  [kvm.storage.LibvirtStorageAdaptor] 
> (agentRequest-Handler-3:null) (logid:725f7dcf) Trying to fetch storage pool 
> 84aa6a27-0413-39ad-87ca-5e08078b9b84 from libvirt
> 2021-08-31 12:44:44,290 INFO  [kvm.storage.LibvirtStorageAdaptor] 
> (agentRequest-Handler-3:null) (logid:725f7dcf) Asking libvirt to refresh 
> storage pool 84aa6a27-0413-39ad-87ca-5e08078b9b84
> 2021-08-31 12:45:10,780 ERROR [kvm.storage.LibvirtStorageAdaptor] 
> (agentRequest-Handler-5:null) (logid:cae1fff8) Failed to create RBD storage 
> pool: org.libvirt.LibvirtException: failed to create the RBD IoCTX. Does the 
> pool 'cloudstack' exist?: No such file or directory
> 2021-08-31 12:45:10,780 ERROR [kvm.storage.LibvirtStorageAdaptor] 
> (agentRequest-Handler-5:null) (logid:cae1fff8) Failed to create the RBD 
> storage pool, cleaning up the libvirt secret
> 2021-08-31 12:45:10,781 WARN  [cloud.agent.Agent] 
> (agentRequest-Handler-5:null) (logid:cae1fff8) Caught:
> com.cloud.utils.exception.CloudRuntimeException: Failed to create storage 
> pool: fc6d0942-21ac-3cd1-b9f3-9e158cf4d75d
>         at 
> com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:645)
>         at 
> com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:329)
>         at 
> com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:323)
>         at 
> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:42)
>         at 
> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:35)
>         at 
> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
>         at 
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1646)
>         at com.cloud.agent.Agent.processRequest(Agent.java:661)
>         at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1079)
>         at com.cloud.utils.nio.Task.call(Task.java:83)
>         at com.cloud.utils.nio.Task.call(Task.java:29)
>         at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>         at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>         at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>         at java.base/java.lang.Thread.run(Thread.java:829)
> 
> Details of my setup:
> - Ceph Pacific is installed and configured in a test environment. Cluster 
> health is OK.
> - The rbd pool and user were created as described in the Ceph docs: 
> https://docs.ceph.com/en/pacific/rbd/rbd-cloudstack/?highlight=cloudstack
> - The IP of my Ceph mon serving the rbd pool is 192.168.1.4; the firewall is 
> disabled there
> - I have also tried copying the keyring and ceph.conf from the monitor node 
> to the KVM machine (in the test environment I have only one KVM host), still 
> the same problem
> 
> Do you have any ideas how to resolve the problem?
> 
> Cheers,
> 
> Mevludin
> 
