Port TCP/3300 was blocked on my firewall. As this is my first time using
Ceph, I thought only port TCP/6789 was used by the monitors. Thanks,
Jayanth, for pointing that out.
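
For anyone else who hits this: the Ceph MONs listen on both the v2
messenger port (TCP/3300) and the legacy v1 port (TCP/6789), and both
should be reachable from the KVM hosts. A minimal sketch of what to open,
assuming firewalld is in use (adjust for your own firewall):

firewall-cmd --permanent --add-port=3300/tcp
firewall-cmd --permanent --add-port=6789/tcp
firewall-cmd --reload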

Also thanks to Wido for clarifying the trailing comma. Great work on the
RBD integration for CloudStack.

I suppose this is resolved. Thanks to all.
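
For anyone who wants to test an RBD pool directly against libvirt, as
Rohit suggested, here is a rough sketch of the manual steps as I
understand them (the UUID is a placeholder, and the key comes from
'ceph auth get-key client.cloudstack'):

# secret.xml
<secret ephemeral='no' private='no'>
  <uuid>35a8e6b5-0000-0000-0000-000000000000</uuid>
  <usage type='ceph'>
    <name>client.cloudstack secret</name>
  </usage>
</secret>

virsh secret-define secret.xml
virsh secret-set-value --secret 35a8e6b5-0000-0000-0000-000000000000 \
  --base64 <key>

# pool.xml
<pool type='rbd'>
  <name>acs_primary_1</name>
  <source>
    <name>acs_primary_1</name>
    <host name='10.0.32.71' port='6789'/>
    <auth username='cloudstack' type='ceph'>
      <secret uuid='35a8e6b5-0000-0000-0000-000000000000'/>
    </auth>
  </source>
</pool>

virsh pool-define pool.xml
virsh pool-start acs_primary_1
virsh pool-info acs_primary_1

If pool-start succeeds, the same monitors, pool, and credentials should
work from CloudStack.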

On Thu, 8 Aug 2024, 12:42 Wido den Hollander, <w...@widodh.nl> wrote:

>
>
> On 08/08/2024 at 02:27, Muhammad Hanis Irfan Mohd Zaid wrote:
> > I'm running Ceph 18.2.4 reef (stable).
> >
> > Can you kindly share any reference on directly adding the pool to KVM?
> > I'm not really experienced in KVM and Ceph, just experimenting right now.
> > Yep, the KVM host can ping and telnet to the Ceph MONs on port 6789.
> >
> > Here's the pool and user list from Ceph:
> >
> > # ceph osd pool ls detail
> > pool 2 'acs_primary_1' replicated size 3 min_size 2 crush_rule 0
> > object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change
> 183
> > lfor 0/0/101 flags hashpspool,selfmanaged_snaps max_bytes 5497558138880
> > stripe_width 0 application rbd read_balance_score 2.19
> >
> > # ceph auth ls
> > client.cloudstack
> >          key: <REDACTED>
> >          caps: [mon] profile rbd
> >          caps: [osd] profile rbd pool=acs_primary_1
>
> Have you double-checked the pool name you provide to CloudStack? It
> isn't 'rbd' instead of 'acs_primary_1' by any chance?
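>
> A quick way to verify the pool name and credentials from the KVM host
> (a sketch; assuming ceph-common is installed, and the keyring path is
> just an example):
>
> rbd -m 10.0.32.71 --id cloudstack \
>   --keyring /etc/ceph/ceph.client.cloudstack.keyring ls acs_primary_1
>
> If that lists the images in 'acs_primary_1', the auth caps and pool
> name are fine.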
>
> >
> > I've tried adding the pool back, and it looks like the same error persists:
> >
> > 2024-08-08 08:14:45,462 INFO  [kvm.storage.LibvirtStorageAdaptor]
> > (agentRequest-Handler-4:null) (logid:9a5c88d1) Attempting to create
> storage
> > pool eb5ec036-c08a-3d3d-996d-40968077d391 (RBD) in libvirt
> > 2024-08-08 08:14:45,482 WARN  [kvm.storage.LibvirtStorageAdaptor]
> > (agentRequest-Handler-4:null) (logid:9a5c88d1) Storage pool
> > eb5ec036-c08a-3d3d-996d-40968077d391 was not found running in libvirt.
> > Need to create it.
> > 2024-08-08 08:14:45,482 INFO  [kvm.storage.LibvirtStorageAdaptor]
> > (agentRequest-Handler-4:null) (logid:9a5c88d1) Didn't find an existing
> > storage pool eb5ec036-c08a-3d3d-996d-40968077d391 by UUID, checking for
> > pools with duplicate paths
> > 2024-08-08 08:19:45,521 ERROR [kvm.storage.LibvirtStorageAdaptor]
> > (agentRequest-Handler-4:null) (logid:9a5c88d1) Failed to create RBD
> storage
> > pool: org.libvirt.LibvirtException: failed to connect to the RADOS
> monitor
> > on: 10.0.32.71,10.0.32.72,10.0.32.73,10.0.32.74,10.0.32.75,: No such file
> > or directory
> > 2024-08-08 08:19:45,522 ERROR [kvm.storage.LibvirtStorageAdaptor]
> > (agentRequest-Handler-4:null) (logid:9a5c88d1) Failed to create the RBD
> > storage pool, cleaning up the libvirt secret
> > 2024-08-08 08:19:45,523 WARN  [cloud.agent.Agent]
> > (agentRequest-Handler-4:null) (logid:9a5c88d1) Caught:
> > com.cloud.utils.exception.CloudRuntimeException: Failed to create storage
> > pool: eb5ec036-c08a-3d3d-996d-40968077d391
> >          at
> >
> com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:743)
> >          at
> >
> com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:364)
> >          at
> >
> com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:358)
> >          at
> >
> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:42)
> >          at
> >
> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:35)
> >          at
> >
> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
> >          at
> >
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1929)
> >          at com.cloud.agent.Agent.processRequest(Agent.java:683)
> >          at
> com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1106)
> >          at com.cloud.utils.nio.Task.call(Task.java:83)
> >          at com.cloud.utils.nio.Task.call(Task.java:29)
> >          at
> > java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> >          at
> >
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> >          at
> >
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> >          at java.base/java.lang.Thread.run(Thread.java:829)
> >
> > Is the comma (,) at the end of the monitor list in this log supposed to
> > be there? I can assure you that I don't add any trailing comma in the UI
> > when adding the storage pool.
> >
>
> That comma is normal! (I wrote that piece of code)
>
> Wido
>
> > https://imgur.com/a/eN45YWa
> >
> > Thanks.
> >
> > On Wed, 7 Aug 2024 at 20:39, Rohit Yadav <rohit.ya...@shapeblue.com>
> wrote:
> >
> >> Based on the logs, the error is due to some kind of rbd pool
> >> configuration. Can you try to add rbd pool on the kvm directly, see if
> it
> >> works? You can also try to check if from the KVM host you can reach your
> >> ceph nodes/mons?
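> >>
> >> For example, from the KVM host (IPs are illustrative; it's worth
> >> checking every MON on both the v1 port 6789 and the v2 port 3300):
> >>
> >> ping -c 3 10.0.32.71
> >> nc -zv 10.0.32.71 6789
> >> nc -zv 10.0.32.71 3300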
> >>
> >> Regards.
> >>
> >>
> >>
> >>
> >>
> >> ------------------------------
> >> *From:* Wei ZHOU <ustcweiz...@gmail.com>
> >> *Sent:* Wednesday, August 7, 2024 5:45:34 PM
> >> *To:* Muhammad Hanis Irfan Mohd Zaid <hanisirfan.w...@gmail.com>
> >> *Cc:* users@cloudstack.apache.org <users@cloudstack.apache.org>
> >> *Subject:* Re: Unable to add Ceph RBD for primary storage (No such file
> >> or directory)
> >>
> >> Hi,
> >>
> >> I just tested adding a Ceph pool on Alma 9 (it has the same
> >> package/version installed), and it worked.
> >>
> >> What's the Ceph version?
> >>
> >> -Wei
> >>
> >> On Wed, Aug 7, 2024 at 12:11 PM Muhammad Hanis Irfan Mohd Zaid
> >> <hanisirfan.w...@gmail.com> wrote:
> >>>
> >>> I also noticed that comma at the end. I turned a blind eye and assumed
> >>> it was by design. I don't enter any comma at the end:
> >>> https://ibb.co/N3zMVvc
> >>>
> >>> Yep, the package is already installed.
> >>>
> >>> # dnf -y install libvirt-daemon-driver-storage-rbd
> >>> Last metadata expiration check: 1:29:57 ago on Wed 07 Aug 2024 04:37:02
> >> PM +08.
> >>> Package libvirt-daemon-driver-storage-rbd-10.0.0-6.6.el9_4.x86_64 is
> >> already installed.
> >>> Dependencies resolved.
> >>> Nothing to do.
> >>> Complete!
> >>>
> >>> I'm running the KVM host on Rocky Linux 9.4 (Blue Onyx) with CloudStack
> >>> 4.19.1.1.
> >>>
> >>>
> >>>
> >>> On Wed, 7 Aug 2024 at 18:02, Wei ZHOU <ustcweiz...@gmail.com> wrote:
> >>>>
> >>>> Hi,
> >>>>
> >>>> There is a comma (,) after 10.0.32.75. Was it a mistake?
> >>>>
> >>>> org.libvirt.LibvirtException: failed to connect to the RADOS monitor
> >>>> on: 10.0.32.71,10.0.32.72,10.0.32.73,10.0.32.74,10.0.32.75,: No such
> >> file
> >>>> or directory
> >>>>
> >>>> Have you installed the package "libvirt-daemon-driver-storage-rbd" on
> >>>> the KVM host?
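> >>>>
> >>>> (You can verify with e.g. "rpm -q libvirt-daemon-driver-storage-rbd";
> >>>> if it was missing, restart libvirtd after installing it.)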
> >>>>
> >>>> -Wei
> >>>>
> >>>> On Wed, Aug 7, 2024 at 11:27 AM Muhammad Hanis Irfan Mohd Zaid
> >>>> <hanisirfan.w...@gmail.com> wrote:
> >>>>>
> >>>>> I'm trying to add a Ceph RBD pool for primary storage. I have 5 Ceph
> >>>>> MONs in my POC lab. Ping and telnet to all the Ceph MONs on port 6789
> >>>>> work.
> >>>>>
> >>>>> I'm following the steps from these guides:
> >>>>> - https://docs.ceph.com/en/reef/rbd/rbd-cloudstack/
> >>>>> - https://rohityadav.cloud/blog/ceph/
> >>>>>
> >>>>> Agent log when specifying 5 monitors:
> >>>>> 2024-08-07 17:12:34,691 INFO  [kvm.storage.LibvirtStorageAdaptor]
> >>>>> (agentRequest-Handler-3:null) (logid:db5277f2) Attempting to create
> >> storage
> >>>>> pool eb5ec036-c08a-3d3d-996d-40968077d391 (RBD) in libvirt
> >>>>> 2024-08-07 17:12:34,706 WARN  [kvm.storage.LibvirtStorageAdaptor]
> >>>>> (agentRequest-Handler-3:null) (logid:db5277f2) Storage pool
> >>>>> eb5ec036-c08a-3d3d-996d-40968077d391 was not found running in
> >> libvirt. Need
> >>>>> to create it.
> >>>>> 2024-08-07 17:12:34,706 INFO  [kvm.storage.LibvirtStorageAdaptor]
> >>>>> (agentRequest-Handler-3:null) (logid:db5277f2) Didn't find an
> existing
> >>>>> storage pool eb5ec036-c08a-3d3d-996d-40968077d391 by UUID, checking
> >> for
> >>>>> pools with duplicate paths
> >>>>> 2024-08-07 17:17:34,738 ERROR [kvm.storage.LibvirtStorageAdaptor]
> >>>>> (agentRequest-Handler-3:null) (logid:db5277f2) Failed to create RBD
> >> storage
> >>>>> pool: org.libvirt.LibvirtException: failed to connect to the RADOS
> >> monitor
> >>>>> on: 10.0.32.71,10.0.32.72,10.0.32.73,10.0.32.74,10.0.32.75,: No such
> >> file
> >>>>> or directory
> >>>>> 2024-08-07 17:17:34,739 ERROR [kvm.storage.LibvirtStorageAdaptor]
> >>>>> (agentRequest-Handler-3:null) (logid:db5277f2) Failed to create the
> >> RBD
> >>>>> storage pool, cleaning up the libvirt secret
> >>>>> 2024-08-07 17:17:34,739 WARN  [cloud.agent.Agent]
> >>>>> (agentRequest-Handler-3:null) (logid:db5277f2) Caught:
> >>>>> com.cloud.utils.exception.CloudRuntimeException: Failed to create
> >> storage
> >>>>> pool: eb5ec036-c08a-3d3d-996d-40968077d391
> >>>>> at
> >>>>>
> >>
> com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:743)
> >>>>> at
> >>>>>
> >>
> com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:364)
> >>>>> at
> >>>>>
> >>
> com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:358)
> >>>>> at
> >>>>>
> >>
> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:42)
> >>>>> at
> >>>>>
> >>
> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:35)
> >>>>> at
> >>>>>
> >>
> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
> >>>>> at
> >>>>>
> >>
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1929)
> >>>>> at com.cloud.agent.Agent.processRequest(Agent.java:683)
> >>>>> at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1106)
> >>>>> at com.cloud.utils.nio.Task.call(Task.java:83)
> >>>>> at com.cloud.utils.nio.Task.call(Task.java:29)
> >>>>> at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> >>>>> at
> >>>>>
> >>
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> >>>>> at
> >>>>>
> >>
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> >>>>> at java.base/java.lang.Thread.run(Thread.java:829)
> >>>>>
> >>>>> Agent log when specifying 1 monitor:
> >>>>> 2024-08-07 17:06:09,791 INFO  [kvm.storage.LibvirtStorageAdaptor]
> >>>>> (agentRequest-Handler-2:null) (logid:c790784b) Attempting to create
> >> storage
> >>>>> pool 69b2f6e0-12c8-31a3-bdc6-71b3a1e265f2 (RBD) in libvirt
> >>>>> 2024-08-07 17:06:09,806 WARN  [kvm.storage.LibvirtStorageAdaptor]
> >>>>> (agentRequest-Handler-2:null) (logid:c790784b) Storage pool
> >>>>> 69b2f6e0-12c8-31a3-bdc6-71b3a1e265f2 was not found running in
> >> libvirt. Need
> >>>>> to create it.
> >>>>> 2024-08-07 17:06:09,806 INFO  [kvm.storage.LibvirtStorageAdaptor]
> >>>>> (agentRequest-Handler-2:null) (logid:c790784b) Didn't find an
> existing
> >>>>> storage pool 69b2f6e0-12c8-31a3-bdc6-71b3a1e265f2 by UUID, checking
> >> for
> >>>>> pools with duplicate paths
> >>>>> 2024-08-07 17:11:09,840 ERROR [kvm.storage.LibvirtStorageAdaptor]
> >>>>> (agentRequest-Handler-2:null) (logid:c790784b) Failed to create RBD
> >> storage
> >>>>> pool: org.libvirt.LibvirtException: failed to connect to the RADOS
> >> monitor
> >>>>> on: 10.0.32.71,: No such file or directory
> >>>>> 2024-08-07 17:11:09,840 ERROR [kvm.storage.LibvirtStorageAdaptor]
> >>>>> (agentRequest-Handler-2:null) (logid:c790784b) Failed to create the
> >> RBD
> >>>>> storage pool, cleaning up the libvirt secret
> >>>>> 2024-08-07 17:11:09,841 WARN  [cloud.agent.Agent]
> >>>>> (agentRequest-Handler-2:null) (logid:c790784b) Caught:
> >>>>> com.cloud.utils.exception.CloudRuntimeException: Failed to create
> >> storage
> >>>>> pool: 69b2f6e0-12c8-31a3-bdc6-71b3a1e265f2
> >>>>> at
> >>>>>
> >>
> com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:743)
> >>>>> at
> >>>>>
> >>
> com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:364)
> >>>>> at
> >>>>>
> >>
> com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:358)
> >>>>> at
> >>>>>
> >>
> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:42)
> >>>>> at
> >>>>>
> >>
> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:35)
> >>>>> at
> >>>>>
> >>
> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
> >>>>> at
> >>>>>
> >>
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1929)
> >>>>> at com.cloud.agent.Agent.processRequest(Agent.java:683)
> >>>>> at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1106)
> >>>>> at com.cloud.utils.nio.Task.call(Task.java:83)
> >>>>> at com.cloud.utils.nio.Task.call(Task.java:29)
> >>>>> at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> >>>>> at
> >>>>>
> >>
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> >>>>> at
> >>>>>
> >>
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> >>>>> at java.base/java.lang.Thread.run(Thread.java:829)
> >>>>>
> >>>>> I ran these commands on my Ceph node, but still no luck. I referred to this:
> >>>>> - https://github.com/apache/cloudstack/issues/5741
> >>>>>
> >>>>> ceph config set mon auth_expose_insecure_global_id_reclaim false
> >>>>> ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed
> >> false
> >>>>> ceph config set mon auth_allow_insecure_global_id_reclaim false
> >>>>> ceph orch restart mon
> >>>>>
> >>>>> Thanks for the help :)
> >>
> >
>
