[ceph-users] Re: High IO utilization for bstore_kv_sync

2024-02-22 Thread Work Ceph
…seeing time spent waiting on fdatasync in bstore_kv_sync if the drives you are using don't have power loss protection and can't perform flushes quickly. Some consumer grade drives are actually slower at this than HDDs. Mark
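A quick way to check whether the drives themselves are the bottleneck for these sync writes is a single-threaded, fdatasync-heavy fio run against the device backing the OSD DB/WAL. This is only a sketch: the test file path is a placeholder, and the test should never be pointed at a raw device that is in use.

```
# Minimal sketch: measure 4k sync-write latency on the device that backs
# the OSD DB/WAL. /mnt/test/fio.tmp is a hypothetical path on that device.
fio --name=sync-write-test \
    --filename=/mnt/test/fio.tmp \
    --size=1G \
    --rw=write --bs=4k \
    --iodepth=1 --numjobs=1 \
    --fdatasync=1 \
    --time_based --runtime=60
```

Drives without power loss protection often show fsync/fdatasync latencies in the millisecond range here, which is exactly what bstore_kv_sync ends up waiting on.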

[ceph-users] High IO utilization for bstore_kv_sync

2024-02-22 Thread Work Ceph
Hello guys, We are running Ceph Octopus on Ubuntu 18.04, and we are noticing spikes of IO utilization for the bstore_kv_sync thread during operations such as adding a new pool and increasing/reducing the number of PGs in a pool. Curiously, though, the IO utilization (reported with iotop) is …
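For reference, a sketch of how that per-thread IO can be watched on an OSD host, assuming iotop and pidstat are installed; `<osd-pid>` is a placeholder for the PID of the ceph-osd daemon in question.

```
# Batch mode, only threads actually doing IO, one sample every 5 seconds,
# filtered to the bstore_kv_sync thread.
iotop -b -o -d 5 | grep bstore_kv_sync

# Alternative: per-thread disk statistics for one specific OSD process.
pidstat -d -t -p <osd-pid> 5
```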

[ceph-users] Re: What does 'removed_snaps_queue' [d5~3] means?

2023-08-27 Thread Work Ceph
…list [1]. Regards, Eugen [1] https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/ZEMGKBLMEREBZB7SWOLDA6QZX3S7FLL3/#YAHVTTES6YU5IXZJ2UNXKURXSHM5HDEX Quoting Work Ceph: > Hello guys, We are facing/seeing an unexpected mar…

[ceph-users] What does 'removed_snaps_queue' [d5~3] means?

2023-08-25 Thread Work Ceph
Hello guys, We are facing/seeing an unexpected mark in one of our pools. Do you guys know what "removed_snaps_queue" means? We see some notation such as "d5~3" after this tag. What does that mean? We tried to look into the docs but could not find anything meaningful. We are running Ceph …
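For reference, the `start~length` notation is Ceph's interval-set format: `d5~3` denotes three consecutive snapshot IDs starting at 0xd5 (0xd5, 0xd6, 0xd7) that have been deleted but not yet fully trimmed by the OSDs. A sketch of where this shows up (the pool line below is illustrative, not real output):

```
# removed_snaps_queue is printed per pool; entries disappear once the
# OSDs finish trimming the deleted snapshots.
ceph osd pool ls detail
# pool 5 'rbd' replicated size 3 ... removed_snaps_queue [d5~3]
```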

[ceph-users] Re: Ceph iSCSI GW not working with VMware VMFS and Windows Clustered Storage Volumes (CSV)

2023-07-04 Thread Work Ceph
…2023 at 12:31 PM Work Ceph wrote: > Thanks for the help so far, guys! Has anybody gotten the default ceph-iscsi implementation to work with VMware and/or Windows CSV storage using a single target/portal in iSCSI? On Wed, Jun 21, 2023 at 6:02 AM M…

[ceph-users] Re: Ceph iSCSI GW is too slow when compared with Raw RBD performance

2023-07-04 Thread Work Ceph
Thank you to everyone who tried to help here. We discovered the issue, and it had nothing to do with Ceph or the iSCSI GW. The issue was caused by a switch that was acting as the "router" for the network of the iSCSI GW. All end clients (applications) were separated into different VLANs, and …

[ceph-users] Re: Ceph iSCSI GW not working with VMware VMFS and Windows Clustered Storage Volumes (CSV)

2023-06-23 Thread Work Ceph
Thanks for the help so far, guys! Has anybody gotten the default ceph-iscsi implementation to work with VMware and/or Windows CSV storage using a single target/portal in iSCSI? On Wed, Jun 21, 2023 at 6:02 AM Maged Mokhtar wrote: > On 20/06/2023 01:16, Work Ceph wrote: …

[ceph-users] Re: Ceph iSCSI GW is too slow when compared with Raw RBD performance

2023-06-23 Thread Work Ceph
…inct difference. On Jun 23, 2023, at 09:33, Work Ceph wrote: > Great question! Yes, one of the slow cases was detected in a Veeam setup. Have you experienced that before? On Fri, Jun 23, 2023 at 10:32 AM Anthony D'Atri wrote: …

[ceph-users] Ceph iSCSI GW is too slow when compared with Raw RBD performance

2023-06-22 Thread Work Ceph
Hello guys, We have a Ceph cluster that runs just fine with Ceph Octopus; we use RBD for some workloads, RadosGW (via S3) for others, and iSCSI for some Windows clients. We started noticing some unexpected performance issues with iSCSI. I mean, an SSD pool is reaching 100 MB/s of write speed for an …
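To separate the gateway from the cluster itself, it can help to benchmark the raw RBD path directly and compare it with what the initiator sees over iSCSI. A minimal sketch, with hypothetical pool/image names (the write test writes into the image, so use a scratch image):

```
# Sequential 4M writes straight to RBD, bypassing the iSCSI gateway.
rbd bench --io-type write --io-size 4M --io-threads 16 --io-total 4G \
    --io-pattern seq ssd-pool/scratch-image
```

The same image can then be measured from the iSCSI initiator side (fio, diskspd, etc.) and the two numbers compared to see how much the gateway itself costs.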

[ceph-users] Re: Ceph iSCSI GW not working with VMware VMFS and Windows Clustered Storage Volumes (CSV)

2023-06-19 Thread Work Ceph
…rmance implementation. We currently use Ceph 17.2.5. On 19/06/2023 14:47, Work Ceph wrote: > Hello guys, We have a Ceph cluster that runs just fine with Ceph Octopus; we use RBD for some workloads, RadosGW (via S3) for others, and iSCSI for some …

[ceph-users] Ceph iSCSI GW not working with VMware VMFS and Windows Clustered Storage Volumes (CSV)

2023-06-19 Thread Work Ceph
Hello guys, We have a Ceph cluster that runs just fine with Ceph Octopus; we use RBD for some workloads, RadosGW (via S3) for others, and iSCSI for some Windows clients. Recently, we needed to add some VMware clusters as clients for the iSCSI GW, and also Windows systems with the use of …
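For context, a rough sketch of the gwcli workflow for exporting an RBD image through the ceph-iscsi gateway is below. All names, IQNs and IPs are placeholders, and the exact syntax varies between ceph-iscsi versions, so it should be checked against the documentation for the installed release rather than taken verbatim.

```
gwcli
> cd /iscsi-targets
> create iqn.2003-01.com.redhat.iscsi-gw:ceph-igw
> cd /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-igw/gateways
> create ceph-gw-1 192.168.0.11
> create ceph-gw-2 192.168.0.12
> cd /disks
> create pool=rbd image=disk_1 size=90G
> cd /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-igw/hosts
> create iqn.1994-05.com.redhat:client1
> auth username=myiscsiusername password=myiscsipassword
> disk add rbd/disk_1
```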

[ceph-users] RBD image mirroring doubt

2023-05-30 Thread Work Ceph
Hello guys, What would happen if we set up an RBD mirroring configuration and, in the target system (the system the RBD image is mirrored to), we create snapshots of this image? Would that cause any problems? Also, what happens if we delete the source RBD image? Would that trigger a deletion …
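For reference, a sketch of the snapshot-based mirroring commands involved; pool/image names are hypothetical, and the behaviour of user snapshots on the non-primary side should be verified against the rbd-mirror documentation.

```
# Enable mirroring in image mode on the pool and for one image.
rbd mirror pool enable rbd image
rbd mirror image enable rbd/my-image snapshot

# Check replication state on either cluster.
rbd mirror image status rbd/my-image
```

The mirrored copy on the target is non-primary and effectively read-only until it is promoted with `rbd mirror image promote`, which is why writing to it (including snapshot operations) is the part worth testing carefully.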

[ceph-users] Re: Protected snapshot deleted when RBD image is deleted

2023-05-10 Thread Work Ceph
…spec Unprotect a snapshot from deletion (undo snap protect). If cloned children remain, snap unprotect fails. (Note that clones may exist in different pools than the parent snapshot.) Regards, Reto. On Wed., 10 May 2023 at 20:58 …
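A sketch of the checks implied by that excerpt, with hypothetical pool/image/snapshot names:

```
# A protected snapshot cannot be unprotected (and therefore not deleted)
# while clones of it still exist, possibly in other pools.
rbd children rbd/base-image@golden-snap

# Once no children remain (or they have been flattened), the snapshot
# can be unprotected and removed.
rbd snap unprotect rbd/base-image@golden-snap
rbd snap rm rbd/base-image@golden-snap
```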

[ceph-users] Protected snapshot deleted when RBD image is deleted

2023-05-10 Thread Work Ceph
Hello guys, We have a doubt regarding snapshot management: when a protected snapshot is created, should it be deleted when its RBD image is removed from the system? If not, how can we list orphaned snapshots in a pool?
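For what it's worth, `rbd rm` normally refuses to delete an image that still has snapshots, so auditing a pool usually comes down to enumerating snapshots per image plus the pool's trash. A minimal sketch, assuming a pool named rbd:

```
# List the snapshots of every image in the pool.
for img in $(rbd ls rbd); do
    echo "== rbd/${img}"
    rbd snap ls "rbd/${img}"
done

# Images moved to the trash (for example clone parents kept alive by
# their children) show up here instead of in 'rbd ls'.
rbd trash ls rbd
```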

[ceph-users] Restrict user to an RBD image in a pool

2023-04-14 Thread Work Ceph
Hello guys! Is it possible to restrict user access to a single image in an RBD pool? I know that I can use namespaces, so users can only see images within a given namespace. However, these users will still be able to create new RBD images. Is it possible to somehow block users from creating RBD …
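As a sketch of the namespace-based approach (client, pool and namespace names are hypothetical): the OSD cap can be scoped to a namespace, and a read-only profile blocks image creation entirely, though it also blocks writes, so whether that is restrictive enough depends on the use case.

```
# Create a namespace and a client whose rbd profile is limited to it.
rbd namespace create --pool rbd --namespace project-a
ceph auth get-or-create client.project-a \
    mon 'profile rbd' \
    osd 'profile rbd pool=rbd namespace=project-a'

# A client that must not create or modify images at all can be given
# the read-only profile on the OSD side instead.
ceph auth get-or-create client.project-a-ro \
    mon 'profile rbd' \
    osd 'profile rbd-read-only pool=rbd namespace=project-a'
```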

[ceph-users] Re: Live migrate RBD image with a client using it

2023-04-12 Thread Work Ceph
…te, the clients can be restarted using the new target image name. Attempting to restart the clients using the source image name will result in failure. So I don't think you can live-migrate without interruption, at least not at the moment. Regards, …

[ceph-users] Live migrate RBD image with a client using it

2023-04-12 Thread Work Ceph
Hello guys, We have been reading the docs and trying to reproduce the live-migration process in our Ceph cluster. However, we always receive the following message:
```
librbd::Migration: prepare: image has watchers - not migrating
rbd: preparing migration failed: (16) Device or resource busy
```
We …
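For completeness, a sketch of the flow around that error, with hypothetical pool/image names: the prepare step refuses to run while the image still has watchers, so the clients have to be stopped first and then restarted against the target image name once prepare has finished.

```
# See which clients currently hold a watch on the image.
rbd status rbd/old-image

# After stopping those clients:
rbd migration prepare rbd/old-image ssd-pool/new-image

# Restart the clients against ssd-pool/new-image, then copy and finalize.
rbd migration execute ssd-pool/new-image
rbd migration commit ssd-pool/new-image
```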

[ceph-users] Re: Recently deployed cluster showing 9Tb of raw usage without any load deployed

2023-04-04 Thread Work Ceph
Could this be a trick? If not, please share the "ceph osd df tree" output. On 4/4/2023 2:18 PM, Work Ceph wrote: > Thank you guys for your replies. The "used space" there is exactly that: it is the accounting for RocksDB and WAL. …

[ceph-users] Re: Recently deployed cluster showing 9Tb of raw usage without any load deployed

2023-04-04 Thread Work Ceph
…r OSD? If so, then it is highly likely that RAW usage is that high because the DB volume space is already counted as in use. Could you please share the "ceph osd df tree" output to prove that? Thanks, Igor
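A sketch of the commands typically used to confirm this: as the reply above says, the raw usage reported by `ceph df` includes the space accounted to the BlueStore DB/WAL volumes, so a freshly deployed cluster with dedicated DB devices shows noticeable RAW USED before any user data is written.

```
# Per-OSD view: the DATA/OMAP/META columns show where the raw usage
# comes from (META covers the RocksDB/WAL accounting).
ceph osd df tree

# Cluster-wide summary, including RAW STORAGE usage and per-pool stats.
ceph df detail
```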

[ceph-users] Re: Recently deployed cluster showing 9Tb of raw usage without any load deployed

2023-04-03 Thread Work Ceph
```
…        32  0 B  0  0 B  0  115 TiB
rbd   6  32  0 B  0  0 B  0  115 TiB
```
On Mon, Apr 3, 2023 at 10:25 PM Work Ceph wrote: > Hello guys! We noticed an unexpected situation. In a recently deployed Ceph cluster we are see…

[ceph-users] Recently deployed cluster showing 9Tb of raw usage without any load deployed

2023-04-03 Thread Work Ceph
Hello guys! We noticed an unexpected situation: in a recently deployed Ceph cluster we are seeing raw usage that is a bit odd. We have a new cluster with 5 nodes, each with the following setup: - 128 GB of RAM - 2 Intel(R) Xeon Silver 4210R CPUs - 1 …