[ceph-users] Re: ceph fs mv does copy, not move

2021-06-23 Thread Patrick Donnelly
Hello Frank, On Tue, Jun 22, 2021 at 2:16 AM Frank Schilder wrote: > > Dear all, > > some time ago I reported that the kernel client resorts to a copy instead of > move when moving a file across quota domains. I was told that the fuse client > does not have this problem. If enough space is
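A minimal sketch of what the cross-quota-domain case looks like from a client, assuming a CephFS mount at /mnt/cephfs and hypothetical directory names (quotas live as extended attributes on the directories):

  getfattr -n ceph.quota.max_bytes /mnt/cephfs/project-a   # quota on the source directory
  getfattr -n ceph.quota.max_bytes /mnt/cephfs/project-b   # quota on the destination directory
  mv /mnt/cephfs/project-a/bigfile /mnt/cephfs/project-b/  # a rename crossing the two quota domains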

[ceph-users] Re: In "ceph health detail", what's the diff between MDS_SLOW_METADATA_IO and MDS_SLOW_REQUEST?

2021-06-23 Thread Patrick Donnelly
On Mon, Jun 21, 2021 at 8:13 PM opengers wrote: > > Thanks for the answer. I'm still somewhat confused by the explanation > of "MDS_SLOW_REQUEST" in the documentation, which reads as follows > -- > MDS_SLOW_REQUEST > > Message > “N slow requests are blocked” > > Description > One or more client
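A hedged way to dig into both warnings, assuming an MDS named mds.a and shell access to the host it runs on:

  ceph health detail                        # full text of the active warnings
  ceph daemon mds.a dump_ops_in_flight      # stuck client requests (relates to MDS_SLOW_REQUEST)
  ceph daemon mds.a objecter_requests       # MDS-issued OSD ops that are lagging (relates to MDS_SLOW_METADATA_IO)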

[ceph-users] How to stop a rbd migration and recover

2021-06-23 Thread Gilles Mocellin
Hello, as a follow-up to the thread "RBD migration between 2 EC pools : very slow": I'm running Octopus 15.2.13, and RBD migration seems really fragile. I started a migration to change the data pool (from an EC 3+2 to an EC 8+2): - rbd migration prepare - rbd migration execute => 4% after 6h, and
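For reference, a minimal sketch of backing out of an unfinished migration, with a placeholder image spec; aborting is only possible before "rbd migration commit" has been run:

  rbd status mypool/myimage           # shows watchers and, on recent releases, the migration state
  rbd migration abort mypool/myimage  # reverts the image to its pre-migration source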

[ceph-users] Re: Octopus 15.2.8 slow ops causing inactive PGs upon disk replacement

2021-06-23 Thread Justin Goetz
Dan, Thank you for the suggestion. Changing osd_max_pg_per_osd_hard_ratio to 10 and also setting mon_max_pg_per_osd to 500 allowed me to resume IO (I did have to restart the OSDs with stuck slow ops). I'll have to do some reading into why our PG count appears so high, and if it's safe to
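A sketch of applying those two settings through the config database, with the values from this thread:

  ceph config set osd osd_max_pg_per_osd_hard_ratio 10
  ceph config set global mon_max_pg_per_osd 500
  # then restart any OSDs that still report stuck slow ops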

[ceph-users] Re: How can I check my rgw quota ? [EXT]

2021-06-23 Thread Konstantin Shalygin
Or you can use radosgw_usage_exporter [1] and provide some graphs to end users [1] https://github.com/blemmenes/radosgw_usage_exporter k Sent from my iPhone > On 23 Jun 2021, at 11:59, Matthew Vernon wrote: > > > I think you can't via S3; we collect these data and publish them out-of-band

[ceph-users] Re: RGW topic created in wrong (default) tenant

2021-06-23 Thread Daniel Iwan
> > this looks like a bug, the topic should be created in the right tenant. > please submit a tracker for that. > Thank you for confirming. Created here https://tracker.ceph.com/issues/51331 > yes. topics are owned by the tenant. previously, they were owned by the > user but since the same

[ceph-users] pacific installation at ubuntu 20.04

2021-06-23 Thread Jana Markwort
Hi all, I'm a new Ceph user trying to install my first cluster. I'm trying to install Pacific, but I end up with Octopus instead. What's wrong here? I've done: # curl --silent --remote-name --location https://github.com/ceph/ceph/raw/pacific/src/cephadm/cephadm # chmod +x cephadm # ./cephadm add-repo
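One hedged thing to check (a sketch, not a confirmed diagnosis): pass the release to add-repo explicitly and verify what the downloaded cephadm reports before bootstrapping:

  ./cephadm add-repo --release pacific
  ./cephadm install       # installs cephadm itself from the repo just added
  cephadm version         # should report a 16.x (Pacific) release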

[ceph-users] Re: Octopus 15.2.8 slow ops causing inactive PGs upon disk replacement

2021-06-23 Thread Dan van der Ster
Hi, Stuck activating could be an old known issue: if the cluster has many (>100) PGs per OSD, they may temporarily need to hold more than the max (300) and therefore PGs get stuck activating. We always use this option as a workaround: osd max pg per osd hard ratio = 10.0 I suggest giving

[ceph-users] Re: when is krbd on osd nodes starting to get problematic?

2021-06-23 Thread Ilya Dryomov
On Wed, Jun 23, 2021 at 3:36 PM Marc wrote: > > From what kernel / ceph version is krbd usage on a osd node problematic? > > Currently I am running Nautilus 14.2.11 and el7 3.10 kernel without any > issues. > > I can remember using a cephfs mount without any issues as well, until some >

[ceph-users] Re: RBD migration between 2 EC pools : very slow

2021-06-23 Thread Gilles Mocellin
On 2021-06-23 14:51, Alexander E. Patrakov wrote: On Tue, 22 Jun 2021 at 23:22, Gilles Mocellin wrote: Hello Cephers, On a capacity-oriented Ceph cluster (13 nodes, 130 OSDs, 8 TB HDDs), I'm migrating a 40 TB image from a 3+2 EC pool to an 8+2 one. The use case is Veeam backup on XFS filesystems,

[ceph-users] Octopus 15.2.8 slow ops causing inactive PGs upon disk replacement

2021-06-23 Thread Justin Goetz
Hello! We are in the process of expanding our Ceph cluster (by both adding OSD hosts and replacing smaller-sized HDDs on our existing hosts). So far we have gone host by host, removing the old OSDs, swapping the physical HDDs, and re-adding them. This process has gone smoothly, aside from one
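For comparison, a hedged sketch of a per-OSD replacement cycle (osd.12 and /dev/sdX are placeholders; the exact flow used in this thread may differ):

  ceph osd out osd.12                             # let data drain off the OSD
  ceph osd safe-to-destroy osd.12                 # wait until this reports the OSD is safe to destroy
  ceph osd destroy osd.12 --yes-i-really-mean-it
  # swap the physical HDD, then redeploy on the new disk, e.g.:
  ceph-volume lvm create --osd-id 12 --data /dev/sdX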

[ceph-users] Re: RGW topic created in wrong (default) tenant

2021-06-23 Thread Yuval Lifshitz
On Wed, Jun 23, 2021 at 2:21 PM Daniel Iwan wrote: > Hi > > I'm using Ceph Pacific 16.2.1 > > I'm creating a topic as a user which belongs to a non-default tenant. > I'm using AWS CLI 2 with v3 authentication enabled > > aws --profile=ceph-myprofile --endpoint=$HOST_S3_API --region="" sns >

[ceph-users] when is krbd on osd nodes starting to get problematic?

2021-06-23 Thread Marc
From what kernel / ceph version is krbd usage on an OSD node problematic? Currently I am running Nautilus 14.2.11 and an el7 3.10 kernel without any issues. I can remember using a cephfs mount without any issues as well, until some specific Luminous update surprised me. So it would be nice to know when

[ceph-users] Re: RBD migration between 2 EC pools : very slow

2021-06-23 Thread Alexander E. Patrakov
On Tue, 22 Jun 2021 at 23:22, Gilles Mocellin wrote: > > Hello Cephers, > > > On a capacity-oriented Ceph cluster (13 nodes, 130 OSDs, 8 TB HDDs), I'm migrating a > 40 TB image from a 3+2 EC pool to an 8+2 one. > > The use case is Veeam backup on XFS filesystems, mounted via KRBD. > > > Backups are running, and I
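For context, a sketch of the prepare/execute/commit sequence being discussed, with placeholder pool and image names; for an EC-backed image only the data pool changes, while the image metadata stays in a replicated pool:

  rbd migration prepare --data-pool ec8p2pool rbdmeta/veeam-repo
  rbd migration execute rbdmeta/veeam-repo
  rbd migration commit rbdmeta/veeam-repo    # only after execute has finished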

[ceph-users] RGW topic created in wrong (default) tenant

2021-06-23 Thread Daniel Iwan
Hi I'm using Ceph Pacific 16.2.1 I'm creating a topic as a user which belongs to a non-default tenant. I'm using AWS CLI 2 with v3 authentication enabled aws --profile=ceph-myprofile --endpoint=$HOST_S3_API --region="" sns create-topic --name=fishtopic --attributes='{"push-endpoint": "
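A hedged way to check, on the radosgw side, which tenant a topic actually ended up in (the --tenant usage here is an assumption, not confirmed by this thread):

  radosgw-admin topic list                     # topics in the default (empty) tenant
  radosgw-admin topic list --tenant=mytenant   # topics owned by a specific tenant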

[ceph-users] Ceph rbd-nbd performance benchmark

2021-06-23 Thread Bobby
Hi, I am trying to benchmark Ceph rbd-nbd performance. Are there any existing, reliable rbd-nbd benchmark results to compare against? BR Bobby
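In the absence of published numbers, a minimal fio sketch against an rbd-nbd mapping is one way to produce comparable figures yourself (pool/image names are placeholders):

  rbd-nbd map mypool/bench-image    # prints the nbd device, e.g. /dev/nbd0
  fio --name=rbd-nbd-randwrite --filename=/dev/nbd0 --ioengine=libaio \
      --direct=1 --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based
  rbd-nbd unmap /dev/nbd0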

[ceph-users] Re: Can not mount rbd device anymore

2021-06-23 Thread Ilya Dryomov
On Wed, Jun 23, 2021 at 9:59 AM Matthias Ferdinand wrote: > > On Tue, Jun 22, 2021 at 02:36:00PM +0200, Ml Ml wrote: > > Hello List, > > > > all of a sudden I cannot mount a specific rbd device anymore: > > > > root@proxmox-backup:~# rbd map backup-proxmox/cluster5 -k > >

[ceph-users] Re: Create and listing topics with AWS4 fails

2021-06-23 Thread Daniel Iwan
Hi Yuval Thank you very much for the link This gave me some useful info from https://github.com/ceph/ceph/tree/master/examples/boto3#aws-cli Regards Daniel On Tue, 22 Jun 2021 at 18:34, Yuval Lifshitz wrote: > Hi Daniel, > You are correct, currently, only v2 auth is supported for topic

[ceph-users] Re: How can I check my rgw quota ? [EXT]

2021-06-23 Thread Matthew Vernon
On 22/06/2021 12:58, Massimo Sgaravatto wrote: Sorry for the very naive question: I know how to set/check the rgw quota for a user (using radosgw-admin). But how can a radosgw user check what quota is assigned to their account, using the S3 and/or Swift interface? I think you
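On the admin side, the assigned quota is visible with radosgw-admin (a sketch; the uid is a placeholder), which is typically what an out-of-band report is built from:

  radosgw-admin user info --uid=johndoe    # includes the "user_quota" and "bucket_quota" sections
  radosgw-admin user stats --uid=johndoe   # current usage to compare against the quota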

[ceph-users] Re: RBD migration between 2 EC pools : very slow

2021-06-23 Thread Gilles Mocellin
On 2021-06-22 20:21, Gilles Mocellin wrote: Hello Cephers, On a capacity-oriented Ceph cluster (13 nodes, 130 OSDs, 8 TB HDDs), I'm migrating a 40 TB image from a 3+2 EC pool to an 8+2 one. The use case is Veeam backup on XFS filesystems, mounted via KRBD. Backups are running, and I can see

[ceph-users] Re: Can not mount rbd device anymore

2021-06-23 Thread Matthias Ferdinand
On Tue, Jun 22, 2021 at 02:36:00PM +0200, Ml Ml wrote: > Hello List, > > all of a sudden I cannot mount a specific rbd device anymore: > > root@proxmox-backup:~# rbd map backup-proxmox/cluster5 -k > /etc/ceph/ceph.client.admin.keyring > /dev/rbd0 > > root@proxmox-backup:~# mount /dev/rbd0
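When the map succeeds but the mount fails, the kernel log and the on-disk filesystem signature are the usual first checks (a generic sketch, not the resolution of this thread):

  dmesg | tail -n 20                    # mount errors from the filesystem driver land here
  blkid /dev/rbd0                       # confirm the device still carries the expected filesystem
  rbd status backup-proxmox/cluster5    # check for stale watchers on the image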