[ceph-users] ceph fs perf stats output is empty

2023-06-09 Thread Denis Polom
Hi, I'm running the latest Ceph Pacific, 16.2.13, with CephFS. I need to collect performance stats per client, but I'm getting an empty list without any numbers. I even ran dd on a client against the mounted CephFS, but the output only looks like this: #> ceph fs perf stats 0 4638 192.168.121.1 {"version": 2,
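(A minimal troubleshooting sketch, not taken from the thread: `ceph fs perf stats` is served by the MGR "stats" module, and older kernel clients may not report per-client metrics at all, so an empty list is not necessarily a bug. Whether this applies here is an assumption.)
#> ceph mgr module enable stats   # module that backs 'ceph fs perf stats'
#> ceph mgr module ls             # confirm 'stats' shows up as enabled
#> ceph fs perf stats             # query again; rank/client-id/ip filters are optional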

[ceph-users] Re: Ceph drain not removing daemons

2023-06-09 Thread Jeremy Hansen
Figured out how to cleanly relocate daemons via the interface. All is good. -jeremy > On Friday, Jun 09, 2023 at 2:04 PM, Me (mailto:jer...@skidrow.la) wrote: > I’m doing a drain on a host using cephadm, Pacific, 16.2.11. > > ceph orch host drain > > removed all the OSDs, but these daemons

[ceph-users] Ceph drain not removing daemons

2023-06-09 Thread Jeremy Hansen
I’m doing a drain on a host using cephadm, Pacific, 16.2.11. ceph orch host drain removed all the OSDs, but these daemons remain:
grafana.cn06         cn06.ceph.la1  *:3000  stopped       5m ago  18M  -      -
mds.btc.cn06.euxhdu  cn06.ceph.la1          running (2d)  5m ago  17M  29.4M  -  16.2.11  de4b0b384ad4  017f7ef441ff
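(A hedged sketch of how such leftovers are usually handled, not from the thread: the drain only labels the host _no_schedule, so daemons pinned there by a service spec have to be rescheduled or removed explicitly. The service name "btc" and the placement count below are illustrative.)
#> ceph orch ps cn06.ceph.la1                      # list daemons still assigned to the host
#> ceph orch daemon rm grafana.cn06 --force        # remove a stray daemon by name
#> ceph orch apply mds btc --placement="count:2"   # let the MDS spec place ranks on other hosts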

[ceph-users] Re: bucket notification retries

2023-06-09 Thread Stefan Reuter
Hi Yuval, Thanks for having a look at bucket notifications and collecting feedback. I also see potential for improvement in the area of bucket notifications. We have observed issues in a setup with RabbitMQ as a broker where the RADOS queue seems to fill up and clients receive "slow down"
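(A small sketch of how the queue side can be inspected, assuming persistent topics are in use; the topic name is a placeholder and this is not taken from the thread.)
#> radosgw-admin topic list                  # notification topics configured in the zone
#> radosgw-admin topic get --topic=mytopic   # shows push-endpoint and whether 'persistent' is set
With persistent notifications the events are spooled in a RADOS queue and retried against the broker, and the "slow down" (503) responses are typically the backpressure clients see once that queue can no longer accept reservations.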

[ceph-users] Re: Disks are filling up

2023-06-09 Thread Omar Siam
TL;DR: We could not fix this problem in the end and ended up with a CephFS in read-only mode (so we could only back up, delete, and restore) and one broken OSD (we deleted that one and restored to a "new disk"). I can now wrap up my whole experience with this problem. After the OSD usage growing to

[ceph-users] Re: ceph Pacific - MDS activity freezes when one of the MDSs is restarted

2023-06-09 Thread Emmanuel Jaep
Hi Eugen, thanks for the response! :-) We have (kind of) solved the immediate problem at hand. The whole process was stuck because the MDSes were actually getting 'killed': the amount of RAM we allocated to the MDSes was insufficient to accommodate a complete replay of the logs. Therefore,
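(A minimal sketch of the usual remedy, assuming the relevant knob is mds_cache_memory_limit rather than a container memory limit; the 16 GiB value is only an example.)
#> ceph config get mds mds_cache_memory_limit                # current cache target, in bytes
#> ceph config set mds mds_cache_memory_limit 17179869184    # e.g. raise it to 16 GiB
#> ceph fs status                                            # watch the rank move through replay/resolve/rejoin to active
Note that mds_cache_memory_limit is a target rather than a hard cap, so the host or container needs headroom above it, especially during replay.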

[ceph-users] CreateMultipartUpload and Canned ACL - bucket-owner-full-control

2023-06-09 Thread Rasool Almasi
A bucket with a policy that enforces "bucket-owner-full-control" results in Access Denied if multipart upload is used to upload the object. It is also discussed in an awscli issue: https://github.com/aws/aws-cli/issues/1674. The aws client exits with "An error occurred (AccessDenied) when calling the
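(For context, a hedged sketch of the client side; bucket and key names are placeholders. Passing the canned ACL when the multipart upload is initiated is the usual way to satisfy such a policy, and whether RGW honours it here is exactly what the report is about.)
#> aws s3api create-multipart-upload --bucket mybucket --key bigfile --acl bucket-owner-full-control
#> aws s3 cp ./bigfile s3://mybucket/bigfile --acl bucket-owner-full-control   # high-level command; multipart kicks in for large files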

[ceph-users] RGW: Migrating a long-lived cluster to multi-site, fixing an EC pool mistake

2023-06-09 Thread Christian Theune
Hi, we are running a cluster that has been alive for a long time and we tread carefully regarding updates. We are still lagging a bit, and our cluster (which started around Firefly) is currently at Nautilus. We're updating, and we know we're still behind, but we do keep running into challenges
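(Not from the thread, but for orientation: the documented path for turning a long-lived single-site RGW setup into the master zone of a multi-site realm is roughly the following; realm, zonegroup, and zone names are placeholders, and the zone still needs its endpoints and a system user configured before the period commit.)
#> radosgw-admin realm create --rgw-realm=myrealm --default
#> radosgw-admin zonegroup rename --rgw-zonegroup default --zonegroup-new-name=eu
#> radosgw-admin zone rename --rgw-zone default --zone-new-name=eu-central --rgw-zonegroup=eu
#> radosgw-admin zonegroup modify --rgw-realm=myrealm --rgw-zonegroup=eu --master --default
#> radosgw-admin period update --commit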

[ceph-users] Re: CephFS metadata pool grows by two orders of magnitude while trimming (?) snapshots

2023-06-09 Thread Janek Bevendorff
Hi Patrick, > I'm afraid your ceph-post-file logs were lost to the nether. AFAICT, our ceph-post-file storage has been non-functional since the beginning of the lab outage last year. We're looking into it. I have it here still. Any other way I can send it to you? > Extremely unlikely. Okay,