[ceph-users] Re: The pg_num from 1024 reduce to 32 spend much time, is there way to shorten the time?

2023-06-08 Thread Louis Koo
The pgp_num drops quickly, but pg_num still decreases slowly.

[ceph-users] Re: The pg_num from 1024 reduce to 32 spend much time, is there way to shorten the time?

2023-06-08 Thread Louis Koo
Thanks, I will take a look.

[ceph-users] Re: The pg_num from 1024 reduce to 32 spend much time, is there way to shorten the time?

2023-06-08 Thread Louis Koo
Thanks. Another question: how can I tell where this option is set, on the mon or the mgr?

[ceph-users] Re: CephFS metadata pool grows by two orders of magnitude while trimming (?) snapshots

2023-06-08 Thread Patrick Donnelly
On Mon, Jun 5, 2023 at 11:48 AM Janek Bevendorff wrote:
> Hi Patrick, hi Dan!
> I got the MDS back and I think the issue is connected to the "newly corrupt dentry" bug [1]. Even though I couldn't see any particular reason for the SIGABRT at first, I then noticed one of these awfully ...

[ceph-users] Re: The pg_num from 1024 reduce to 32 spend much time, is there way to shorten the time?

2023-06-08 Thread Eugen Block
Sure: https://docs.ceph.com/en/latest/rados/operations/balancer/#throttling

Quoting Louis Koo:
> ok, I will try it. Could you show me the doc?

[ceph-users] Re: rbd ls failed with operation not permitted

2023-06-08 Thread Konstantin Shalygin
Hi,

> On 7 Jun 2023, at 14:39, zyz wrote:
> When I set the user's auth and then ls the namespaces, it is ok. But when I set the user's auth with a namespace, ls namespace returns an error. Why?

Because the data about namespaces lives in the "without namespace" space.

k
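For reference, a rough sketch of the two cap variants (client name, pool and namespace are made up for illustration):

  # caps on the whole pool: "rbd namespace ls rbd" works
  ceph auth caps client.testuser mon 'profile rbd' osd 'profile rbd pool=rbd'
  # caps restricted to one namespace: images inside it are visible,
  # but the namespace listing itself is stored outside any namespace
  ceph auth caps client.testuser mon 'profile rbd' osd 'profile rbd pool=rbd namespace=ns1'
  rbd --id testuser ls rbd/ns1          # still allowed
  rbd --id testuser namespace ls rbd    # fails with the namespace-restricted caps, as seen in this thread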

[ceph-users] Re: The pg_num from 1024 reduce to 32 spend much time, is there way to shorten the time?

2023-06-08 Thread Konstantin Shalygin
Hi,

> On 7 Jun 2023, at 10:02, Louis Koo wrote:
> I had set it from 0.05 to 1 with "ceph config set mon target_max_misplaced_ratio 1.0", but it still has no effect.

Because this is a mgr setting, not a mon setting, try `ceph config set mgr target_max_misplaced_ratio 1`.

Cheers,
k
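To double-check where the option actually took effect (mon vs. mgr, which was asked earlier in the thread), something along these lines should work:

  ceph config get mgr target_max_misplaced_ratio
  ceph config dump | grep target_max_misplaced_ratio   # the first column shows the section it was set under
  ceph config rm mon target_max_misplaced_ratio        # optionally drop the value set under the wrong section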

[ceph-users] Re: [RGW] what is log_meta and log_data config in a multisite config?

2023-06-08 Thread Gilles Mocellin
Hi Richard, Thank you, that's what I thought; I've also seen that doc. So I imagine that log_meta is false on secondary zones because metadata requests are forwarded to the master zone, so there is no need to sync them. Regards, -- Gilles

On Thursday, 8 June 2023 at 03:15:56 CEST, Richard Bade wrote:
> Hi Gilles, ...

[ceph-users] rbd ls failed with operation not permitted

2023-06-08 Thread zyz
When I set the user's auth and then ls the namespaces, it is ok. But when I set the user's auth with a namespace, ls namespace returns an error. Why?

[ceph-users] Re: The pg_num from 1024 reduce to 32 spend much time, is there way to shorten the time?

2023-06-08 Thread Louis Koo
I had set it from 0.05 to 1 with "ceph config set mon target_max_misplaced_ratio 1.0", but it still has no effect.

[ceph-users] Re: The pg_num from 1024 reduce to 32 spend much time, is there way to shorten the time?

2023-06-08 Thread Louis Koo
OK, I will try it. Could you point me to the doc?

[ceph-users] Re: The pg_num from 1024 reduce to 32 spend much time, is there way to shorten the time?

2023-06-08 Thread Louis Koo
ceph df detail:

[root@k8s-1 ~]# ceph df detail
--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
hdd    600 GiB  600 GiB  157 MiB  157 MiB        0.03
TOTAL  600 GiB  600 GiB  157 MiB  157 MiB        0.03

--- POOLS ---
POOL  ID  PGS ...

[ceph-users] S3 and Omap

2023-06-08 Thread xadhoom76
Hi, we have a Ceph 17.2.6 cluster with radosgw and a couple of buckets in it. We use it for backups with object lock, written directly from Veeam. After a few backups we got HEALTH_WARN: 2 large omap objects.
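To narrow down which objects are large (the commands are standard, but whether the bucket index or the RGW logs are the culprit is an assumption to verify):

  ceph health detail                    # names the pool and PGs holding the large omap objects
  radosgw-admin bucket limit check      # objects per index shard for each bucket
  radosgw-admin reshard list            # any pending reshard operations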

[ceph-users] Re: Question about xattr and subvolumes

2023-06-08 Thread Dario Graña
Thank you for the answer, that's what I was looking for!

On Wed, Jun 7, 2023 at 7:59 AM Kotresh Hiremath Ravishankar <khire...@redhat.com> wrote:
> On Tue, Jun 6, 2023 at 4:30 PM Dario Graña wrote:
>> Hi, I'm installing a new instance (my first) of Ceph. Our cluster runs ...

[ceph-users] keep rbd command history ever executed

2023-06-08 Thread huxia...@horebdata.cn
Dear Ceph folks, in a Ceph cluster there can be multiple points (e.g. librbd clients) able to execute rbd commands. My question is: is there a method to reliably record or keep a full history of every rbd command that has ever been executed? This would be helpful for auditors as well as ...
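As far as I know there is no built-in recorder for every librbd operation, but the cluster audit channel at least captures administrative commands sent to the mons/mgr; a partial, hedged starting point only:

  ceph log last 100 info audit     # recent entries from the audit channel
  # the same entries are normally written to /var/log/ceph/ceph.audit.log on the mon hosts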

[ceph-users] Operations: cannot update immutable features

2023-06-08 Thread Adam Boyhan
I have a small cluster on Pacific with roughly 600 RBD images. Out of those 600 images I have 2 which are in a somewhat odd state.

root@cephmon:~# rbd info Cloud-Ceph1/vm-134-disk-0
rbd image 'vm-134-disk-0':
        size 1000 GiB in 256000 objects
        order 22 (4 MiB objects) ...
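If the odd state is the "cannot update immutable features" error from the subject, a hedged sketch of how that usually plays out (image name taken from above):

  rbd info Cloud-Ceph1/vm-134-disk-0 | grep features
  # dynamic features (exclusive-lock, object-map, fast-diff, journaling) can be toggled:
  rbd feature disable Cloud-Ceph1/vm-134-disk-0 journaling
  # create-time features (layering, striping, deep-flatten, data-pool) cannot be changed
  # afterwards; the image has to be copied or migrated to a new image created with the
  # desired feature set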

[ceph-users] Re: RadosGW S3 API Multi-Tenancy

2023-06-08 Thread Brad House
Curious if anyone has any guidance on this question...

On 4/29/23 7:47 AM, Brad House wrote:
> I'm in the process of exploring whether it is worthwhile to add RadosGW to our existing Ceph cluster. We've had a few internal requests for exposing the S3 API for some of our business units; right now we ...
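On the multi-tenancy part of the original question, a minimal sketch (tenant and uid names are invented):

  radosgw-admin user create --tenant unit1 --uid alice --display-name "Business unit 1 user"
  # each tenant gets its own bucket namespace; buckets are addressed as "tenant:bucket",
  # e.g. unit1:backups, so different business units can reuse the same bucket names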

[ceph-users] Re: 16.2.13: ERROR:ceph-crash:directory /var/lib/ceph/crash/posted does not exist; please create

2023-06-08 Thread Eugen Block
Hi, I wonder if a redeploy of the crash service would fix that, did you try that?

Quoting Zakhar Kirpichenko:
> I've opened a bug report https://tracker.ceph.com/issues/61589, which unfortunately received no attention. I fixed the issue by manually setting directory ownership for ...
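In a cephadm deployment, a redeploy plus an ownership fix would look roughly like this (the fsid path is a placeholder; inside the container it appears as /var/lib/ceph/crash/posted):

  ceph orch redeploy crash
  # or, by hand on the affected host:
  mkdir -p /var/lib/ceph/<fsid>/crash/posted
  chown -R ceph:ceph /var/lib/ceph/<fsid>/crash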

[ceph-users] Re: ceph Pacific - MDS activity freezes when one the MDSs is restarted

2023-06-08 Thread Eugen Block
Hi, sorry for not responding earlier.

> Pardon my ignorance, I'm not quite sure I know what you mean by subtree pinning. I quickly googled it and saw it was a new feature in Luminous. We are running Pacific. I would assume this feature was not out yet.

Luminous is older than Pacific, so the ...
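Subtree pinning did land in Luminous, so Pacific has it; the basic usage is an extended attribute on a directory of a mounted CephFS (paths and ranks below are illustrative):

  # pin a directory tree to MDS rank 0
  setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/project-a
  # pin another tree to rank 1; a value of -1 removes the pin again
  setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/project-b
  setfattr -n ceph.dir.pin -v -1 /mnt/cephfs/project-b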

[ceph-users] Bucket resharding in multisite without data replication

2023-06-08 Thread Danny Webb
Hi Ceph users, we have 3 clusters running Pacific 16.2.9, all set up in a multisite configuration with no data replication (we wanted to use per-bucket policies but never got them working to our satisfaction). All of the resharding documentation I've found regarding multisite is centred around ...
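For what it's worth, the manual reshard itself is the same handful of commands on the zone that owns the bucket; whether the pre-Reef multisite caveats still apply to a setup without data sync is exactly the open question here:

  radosgw-admin bucket stats --bucket=<bucket>            # current num_shards and object count
  radosgw-admin bucket reshard --bucket=<bucket> --num-shards=<N>
  radosgw-admin reshard status --bucket=<bucket>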

[ceph-users] Re: Updating the Grafana SSL certificate in Quincy

2023-06-08 Thread Eugen Block
Hi, can you paste the following output?

# ceph config-key list | grep grafana

Do you have a mgr/cephadm/grafana_key set? I would check the contents of the crt and key and see if they match. A workaround to test the certificate and key pair would be to use a per-host config [1]. Maybe it's ...
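A quick way to verify the pair matches, and (assuming a cephadm-managed Grafana) to re-apply it; treat the config-key names as the ones documented for your release:

  # the two digests must be identical for a matching RSA cert/key pair
  openssl x509 -noout -modulus -in grafana.crt | openssl md5
  openssl rsa  -noout -modulus -in grafana.key | openssl md5

  ceph config-key set mgr/cephadm/grafana_crt -i grafana.crt
  ceph config-key set mgr/cephadm/grafana_key -i grafana.key
  ceph orch reconfig grafana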

[ceph-users] Re: How to secure erasing a rbd image without encryption?

2023-06-08 Thread Janne Johansson
On Thu, 8 June 2023 at 09:43, Marc wrote:
> I bumped into a very interesting challenge: how to securely erase an RBD image's data without any encryption?

As Darren replied while I was typing this, you can't have dangerous data written all over a cluster which automatically moves data around, ...

[ceph-users] Re: How to secure erasing a rbd image without encryption?

2023-06-08 Thread darren
Unfortunately this is impossible to achieve. Unless you can guarantee that the same physical pieces of disk are always mapped to the same parts of the RBD device, you will leave data lying around on the array. How easy it is to recover is a bit of a question of how valuable ...

[ceph-users] Re: Issues in installing old dumpling version to add a new monitor

2023-06-08 Thread Janne Johansson
> I have a very old Ceph cluster running the old dumpling version 0.67.1. One of the three monitors suffered a hardware failure and I am setting up a new server to replace the third monitor, running Ubuntu 22.04 LTS (all the other monitors are using the old Ubuntu 12.04 LTS).
> - Try to ...

[ceph-users] Re: How to secure erasing a rbd image without encryption?

2023-06-08 Thread Marc
> I bumped into a very interesting challenge: how to securely erase an RBD image's data without any encryption?
> The motivation is to ensure that there is no information leak on the OSDs after deleting a user-specified RBD image, without the extra burden of using RBD encryption.
> any ...

[ceph-users] Re: Issues in installing old dumpling version to add a new monitor

2023-06-08 Thread Nico Schottelius
Hey, in case building from source does not work out for you, here is a strategy we used to recover older systems before:
- Create a .tar from /, pipe it out via ssh to another host - basically take everything with the exception of unwanted mountpoints
- Untar it, modify networking, hostname, ...
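A hedged sketch of that first step (host name and target path are placeholders):

  # on the old server: stream the root filesystem to the new host, staying on one
  # filesystem so /proc, /sys and other mounts are skipped
  tar --one-file-system -cpf - / | ssh newhost 'cd /mnt/target && tar -xpf -'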