[ceph-users] Re: orchestrator behaving strangely

2025-06-30 Thread Frédéric Nass
Hi Holger, In addition to Eugen's sound advice, I would try restarting the OSD in question. If that doesn't help, I would stop the current deletion process with 'ceph orch osd rm stop 406' and restart it with 'ceph orch osd rm 406 --force'. As for the orchestrator logs, you can use the command 'ceph log
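Roughly, the sequence Frédéric describes would look like this (OSD id 406 taken from the thread; a sketch, not verified against the cluster in question):

    ceph orch osd rm stop 406        # cancel the stuck removal
    ceph orch osd rm 406 --force     # queue the removal again
    ceph orch osd rm status          # watch the removal queue
    ceph log last cephadm            # recent orchestrator log entries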

[ceph-users] Re: Suspiciously low PG count for CephFS with many small files

2025-06-30 Thread Niklas Hambüchen
Hi Anthony and others, I have now increased the number of PGs on my cluster, but the results are a bit surprising: I increased the settings by 4x and got a PG increase of 8x. Wondering if you have insights into why that might be. Details: Defaults: mon_target_pg_per_osd 100 mon
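For context, a sketch of how the autoscaler target is usually inspected and raised (the value 400 is only an illustration of a 4x bump over the default 100, not a recommendation for this cluster):

    ceph config get mon mon_target_pg_per_osd
    ceph config set global mon_target_pg_per_osd 400
    ceph osd pool autoscale-status   # compare PG_NUM vs NEW PG_NUM per pool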

[ceph-users] Re: Suspiciously low PG count for CephFS with many small files

2025-06-30 Thread Anthony D'Atri
> Hi Anthony and others, > > I have now increased the number of PGs on my cluster, but the results are a > bit surprising: > I increased the settings by 4x and got a PG increase of 8x. > > Wondering if you have insights into why that might be. Your prior values were extremely low, which no do

[ceph-users] Re: Get past epoch pg map

2025-06-30 Thread Michel Jouvin
Hi Darrell, I am not sure I understand what you mean by "all the PGs have been relocated, I don't know what the PG numbers are". PG numbers never change. So if you know the PG number you want to check, you should be able to check its status by comparing the last deep scrub date with the dat
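A sketch of what Michel suggests, with a made-up PG id 1.2f as a placeholder:

    ceph pg 1.2f query | grep -E 'last_scrub_stamp|last_deep_scrub_stamp'
    ceph pg dump pgs | less          # LAST_SCRUB_STAMP / LAST_DEEP_SCRUB_STAMP columns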

[ceph-users] RADOS Gateway IAM for Production Environments ( reef to squid upgrade )

2025-06-30 Thread ramo...@yahoo.com
Hi everyone! I am planning to upgrade Ceph from Reef to Squid to benefit from RGW IAM. Does anybody know whether it is still a tech preview? Would an upgrade be enough to get this feature?

[ceph-users] Failure to build on a devuan.

2025-06-30 Thread robin hammond
I just attempted to build the latest tarball on a freshly installed and updated Devuan Daedalus system. Linux devuan-ceph 6.1.0-37-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.140-1 (2025-05-22) x86_64 GNU/Linux I ran ./install-deps.sh as per https://docs.ceph.com/en/latest/install/build-ceph/ b
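For reference, the documented source-build flow is roughly the following (a sketch; dependency resolution on Devuan may differ from Debian):

    ./install-deps.sh
    ./do_cmake.sh
    cd build
    ninja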

[ceph-users] Code of Conduct

2025-06-30 Thread Anthony Middleton
Hello Ceph Community, I'm reaching out regarding a recent dispute on our public mailing lists. In line with our commitment to uphold the Ceph Code of Conduct ( https://ceph.io/en/code-of-conduct/), I've started a resolution process in close collaboration with the Ceph Steering Committee and Ceph E

[ceph-users] Re: RADOS Gateway IAM for Production Environments ( reef to squid upgrade )

2025-06-30 Thread Casey Bodley
the user account and IAM features are fully supported in squid - not "experimental" or "tech preview" On Mon, Jun 30, 2025 at 10:09 AM ramo...@yahoo.com wrote: > > Hi Everyone ! I am planning to upgrade ceph from reef to squid to benefit > from RGW IAM .Does anybody know is it still a tech-previ
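For anyone trying the Squid account feature, the commands look roughly like this (names are placeholders; a sketch based on the Squid RGW account documentation, not a tested recipe):

    radosgw-admin account create --account-name=example-tenant
    radosgw-admin user create --uid=example-admin --display-name="Example Admin" \
        --account-id=<account-id> --account-root --gen-access-key --gen-secret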

[ceph-users] Re: ceph squid - huge difference between capacity reported by "ceph -s" and "ceph df "

2025-06-30 Thread Anthony D'Atri
> Hi Anthony > > Appreciate you taking the time to provide so much guidance > All your messages in this mailing list are well documented and VERY helpful You’re most welcome. The community is a vital part of Ceph. > I have attached a text file with the output of the commands you mentioned
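The comparison in this thread typically rests on output like the following (a sketch of commonly used commands, not the exact list exchanged in the attached file):

    ceph -s                 # raw cluster capacity in the "usage" line
    ceph df detail          # per-pool stored vs. available, with replication/EC applied
    ceph osd df tree        # per-OSD and per-device-class utilization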

[ceph-users] Re: CephFS: no MDS does join the filesystem

2025-06-30 Thread Robert Sander
Hi, On 6/30/25 at 15:34, Bailey Allison wrote: I'm pretty sure the reason is the damaged MDS daemon. If you are able to clear that up it should allow the filesystem to come back up. I saw something like this a few months ago. We were just able to mark the mds as "repaired" and haven't

[ceph-users] Re: CephFS: no MDS does join the filesystem

2025-06-30 Thread Bailey Allison
I'm pretty sure the reason is the damaged MDS daemon. If you are able to clear that up it should allow the filesystem to come back up. I saw something like this a few months ago. We were just able to mark the mds as "repaired" and haven't seen any issues since, however I would discourage
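A sketch of the commands involved (filesystem name and rank 0 are placeholders; as Bailey says, understand why the rank was marked damaged before clearing it):

    ceph tell mds.<fs_name>:0 damage ls      # inspect recorded damage, if an MDS currently holds the rank
    ceph mds repaired <fs_name>:0            # clear the damaged flag on rank 0
    ceph fs status                           # confirm a standby picks up the rank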

[ceph-users] Re: Re-uploading existing s3 objects slower than initial upload

2025-06-30 Thread Roberto Valverde Cameselle
Hi Sinan, Do you have versioning enabled on the bucket? That may explain it. - Roberto VALVERDE CAMESELLE IT Storage And Data Management CERN, European Organization for Nuclear Research Esplanade des Particules 1, Geneve (Switzerland
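A quick way to check (placeholder bucket and endpoint; a sketch using the AWS CLI against RGW):

    aws s3api get-bucket-versioning --bucket mybucket --endpoint-url http://rgw.example.com:8080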

[ceph-users] CephFS: no MDS does join the filesystem

2025-06-30 Thread Robert Sander
Hi, we are having an issue at a customer site where a 3PB CephFS is in failed state. The cluster itself is unhealthy and awaits replacement disks: # ceph -s cluster: id: 28ca2bfa-d87e-11ed-83a3-1070fddda30f health: HEALTH_ERR 4 failed cephadm daemon(s) The
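The usual first-pass commands for a CephFS stuck in failed state look roughly like this (a sketch, not the exact output from this cluster):

    ceph health detail                  # failed daemons plus any MDS damage / degraded fs messages
    ceph fs status                      # which ranks are active, failed or damaged
    ceph orch ps --daemon-type mds      # state of the deployed MDS daemons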

[ceph-users] Re: Suspiciously low PG count for CephFS with many small files

2025-06-30 Thread Anthony D'Atri
> (Adding back the list) I had written privately on purpose. >> https://docs.clyso.com/docs/kb/rados/#osd---improved-procedure-for-adding-hosts-or-osds > Is the upstream source for `upmap-rem

[ceph-users] Re: CephFS with Ldap

2025-06-30 Thread Harry G Coin
To get LDAP working, we had to set up Samba to manage the shares (it can do LDAP auth, connecting the SMB accounts to the Linux ownership/permission space). It would be very helpful if Ceph included a native, secondary LDAP option, if only for anything doing file o

[ceph-users] Re: Latest stable version of Ceph

2025-06-30 Thread gagan tiwari
Thanks, guys, for your help. I will go for v19.2.2 Squid. One thing I want to know is whether the Ceph MDS is still single-threaded in the latest v19.2.2 Squid. If so, is there any plan to make it multithreaded in a new release to take full advantage of multicore servers? Th

[ceph-users] Re: orchestrator behaving strangely

2025-06-30 Thread Holger Naundorf
Thanks Eugen and Frédéric for the hints. Currently I am still fishing for reasons, but it seems that the OSD has been removed while the 'purge' did not finish completely. ceph -s just shows scrubs: # ceph -s cluster: id: e776dd57-7fc6-11ee-9f23-9bb83aca7b4b health: HEALTH_OK
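If the purge really was left half-done, the leftovers can be checked and, if needed, cleaned up manually (a sketch using OSD 406 from this thread; only purge once the orchestrator queue is empty):

    ceph orch osd rm status             # should no longer list osd.406
    ceph osd tree | grep -w 406         # is the OSD still in the CRUSH map?
    ceph osd purge 406 --yes-i-really-mean-it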

[ceph-users] Re: Suspiciously low PG count for CephFS with many small files

2025-06-30 Thread Anthony D'Atri
> On Jun 30, 2025, at 9:53 AM, Niklas Hambüchen wrote: > >> Is this cluster serving RGW? RBD? CephFS? Those pool names are unusual. > > Just CephFS. > I named the pools this way, following > https://docs.ceph.com/en/reef/cephfs/createfs/#creating-a-file-system That I think shows names like

[ceph-users] Re: Latest stable version of Ceph

2025-06-30 Thread Michel Jouvin
Gagan, I am not sure I understand your question. But if you have a (potential) performance issue, have you read https://docs.ceph.com/en/latest/cephfs/multimds/. The mainstream Ceph approach for improving performance, if you have multiple clients/apps, is to run multiple MDS daemons for your file system. M
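For reference, a sketch of how multi-MDS is usually enabled on a cephadm cluster (filesystem name and counts are placeholders):

    ceph fs set <fs_name> max_mds 2                    # two active ranks
    ceph orch apply mds <fs_name> --placement="3"      # keep a standby available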

[ceph-users] Re: Suspiciously low PG count for CephFS with many small files

2025-06-30 Thread Niklas Hambüchen
(Adding back the list) https://docs.clyso.com/docs/kb/rados/#osd---improved-procedure-for-adding-hosts-or-osds Is the upstream source for `upmap-remapped.py` the `ceph-scripts` repo? If yes, is it know

[ceph-users] Re: Suspiciously low PG count for CephFS with many small files

2025-06-30 Thread Niklas Hambüchen
Is this cluster serving RGW? RBD? CephFS? Those pool names are unusual. Just CephFS. I named the pools this way, following https://docs.ceph.com/en/reef/cephfs/createfs/#creating-a-file-system I suggested 250. Yes, but it is actually great to have even fewer objects per PG, because then I'

[ceph-users] CephFS with Ldap

2025-06-30 Thread gagan tiwari
Hi guys, We have an LDAP server with all users' login details. We have to mount data stored in Ceph on several client nodes via CephFS so that users can access that data and start using it in their processes. But we need to grant permission/ownership to users to enable th
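CephFS itself only sees POSIX uid/gid, so as long as the clients resolve the same LDAP users (e.g. via sssd/nslcd), ordinary ownership and chmod/chown apply. A minimal sketch with placeholder names:

    ceph fs authorize cephfs client.labdata /data rw          # cap restricted to the shared subtree
    mount -t ceph mon1:6789:/data /mnt/data -o name=labdata,secretfile=/etc/ceph/labdata.secret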

[ceph-users] Re: CephFS with Ldap

2025-06-30 Thread Ryan Sleeth
Related to LDAP, this new implementation of an SMB manager was mentioned on the subreddit: https://ceph.io/en/news/blog/2025/smb-manager-module/ It appears to include functionality to directly interact with AD. Not sure if it would work with more complex smb.conf needs. On Mon, Jun 30, 2025 at 2:3

[ceph-users] Re: CephFS with Ldap

2025-06-30 Thread Burkhard Linke
Hi, On 30.06.25 18:26, gagan tiwari wrote: Hi Guys, We have a Ldap server with all users login details. We have to mount data stored in Ceph to several client nodes via CephFS so that users can access that data and start using that data in their processes. But we need to gra

[ceph-users] Re: RADOS Gateway IAM for Production Environments ( reef to squid upgrade )

2025-06-30 Thread Stefan Kooman
Hi Casey, On 6/30/25 20:35, Casey Bodley wrote: the user account and IAM features are fully supported in squid - not "experimental" or "tech preview" That's good to know. In this blog post [1], however, the very beginning states: "Efficient multitenant environment management is critical in

[ceph-users] Re: Failure to build on a devuan.

2025-06-30 Thread robin hammond
Just pulled from git, same error: Performing C SOURCE FILE Test HAVE_STAT_ST_MTIMESPEC_TV_NSEC failed with the following output: Change Dir: /usr/local/src/foo/ceph/build/CMakeFiles/CMakeScratch/TryCompile-ElG6T9 Run Build Command(s):/usr/bin/ninja cmTC_00cc2 && [1/2] Building C object CMak

[ceph-users] Loki retention policy missing?

2025-06-30 Thread Stefan Kooman
Hi, Yesterday we got a disk space warning notification on one of the monitor nodes. Most of the space was consumed by a running Loki container. Looking into online documentation / trackers to find info on Ceph's default Loki retention policy left me empty-handed. I did find some trackers f

[ceph-users] Re: ceph squid - huge difference between capacity reported by "ceph -s" and "ceph df "

2025-06-30 Thread Steven Vacaroaia
Hi Anthony, Appreciate you taking the time to provide so much guidance. All your messages in this mailing list are well documented and VERY helpful. I have attached a text file with the output of the commands you mentioned. You are right, there is no /dev/bluefs_/bluefsstore_bdev pointing to the NV

[ceph-users] Re: Latest stable version of Ceph

2025-06-30 Thread Patrick Begou
Hi, just starting with a new (small) cluster running AlmaLinux 9. I've also deployed 19.2.2 with Cephadm. All went fine. Still not in production; I'm at the benchmarking stage. Patrick On 27/06/2025 at 18:15, Michel Jouvin wrote: Ho, If you're starting a new cluster I'd advise to start wi