[ceph-users] How to call cephfs-top

2023-04-28 Thread E Taka
I'm using a dockerized Ceph 17.2.6 under Ubuntu 22.04. Presumably I'm missing a very basic thing, since this seems a very simple question: how can I call cephfs-top in my environment? It is not included in the Docker image which is accessed by "cephadm shell". And calling the version found in
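For anyone hitting the same question: cephfs-top reads its performance counters from the MGR "stats" module, so that module has to be enabled before the tool shows anything. A minimal sketch, assuming the admin keyring is available inside the cephadm shell:

    # enable the MGR stats module that cephfs-top queries
    ceph mgr module enable stats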

[ceph-users] Re: import OSD after host OS reinstallation

2023-04-28 Thread Eugen Block
I found a small two-node cluster to test this on Pacific, and I can reproduce it. After reinstalling the host (VM), most of the other services are redeployed (mon, mgr, mds, crash), but not the OSDs. I will take a closer look. Quoting Tony Liu: Tried [1] already, but got an error. Created no

[ceph-users] Re: cephfs - max snapshot limit?

2023-04-28 Thread Milind Changire
FYI, PR - https://github.com/ceph/ceph/pull/51278 On Fri, Apr 28, 2023 at 8:49 AM Milind Changire wrote: > There's a default/hard limit of 50 snaps that's maintained for any dir via > the definition MAX_SNAPS_PER_PATH = 50 in the source file > src/pybind/mgr/snap_schedule/fs/schedule_client.py.

[ceph-users] Re: Help needed to configure erasure coding LRC plugin

2023-04-28 Thread Michel Jouvin
Hi, I think I found a possible cause of my PG down but still don't understand why. As explained in a previous mail, I set up a 15-chunk/OSD EC pool (k=9, m=6) but I have only 12 OSD servers in the cluster. To work around the problem I defined the failure domain as 'osd' with the reasoning that as I
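For reference, a profile along those lines can be created with crush-failure-domain=osd; a minimal sketch with hypothetical profile and pool names (a real LRC setup additionally takes plugin=lrc plus locality parameters such as l= and crush-locality=, which are omitted here):

    # hypothetical profile: 9 data + 6 coding chunks, chunks placed per OSD
    ceph osd erasure-code-profile set ec-k9-m6 k=9 m=6 crush-failure-domain=osd
    # hypothetical pool using that profile
    ceph osd pool create ecpool 128 128 erasure ec-k9-m6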

[ceph-users] Re: cephfs - max snapshot limit?

2023-04-28 Thread Venky Shankar
Hi Tobias, On Thu, Apr 27, 2023 at 2:42 PM Tobias Hachmer wrote: > > Hi sur5r, > > On 4/27/23 at 10:33, Jakob Haufe wrote: > > On Thu, 27 Apr 2023 09:07:10 +0200 > > Tobias Hachmer wrote: > > > >> But we observed that a maximum of 50 snapshots are preserved. If a new snapshot is > >> created the
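A quick way to see whether a directory has hit that retention count is to list its hidden .snap directory; a sketch, assuming the filesystem is mounted at /mnt/cephfs (a hypothetical mount point):

    # count the snapshots currently kept for a directory
    ls /mnt/cephfs/foo/.snap | wc -l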

[ceph-users] Re: import OSD after host OS reinstallation

2023-04-28 Thread Eugen Block
Hi, Not sure what's missing. Should the OSD be removed, removed with --replace, or left untouched before the host reinstallation? If you want to reuse the existing OSDs, why would you remove them? That's the whole point of reusing them after reinstallation. Tried [1] already, but got an error. Created no

[ceph-users] Re: cephfs - max snapshot limit?

2023-04-28 Thread MARTEL Arnaud
Hi Venky, > Also, at one point the kclient wasn't able to handle more than 400 snapshots > (per file system), but we have come a long way from that and that is not a > constraint right now. Does that mean there is no longer a limit on the number of snapshots per filesystem? And, if not, do you
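Separate from the snap_schedule retention cap, the MDS enforces its own per-directory limit via the mds_max_snaps_per_dir option (default 100 in recent releases). A sketch of how to inspect and raise it, with 150 as a purely illustrative value:

    # show the per-directory snapshot limit the MDS currently enforces
    ceph config get mds mds_max_snaps_per_dir
    # raise it, e.g. to 150 (hypothetical value)
    ceph config set mds mds_max_snaps_per_dir 150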

[ceph-users] Re: architecture help (iscsi, rbd, backups?)

2023-04-28 Thread Maged Mokhtar
Hello Angelo, You can try PetaSAN (www.petasan.org). It supports scale-out iSCSI with Ceph and is actively developed. /Maged On 27/04/2023 23:05, Angelo Höngens wrote: Hey guys and girls, I'm working on a project to build storage for one of our departments, and I want to ask you guys and girls

[ceph-users] Re: import OSD after host OS reinstallation

2023-04-28 Thread Tony Liu
Thank you Eugen for looking into it! In short, it works. I'm using 16.2.10. What I did wrong was to remove the OSD, which makes no sense. Tony

[ceph-users] Re: How can I use not-replicated pool (replication 1 or raid-0)

2023-04-28 Thread mhnx
Hello Janne, thank you for your response. I understand your advice, and believe me, I've designed plenty of EC pools and I know the mess. This is not an option because I need SPEED. Let me describe my hardware first so we share the same picture. Server: R620 Cpu: 2 x Xeon E5-2630 v2 @ 2.60GHz
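For completeness, recent releases deliberately make a replica-1 pool hard to create; a sketch of the required overrides, with a hypothetical pool name, keeping in mind that any single OSD failure loses the data:

    # allow size=1 pools cluster-wide (disabled by default since Octopus)
    ceph config set global mon_allow_pool_size_one true
    # hypothetical pool with a single replica
    ceph osd pool create fastpool 128 128 replicated
    ceph osd pool set fastpool size 1 --yes-i-really-mean-it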

[ceph-users] Re: cephfs - max snapshot limit?

2023-04-28 Thread Milind Changire
If a dir doesn't exist at the moment of snapshot creation, then the schedule is deactivated for that dir. On Fri, Apr 28, 2023 at 8:39 PM Jakob Haufe wrote: > On Thu, 27 Apr 2023 11:10:07 +0200 > Tobias Hachmer wrote: > > > > Given the limitation is per directory, I'm currently trying this:
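The snap-schedule module can show whether a schedule has been deactivated and lets you re-activate it once the directory exists again; a sketch, using the /foo path from the thread:

    # inspect the schedule and its active flag for a path
    ceph fs snap-schedule status /foo
    # re-activate it once the directory exists again
    ceph fs snap-schedule activate /foo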

[ceph-users] Re: cephfs - max snapshot limit?

2023-04-28 Thread Jakob Haufe
> FYI, PR - https://github.com/ceph/ceph/pull/51278 Thanks! I just applied this to my cluster and will report back. Looks simple enough, tbh. Cheers, sur5r -- ceterum censeo microsoftem esse delendam.

[ceph-users] Re: cephfs - max snapshot limit?

2023-04-28 Thread Jakob Haufe
On Thu, 27 Apr 2023 11:10:07 +0200 Tobias Hachmer wrote: > > Given the limitation is per directory, I'm currently trying this: > > > > / 1d 30d > > /foo 1h 48h > > /bar 1h 48h > > > > I forgot to activate the new schedules yesterday so I can't say whether > > it works as expected yet.
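For readers following along, the schedules quoted above map onto the snap-schedule CLI roughly like this (a sketch; the paths are the ones named in the thread):

    # snapshot / daily and keep 30 daily snapshots
    ceph fs snap-schedule add / 1d
    ceph fs snap-schedule retention add / d 30
    # snapshot /foo hourly and keep 48 hourly snapshots
    ceph fs snap-schedule add /foo 1h
    ceph fs snap-schedule retention add /foo h 48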

[ceph-users] Re: How to call cephfs-top

2023-04-28 Thread Jos Collin
On 28/04/23 13:51, E Taka wrote: I'm using a dockerized Ceph 17.2.6 under Ubuntu 22.04. Presumably I'm missing a very basic thing, since this seems a very simple question: how can I call cephfs-top in my environment? It is not included in the Docker image which is accessed by "cephadm
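Since the containers shipped by cephadm don't bundle the tool, one workable approach is to install the cephfs-top package on the host from the Ceph repositories and give it its own client key; a sketch, assuming the stats module is already enabled and the default client.fstop naming:

    # on the Ubuntu host, from the Ceph apt repository
    apt install cephfs-top
    # credentials cephfs-top connects with by default
    ceph auth get-or-create client.fstop mon 'allow r' mds 'allow r' osd 'allow r' mgr 'allow r' > /etc/ceph/ceph.client.fstop.keyring
    cephfs-top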

[ceph-users] Re: Lua scripting in the rados gateway

2023-04-28 Thread Thomas Bennett
Hey Yuval, No problem. It was interesting to me to figure out how it all fits together and works. Thanks for opening an issue on the tracker. Cheers, Tom On Thu, 27 Apr 2023 at 15:03, Yuval Lifshitz wrote: > Hi Thomas, > Thanks for the detailed info! > RGW lua scripting was never tested in a
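For anyone else wiring this up, the scripts are managed through radosgw-admin; a minimal sketch with a trivial script that only writes to the RGW debug log (the file name and chosen context are just examples):

    # hypothetical one-line Lua script
    cat > script.lua <<'EOF'
    RGWDebugLog("preRequest hook fired")
    EOF
    # upload it so it runs before each request is processed
    radosgw-admin script put --infile=./script.lua --context=preRequest
    # confirm what is currently installed for that context
    radosgw-admin script get --context=preRequest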

[ceph-users] Re: For suggestions and best practices on expanding Ceph cluster and removing old nodes

2023-04-28 Thread Thomas Bennett
A pleasure. Hope it helps :) Happy to share if you need any more information, Zac. Cheers, Tom On Wed, 26 Apr 2023 at 18:14, Dan van der Ster wrote: > Thanks Tom, this is a very useful post! > I've added our docs guy Zac in cc: IMHO this would be useful in a > "Tips & Tricks" section of the

[ceph-users] Re: import OSD after host OS reinstallation

2023-04-28 Thread Eugen Block
I chatted with Mykola, who helped me get the OSDs back up. My test cluster was on 16.2.5 (and mostly still is); after upgrading only the MGRs to a more recent version (16.2.10), the activate command worked successfully and the existing OSDs came back up. Not sure if that's a bug or something
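For the archives, the activate step referred to here is the cephadm one that scans a reinstalled host for existing LVM-based OSDs and recreates their daemons; a sketch, with the host name as a placeholder:

    # after the host has been re-added to the cluster and has the cephadm SSH key
    ceph cephadm osd activate <hostname>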