[ceph-users] Which RHEL/Fusion/CentOS/Rocky Package Contains cephfs-shell?

2024-03-07 Thread duluxoz
Hi All, The subject pretty much says it all: I need to use cephfs-shell and it's not installed on my Ceph node, and I can't seem to locate which package contains it - help please. :-) Cheers, Dulux-Oz
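A minimal sketch of how the package could be tracked down on an RPM-based distro (RHEL/CentOS/Rocky), assuming dnf and the Ceph repositories are configured; the exact package name may differ between releases:

    # Ask the package manager which package ships the binary
    dnf provides '*/cephfs-shell'
    # Or search the configured repos by name
    dnf search cephfs-shell
    # Recent Ceph releases ship it as its own package
    dnf install cephfs-shell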

[ceph-users] Re: All MGR loop crash

2024-03-07 Thread Eugen Block
Thanks! That's very interesting to know! Quoting "David C.": some monitors have existed for many years (weight 10), others have been added (weight 0) => https://github.com/ceph/ceph/commit/2d113dedf851995e000d3cce136b69bfa94b6fe0 On Thursday, 7 March 2024, Eugen Block wrote: I’m curious

[ceph-users] Re: All MGR loop crash

2024-03-07 Thread Dieter Roels
We ran into this issue last week when upgrading to quincy. We asked ourselves the same question: how did the weight change, as we did not even know that was a thing. We checked our other clusters and we have some where all the mons have a weight of 10, and there it is not an issue.

[ceph-users] rgw dynamic bucket sharding will hang io

2024-03-07 Thread nuabo tan
When a reshard occurs, IO is blocked - why has this serious problem not been solved?
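A sketch of a common workaround, assuming it is acceptable to turn off dynamic resharding and reshard busy buckets manually in a quiet window; the bucket name and shard count below are placeholders:

    # Disable dynamic resharding for the RGW daemons
    ceph config set client.rgw rgw_dynamic_resharding false
    # See which buckets exceed the objects-per-shard limit
    radosgw-admin bucket limit check
    # Reshard a specific bucket manually during a maintenance window
    radosgw-admin bucket reshard --bucket=mybucket --num-shards=101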

[ceph-users] Re: Announcing Ceph Day NYC 2024 - April 26th!

2024-03-07 Thread nuabo tan
Good news!

[ceph-users] Re: ceph-volume fails when adding separate DATA and DATA.DB volumes

2024-03-07 Thread service . plant
Oh, dude! You opened my eyes! I thought (it is written this way in the documentation) that all the commands need to be executed inside the cephadm shell. That is why I always ran 'cephadm shell' first, dropping down into the container env, and then ran all the rest. Where can I read about proper usage of cephadm
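For reference, a short sketch of the two common ways of invoking cephadm on a containerized deployment; both should reach the same cluster:

    # Drop into the container environment, then run commands interactively
    cephadm shell
    ceph -s
    exit
    # Or run a single command through the shell without staying in it
    cephadm shell -- ceph -s
    cephadm shell -- ceph orch ps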

[ceph-users] Announcing Ceph Day NYC 2024 - April 26th!

2024-03-07 Thread Dan van der Ster
Hi everyone, Ceph Days are coming to New York City again this year, co-hosted by Bloomberg Engineering and Clyso! We're planning a full day of Ceph content, well timed to learn about the latest and greatest Squid release. https://ceph.io/en/community/events/2024/ceph-days-nyc/ We're opening

[ceph-users] All MGR loop crash

2024-03-07 Thread David C.
some monitors have existed for many years (weight 10), others have been added (weight 0) => https://github.com/ceph/ceph/commit/2d113dedf851995e000d3cce136b69bfa94b6fe0 On Thursday, 7 March 2024, Eugen Block wrote: > I’m curious how the weights might have been changed. I’ve never touched a > mon

[ceph-users] Re: Remove cluster_network without routing

2024-03-07 Thread Anthony D'Atri
I think heartbeats will fail over to the public network if the private one doesn't work -- it may not have always done that. >> Hi >> Cephadm Reef 18.2.0. >> We would like to remove our cluster_network without stopping the cluster and >> without having to route between the networks. >> global
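A rough sketch of how the setting could be removed via the config database, assuming OSD daemons are then restarted one at a time to pick up the change; the osd.0 daemon name is just an example:

    # Check where cluster_network is currently set
    ceph config dump | grep cluster_network
    # Remove the global setting
    ceph config rm global cluster_network
    # OSDs only pick this up on restart; restart daemons gradually, e.g.
    ceph orch daemon restart osd.0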

[ceph-users] Re: Disable signature url in ceph rgw

2024-03-07 Thread Casey Bodley
anything we can do to narrow down the policy issue here? any of the Principal, Action, Resource, or Condition matches could be failing here. you might try replacing each with a wildcard, one at a time, until you see the policy take effect On Wed, Dec 13, 2023 at 5:04 AM Marc Singer wrote: > > Hi
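A sketch of that narrowing-down approach with the AWS CLI, assuming the policy is meant to deny presigned (query-string) requests; the bucket name, endpoint and condition key are placeholders, and whether RGW honours this particular condition is exactly what is being debugged:

    # Widen one element at a time, e.g. start with a wildcard Principal
    cat > policy.json <<'EOF'
    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": "arn:aws:s3:::testbucket/*",
        "Condition": {"StringEquals": {"s3:authType": "REST-QUERY-STRING"}}
      }]
    }
    EOF
    aws --endpoint-url https://rgw.example.com s3api put-bucket-policy \
        --bucket testbucket --policy file://policy.json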

[ceph-users] Re: Minimum amount of nodes needed for stretch mode?

2024-03-07 Thread Stefan Kooman
On 07-03-2024 18:16, Gregory Farnum wrote: On Thu, Mar 7, 2024 at 9:09 AM Stefan Kooman wrote: Hi, TL;DR Failure domain considered is data center. Cluster in stretch mode [1]. - What is the minimum amount of monitor nodes (apart from tie breaker) needed per failure domain? You need at

[ceph-users] Re: All MGR loop crash

2024-03-07 Thread Eugen Block
I’m curious how the weights might have been changed. I’ve never touched a mon weight myself, do you know how that happened? Quoting "David C.": Ok, got it : [root@pprod-admin:/var/lib/ceph/]# ceph mon dump -f json-pretty |egrep "name|weigh" dumped monmap epoch 14

[ceph-users] Re: All MGR loop crash

2024-03-07 Thread David C.
Ok, got it : [root@pprod-admin:/var/lib/ceph/]# ceph mon dump -f json-pretty |egrep "name|weigh" dumped monmap epoch 14 "min_mon_release_name": "quincy", "name": "pprod-mon2", "weight": 10, "name": "pprod-mon3", "weight": 10, "name":
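A sketch of how the weights could be checked and made consistent again, assuming 'ceph mon set-weight' is available in this release (the default weight is 0); the monitor names are taken from the output above:

    # Show current monitor weights
    ceph mon dump -f json-pretty | egrep '"name"|"weight"'
    # Set the old monitors back to the default weight so all mons match
    ceph mon set-weight pprod-mon2 0
    ceph mon set-weight pprod-mon3 0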

[ceph-users] Re: All MGR loop crash

2024-03-07 Thread David C.
I took the wrong line => https://github.com/ceph/ceph/blob/v17.2.6/src/mon/MonClient.cc#L822 On Thu, 7 Mar 2024 at 18:21, David C. wrote: > > Hello everybody, > > I'm encountering strange behavior on an infrastructure (it's > pre-production but it's very ugly). After a "drain" on a monitor

[ceph-users] Re: Ceph-storage slack access

2024-03-07 Thread Marc
What is this IRC access then? Is there some web client that can be used? Is ceph.io down? I can't get the website, nor a ping. > > The slack workspace is bridged to our also-published irc channels. I > don't think we've done anything to enable xmpp (and two protocols is > enough work to keep

[ceph-users] All MGR loop crash

2024-03-07 Thread David C.
Hello everybody, I'm encountering strange behavior on an infrastructure (it's pre-production but it's very ugly). After a "drain" on a monitor (and a manager), the MGRs all crash on startup: Mar 07 17:06:47 pprod-mon1 ceph-mgr[564045]: mgr ms_dispatch2 standby mgrmap(e 1310) v1 Mar 07 17:06:47

[ceph-users] Re: Minimum amount of nodes needed for stretch mode?

2024-03-07 Thread Gregory Farnum
On Thu, Mar 7, 2024 at 9:09 AM Stefan Kooman wrote: > > Hi, > > TL;DR > > Failure domain considered is data center. Cluster in stretch mode [1]. > > - What is the minimum amount of monitor nodes (apart from tie breaker) > needed per failure domain? You need at least two monitors per site. This

[ceph-users] Minimum amount of nodes needed for stretch mode?

2024-03-07 Thread Stefan Kooman
Hi, TL;DR Failure domain considered is data center. Cluster in stretch mode [1]. - What is the minimum amount of monitor nodes (apart from tie breaker) needed per failure domain? - What is the minimum amount of storage nodes needed per failure domain? - Are device classes supported with
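For context, a rough sketch of a stretch-mode setup along the lines of the documentation, assuming two data centers plus a tiebreaker site and an existing stretch CRUSH rule; all names are placeholders:

    # Place the monitors into their failure domains
    ceph mon set_location mon1 datacenter=dc1
    ceph mon set_location mon2 datacenter=dc1
    ceph mon set_location mon3 datacenter=dc2
    ceph mon set_location mon4 datacenter=dc2
    ceph mon set_location mon5 datacenter=dc3
    # Switch to the connectivity election strategy and enable stretch mode
    ceph mon set election_strategy connectivity
    ceph mon enable_stretch_mode mon5 stretch_rule datacenter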

[ceph-users] Re: Ceph-storage slack access

2024-03-07 Thread Gregory Farnum
The slack workspace is bridged to our also-published irc channels. I don't think we've done anything to enable xmpp (and two protocols is enough work to keep alive!). -Greg On Wed, Mar 6, 2024 at 9:07 AM Marc wrote: > > Is it possible to access this also with xmpp? > > > > > At the very bottom

[ceph-users] Re: erasure-code-lrc Questions regarding repair

2024-03-07 Thread Ansgar Jazdzewski
Hi, I somehow missed your message, thanks for your effort to raise this issue. Ansgar On Tue, 16 Jan 2024 at 10:05, Eugen Block wrote: > > Hi, > > I don't really have an answer, I just wanted to mention that I created > a tracker issue [1] because I believe there's a bug in the LRC
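For context, a minimal sketch of an LRC profile and pool as shown in the documentation; the k/m/l values and failure domain are the documentation's example, not the configuration from the tracker issue:

    # LRC adds local parity chunks so some repairs stay within a subset of OSDs
    ceph osd erasure-code-profile set lrc_example \
        plugin=lrc k=4 m=2 l=3 \
        crush-failure-domain=host
    ceph osd pool create lrc_pool erasure lrc_example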

[ceph-users] Re: Running dedicated RGWs for async tasks

2024-03-07 Thread Konstantin Shalygin
Hi, Yes. You need to turn off the gc and lc threads in the config for your current (client-side) RGWs. Then set up your 'async tasks' RGW without client traffic. No special configuration needed, unless you want to tune the gc and lc settings. k > On 7 Mar 2024, at 13:09, Marc Singer wrote: >
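A sketch of what that split might look like in the config database; the section names are illustrative and assume separate client-facing and background RGW services:

    # Client-facing RGWs: disable background gc and lifecycle threads
    ceph config set client.rgw.frontend rgw_enable_gc_threads false
    ceph config set client.rgw.frontend rgw_enable_lc_threads false
    # The dedicated 'async tasks' RGW keeps the defaults (threads enabled)
    # and is simply left out of the load balancer so it gets no client traffic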

[ceph-users] Re: Remove cluster_network without routing

2024-03-07 Thread Torkil Svensgaard
On 13/02/2024 13:31, Torkil Svensgaard wrote: Hi Cephadm Reef 18.2.0. We would like to remove our cluster_network without stopping the cluster and without having to route between the networks. global  advanced  cluster_network  192.168.100.0/24  * global

[ceph-users] Running dedicated RGWs for async tasks

2024-03-07 Thread Marc Singer
Hello Ceph Users Since we are running a big S3 cluster we would like to externalize the RGW daemons that do async tasks, like: * Garbage collection * Lifecycle policies * Calculating and updating quotas Would this be possible to do in the configuration? Which config values would I need

[ceph-users] Re: Unable to map RBDs after running pg-upmap-primary on the pool

2024-03-07 Thread Torkil Svensgaard
On 07/03/2024 08:52, Torkil Svensgaard wrote: Hi I tried to do offline read optimization[1] this morning but I am now unable to map the RBDs in the pool. I did this prior to running the pg-upmap-primary commands suggested by the optimizer, as suggested by the latest documentation[2]: "
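A sketch of how the optimizer's changes could be rolled back if older clients cannot handle them; the pgid is a placeholder and the exact field name in the osdmap dump may vary by release:

    # Inspect the osdmap for pg-upmap-primary entries
    ceph osd dump | grep pg_upmap_primar
    # Remove them one PG at a time until the RBDs map again
    ceph osd rm-pg-upmap-primary 2.0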

[ceph-users] Re: Ceph is constantly scrubbing 1/4 of all PGs and still has PGs not scrubbed in time

2024-03-07 Thread Eugen Block
Are the scrubs eventually reported as "scrub ok" in the OSD logs? How long do the scrubs take? Do you see updated timestamps in the 'ceph pg dump' output (column DEEP_SCRUB_STAMP)? Quoting thymus_03fumb...@icloud.com: I recently switched from 16.2.x to 18.2.x and migrated to cephadm,
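A sketch of those checks on the CLI, assuming jq is available; the field names are as in recent releases:

    # Scrub timestamps per PG, to see whether deep scrubs are progressing
    ceph pg dump pgs -f json 2>/dev/null | \
        jq -r '.pg_stats[] | [.pgid, .last_scrub_stamp, .last_deep_scrub_stamp] | @tsv'
    # PGs currently flagged as overdue
    ceph health detail | grep -i 'not deep-scrubbed\|not scrubbed'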