[ceph-users] Re: Latest Doco Out Of Date?

2024-04-23 Thread Zac Dover
It's on my list of ongoing initiatives. I'll stay up late tonight and ask Venky directly what's going on in this instance. Sometime later today I'll create a tracker issue and send it to you for review, so you can make sure that I haven't misrepresented the issue. Zac On Wednesday, April

[ceph-users] Re: Latest Doco Out Of Date?

2024-04-23 Thread duluxoz
Hi Zac, Any movement on this? We really need to come up with an answer/solution - thanks Dulux-Oz On 19/04/2024 18:03, duluxoz wrote: Cool! Thanks for that  :-) On 19/04/2024 18:01, Zac Dover wrote: I think I understand, after more thought. The second command is expected to work after

[ceph-users] Orchestrator not automating services / OSD issue

2024-04-23 Thread Michael Baer
Hi, This problem started with trying to add a new storage server into a quincy v17.2.6 ceph cluster. Whatever I did, I could not add the drives on the new host as OSDs: via dashboard, via cephadm shell, by setting osd unmanaged to false. But what I started realizing is that orchestrator will
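
For anyone landing here with a similar symptom, a few generic orchestrator checks usually help narrow down why cephadm is not consuming new drives (a sketch only, nothing here is specific to this cluster):

    ceph orch host ls                 # is the new host registered, with the expected labels?
    ceph orch device ls --wide        # are its drives listed, and marked available?
    ceph orch ls osd --export         # which OSD specs exist, and are any set to unmanaged?
    ceph log last cephadm             # recent cephadm/orchestrator errors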

[ceph-users] Re: rbd-mirror failed to query services: (13) Permission denied

2024-04-23 Thread Stefan Kooman
On 23-04-2024 17:44, Ilya Dryomov wrote: On Mon, Apr 22, 2024 at 7:45 PM Stefan Kooman wrote: Hi, We are testing rbd-mirroring. There seems to be a permission error with the rbd-mirror user. Using this user to query the mirror pool status gives: failed to query services: (13) Permission

[ceph-users] List of bridges irc/slack/discord

2024-04-23 Thread Alvaro Soto
(Last update) - https://github.com/orgs/opensource-latinamerica/discussions/3 ~~~ Adding a few unofficial/unregistered Ceph IRC channels (cephadm, crimson) IRC -> slack.oss.lat OFTC: starlingx -> slack: starlingx OFTC: openstack-latinamerica -> slack: stack-latinamerica OFTC: openstack-freezer

[ceph-users] Re: Best practice and expected benefits of using separate WAL and DB devices with Bluestore

2024-04-23 Thread Anthony D'Atri
> On Apr 23, 2024, at 12:24, Maged Mokhtar wrote: > > For nvme:HDD ratio, yes you can go for 1:10, or if you have extra slots you can use 1:5 with smaller-capacity/cheaper nvmes; this will reduce the impact of nvme failures. On occasion I've seen a suggestion to mirror the fast

[ceph-users] Re: Best practice and expected benefits of using separate WAL and DB devices with Bluestore

2024-04-23 Thread Maged Mokhtar
On 19/04/2024 11:02, Niklaus Hofer wrote: Dear all We have an HDD ceph cluster that could do with some more IOPS. One solution we are considering is installing NVMe SSDs into the storage nodes and using them as WAL- and/or DB devices for the Bluestore OSDs. However, we have some questions
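
For illustration, the split discussed in this thread is usually expressed as a cephadm OSD service spec that pairs rotational data devices with non-rotational DB devices (a sketch; service_id and placement are placeholders, and the WAL co-locates with the DB unless wal_devices is also specified):

    cat > osd-spec.yaml <<'EOF'
    service_type: osd
    service_id: hdd-with-nvme-db
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        rotational: 1      # HDDs hold the data
      db_devices:
        rotational: 0      # NVMe/SSD devices carry the DB (and WAL)
    EOF
    ceph orch apply -i osd-spec.yaml --dry-run   # preview the layout before applying for real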

[ceph-users] Re: which grafana version to use with 17.2.x ceph version

2024-04-23 Thread Adam King
FWIW, cephadm uses `quay.io/ceph/ceph-grafana:9.4.7` as the default grafana image in the quincy branch On Tue, Apr 23, 2024 at 11:59 AM Osama Elswah wrote: > Hi, > > > in quay.io I can find a lot of grafana versions for ceph ( > https://quay.io/repository/ceph/grafana?tab=tags) how can I find
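
For reference, the image cephadm deploys is controlled by a mgr config key, so it can be checked (and, if really necessary, pinned) instead of guessing from the quay.io tag list; a sketch, with the tag used purely as an example:

    ceph config get mgr mgr/cephadm/container_image_grafana   # what cephadm will deploy
    ceph config set mgr mgr/cephadm/container_image_grafana quay.io/ceph/ceph-grafana:9.4.7
    ceph orch redeploy grafana                                 # roll the daemon onto the new image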

[ceph-users] which grafana version to use with 17.2.x ceph version

2024-04-23 Thread Osama Elswah
Hi, in quay.io I can find a lot of grafana versions for ceph (https://quay.io/repository/ceph/grafana?tab=tags). How can I find out which version should be used when I upgrade my cluster to 17.2.x? Can I simply take the latest grafana version? Or is there a specific grafana version I need to

[ceph-users] Re: rbd-mirror failed to query services: (13) Permission denied

2024-04-23 Thread Ilya Dryomov
On Mon, Apr 22, 2024 at 7:45 PM Stefan Kooman wrote: > > Hi, > > We are testing rbd-mirroring. There seems to be a permission error with > the rbd-mirror user. Using this user to query the mirror pool status gives: > > failed to query services: (13) Permission denied > > And results in the
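
For context, the documented way to create a dedicated rbd-mirror user is via the auth profiles below ("site-a" and the pool name "rbd" are placeholders); whether those profiles are supposed to cover the service query that fails with EACCES here is essentially what this thread is trying to establish:

    ceph auth get-or-create client.rbd-mirror.site-a \
        mon 'profile rbd-mirror' osd 'profile rbd'
    rbd mirror pool status --id rbd-mirror.site-a rbd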

[ceph-users] Re: Status of IPv4 / IPv6 dual stack?

2024-04-23 Thread Anthony D'Atri
Sounds like an opportunity for you to submit an expansive code PR to implement it. > On Apr 23, 2024, at 04:28, Marc wrote: > >> I have removed dual-stack-mode-related information from the documentation >> on the assumption that dual-stack mode was planned but never fully >> implemented. >>

[ceph-users] cache pressure?

2024-04-23 Thread Erich Weiler
So I'm trying to figure out ways to reduce the number of warnings I'm getting, and I'm thinking about the one "client failing to respond to cache pressure". Is there maybe a way to tell a client (or all clients) to reduce the amount of cache it uses, or to release caches quickly? Like, all the
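
Not an answer to the client-side part of the question, but for reference these are the MDS-side knobs most often mentioned around this warning; a sketch only, the values are illustrative rather than recommendations and defaults vary by release:

    ceph config set mds mds_cache_memory_limit 8589934592   # MDS cache target (8 GiB here)
    ceph config set mds mds_recall_max_caps 30000            # caps recalled per client per tick
    ceph config set mds mds_recall_max_decay_rate 1.5        # throttle decay for cap recall
    ceph tell mds.<fs_name>:0 cache drop <timeout>            # one-off cache/caps trim on one rank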

[ceph-users] Re: Why CEPH is better than other storage solutions?

2024-04-23 Thread Frédéric Nass
Exactly, strong consistency is why we chose Ceph over other SDS solutions back in 2014 (and disabled any non-persistent cache along the IO path, like HDD disk cache). A major power outage in our town a few years back (a few days before Christmas) and a UPS malfunction have proven us right.

[ceph-users] Re: stretched cluster new pool and second pool with nvme

2024-04-23 Thread Stefan Kooman
On 23-04-2024 14:40, Eugen Block wrote: Hi, what's the right way to add another pool? Create a pool with 4/2 and use the rule for the stretch mode, finished? The existing pools were automatically set to 4/2 after "ceph mon enable_stretch_mode". It should be that simple. However, it does not

[ceph-users] Re: stretched cluster new pool and second pool with nvme

2024-04-23 Thread Eugen Block
Hi, what's the right way to add another pool? Create a pool with 4/2 and use the rule for the stretch mode, finished? The existing pools were automatically set to 4/2 after "ceph mon enable_stretch_mode". If that is what you require, then yes, it's as easy as that. Although I haven't played
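
In other words, roughly the following (pool and rule names are placeholders; the CRUSH rule is whatever was created for the stretch-mode setup):

    ceph osd pool create newpool
    ceph osd pool set newpool crush_rule stretch_rule
    ceph osd pool set newpool size 4
    ceph osd pool set newpool min_size 2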

[ceph-users] Re: Why CEPH is better than other storage solutions?

2024-04-23 Thread Brett Niver
Well said! Brett On Tue, Apr 23, 2024 at 7:05 AM Janne Johansson wrote: > On Tue, 23 Apr 2024 at 11:32, Frédéric Nass wrote: > > Ceph is strongly consistent. Either you read/write objects/blocks/files with an assured strong consistency OR you don't. Worst thing you can expect from Ceph,

[ceph-users] Re: Why CEPH is better than other storage solutions?

2024-04-23 Thread Janne Johansson
On Tue, 23 Apr 2024 at 11:32, Frédéric Nass wrote: > Ceph is strongly consistent. Either you read/write objects/blocks/files with an assured strong consistency OR you don't. Worst thing you can expect from Ceph, as long as it's been properly designed, configured and operated, is a

[ceph-users] Re: Stuck in replay?

2024-04-23 Thread David Yang
Hi Erich, When mds cache usage is very high, recovery is very slow. So I use this command to drop the mds cache: ceph tell mds.* cache drop 600 Lars Köppel wrote on Tue, 23 Apr 2024 at 16:36: > > Hi Erich, > > great that you recovered from this. > It sounds like you had the same problem I had a few months ago. > mds

[ceph-users] s3 bucket policy subusers - access denied

2024-04-23 Thread sinan
I want to achieve the following: - Create a user - Create 2 subusers - Create 2 buckets - Apply a policy for each bucket - A subuser should only have access to its own bucket Problem: Getting a 403 AccessDenied with subuser credentials when uploading files. I did the following:
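
A generic sketch of the kind of setup described, for readers of the archive; every name, key, endpoint and in particular the subuser principal ARN below is an assumption, and whether RGW actually honours a subuser as a bucket-policy principal may well be the crux of the 403:

    radosgw-admin user create --uid=parent --display-name="Parent user"
    radosgw-admin subuser create --uid=parent --subuser=parent:sub1 \
        --key-type=s3 --access=full --gen-access-key --gen-secret
    cat > policy1.json <<'EOF'
    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": ["arn:aws:iam:::user/parent:sub1"]},
        "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject"],
        "Resource": ["arn:aws:s3:::bucket1", "arn:aws:s3:::bucket1/*"]
      }]
    }
    EOF
    aws --endpoint-url http://rgw.example.com s3api put-bucket-policy \
        --bucket bucket1 --policy file://policy1.json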

[ceph-users] Re: rbd-mirror failed to query services: (13) Permission denied

2024-04-23 Thread Eugen Block
I'm not entirely sure if I ever tried it with the rbd-mirror user instead of the admin user, but I see the same error message on 17.2.7. I assume that it's not expected; I think a tracker issue makes sense. Thanks, Eugen Quoting Stefan Kooman: Hi, We are testing rbd-mirroring. There seems

[ceph-users] Re: Why CEPH is better than other storage solutions?

2024-04-23 Thread Frédéric Nass
Hello, My turn ;-) Ceph is strongly consistent. Either you read/write objects/blocks/files with an assured strong consistency OR you don't. Worst thing you can expect from Ceph, as long as it's been properly designed, configured and operated, is a temporary loss of access to the data. There

[ceph-users] Re: Stuck in replay?

2024-04-23 Thread Lars Köppel
Hi Erich, great that you recovered from this. It sounds like you had the same problem I had a few months ago. mds crashes after up:replay state - ceph-users - lists.ceph.io

[ceph-users] Re: Status of IPv4 / IPv6 dual stack?

2024-04-23 Thread Marc
> I have removed dual-stack-mode-related information from the documentation > on the assumption that dual-stack mode was planned but never fully > implemented. > > See https://tracker.ceph.com/issues/65631. > > See https://github.com/ceph/ceph/pull/57051. > > Hat-tip to Dan van der Ster, who

[ceph-users] Re: Status of IPv4 / IPv6 dual stack?

2024-04-23 Thread Zac Dover
I have removed dual-stack-mode-related information from the documentation on the assumption that dual-stack mode was planned but never fully implemented. See https://tracker.ceph.com/issues/65631. See https://github.com/ceph/ceph/pull/57051. Hat-tip to Dan van der Ster, who bumped this thread
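
For context, the knobs that remain are the single-stack ms_bind_* options; a minimal sketch of an IPv6-only configuration (enabling both at once is exactly the dual-stack combination whose support is in question here):

    ceph config set global ms_bind_ipv6 true
    ceph config set global ms_bind_ipv4 false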