It's in my list of ongoing initiatives. I'll stay up late tonight and ask Venky
directly what's going on in this instance.
Sometime later today, I'll create a tracker issue for this bug and I'll send it to you
for review. Make sure that I haven't misrepresented this issue.
Zac
On Wednesday, April
Hi Zac,
Any movement on this? We really need to come up with an answer/solution
- thanks
Dulux-Oz
On 19/04/2024 18:03, duluxoz wrote:
Cool!
Thanks for that :-)
On 19/04/2024 18:01, Zac Dover wrote:
I think I understand, after more thought. The second command is
expected to work after
Hi,
This problem started with trying to add a new storage server into a
quincy v17.2.6 ceph cluster. Whatever I did, I could not add the drives
on the new host as OSDs: via dashboard, via cephadm shell, by setting
osd unmanaged to false.
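For reference, the usual cephadm routes for this look roughly like the following (hostname and device path are just placeholders):
ceph orch device ls newhost                   # confirm the drives are detected and listed as available
ceph orch daemon add osd newhost:/dev/sdb     # add a single drive explicitly
ceph orch apply osd --all-available-devices   # or let the orchestrator consume every free drive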
But what I started to realize is that the orchestrator will
On 23-04-2024 17:44, Ilya Dryomov wrote:
On Mon, Apr 22, 2024 at 7:45 PM Stefan Kooman wrote:
Hi,
We are testing rbd-mirroring. There seems to be a permission error with
the rbd-mirror user. Using this user to query the mirror pool status gives:
failed to query services: (13) Permission
(Last update) -
https://github.com/orgs/opensource-latinamerica/discussions/3
~~~
Adding a few unofficial/unregistered Ceph IRC channels (cephadm, crimson)
IRC -> slack.oss.lat
OFTC: starlingx -> slack: starlingx
OFTC: openstack-latinamerica -> slack: stack-latinamerica
OFTC: openstack-freezer
> On Apr 23, 2024, at 12:24, Maged Mokhtar wrote:
>
> For the NVMe:HDD ratio, yes, you can go for 1:10, or if you have extra slots you
> can use 1:5 with smaller-capacity/cheaper NVMes; this will reduce the impact
> of NVMe failures.
On occasion I've seen a suggestion to mirror the fast
On 19/04/2024 11:02, Niklaus Hofer wrote:
Dear all
We have an HDD ceph cluster that could do with some more IOPS. One
solution we are considering is installing NVMe SSDs into the storage
nodes and using them as WAL- and/or DB devices for the Bluestore OSDs.
However, we have some questions
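For a rough idea of what that layout looks like when an OSD is created manually (device names below are examples only):
ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/nvme0n1p1
With cephadm, the same split is usually expressed in an OSD service spec using data_devices/db_devices filters.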
FWIW, cephadm uses `quay.io/ceph/ceph-grafana:9.4.7` as the default grafana
image in the quincy branch
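If you need to check or pin the image cephadm deploys, the relevant config option is the following (the value shown is just the quincy default mentioned above):
ceph config get mgr mgr/cephadm/container_image_grafana
ceph config set mgr mgr/cephadm/container_image_grafana quay.io/ceph/ceph-grafana:9.4.7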
On Tue, Apr 23, 2024 at 11:59 AM Osama Elswah wrote:
> Hi,
>
>
> in quay.io I can find a lot of grafana versions for ceph (
> https://quay.io/repository/ceph/grafana?tab=tags) how can I find
Hi,
in quay.io I can find a lot of grafana versions for ceph
(https://quay.io/repository/ceph/grafana?tab=tags) how can I find out which
version should be used when I upgrade my cluster to 17.2.x? Can I simply take
the latest grafana version? Or is there a specific grafana version I need to
On Mon, Apr 22, 2024 at 7:45 PM Stefan Kooman wrote:
>
> Hi,
>
> We are testing rbd-mirroring. There seems to be a permission error with
> the rbd-mirror user. Using this user to query the mirror pool status gives:
>
> failed to query services: (13) Permission denied
>
> And results in the
Sounds like an opportunity for you to submit an expansive code PR to implement
it.
> On Apr 23, 2024, at 04:28, Marc wrote:
>
>> I have removed dual-stack-mode-related information from the documentation
>> on the assumption that dual-stack mode was planned but never fully
>> implemented.
>>
So I'm trying to figure out ways to reduce the number of warnings I'm
getting and I'm thinking about the one "client failing to respond to
cache pressure".
Is there maybe a way to tell a client (or all clients) to reduce the
amount of cache it uses or to release caches quickly? Like, all the
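A few knobs that usually come up in this context (the values are examples, not recommendations):
ceph config set mds mds_cache_memory_limit 8589934592   # 8 GiB MDS cache target
ceph config set mds mds_recall_max_caps 30000           # recall caps from clients more aggressively
ceph tell mds.* cache drop                               # ask the MDS and its clients to trim caches now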
Exactly. Strong consistency is why we chose Ceph over other SDS solutions back
in 2014 (and disabled any non-persistent cache along the IO path, like HDD disk
cache).
A major power outage in our town a few years back (a few days before Christmas)
and a UPS malfunction proved us right.
On 23-04-2024 14:40, Eugen Block wrote:
Hi,
What's the right way to add another pool?
Create a pool with 4/2 and use the rule for stretch mode, finished?
The existing pools were automatically set to 4/2 after "ceph mon
enable_stretch_mode".
It should be that simple. However, it does not
Hi,
What's the right way to add another pool?
Create a pool with 4/2 and use the rule for stretch mode, finished?
The existing pools were automatically set to 4/2 after "ceph mon
enable_stretch_mode".
If that is what you require, then yes, it's as easy as that. Although
I haven't played
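A minimal sketch of that, assuming the stretch rule created when enabling stretch mode is called stretch_rule (adjust the name to whatever your cluster uses):
ceph osd pool create newpool
ceph osd pool set newpool crush_rule stretch_rule
ceph osd pool set newpool size 4
ceph osd pool set newpool min_size 2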
Well said!
Brett
On Tue, Apr 23, 2024 at 7:05 AM Janne Johansson wrote:
> On Tue, Apr 23, 2024 at 11:32, Frédéric Nass wrote:
> > Ceph is strongly consistent. Either you read/write objects/blocks/files
> > with assured strong consistency OR you don't. Worst thing you can expect
> > from Ceph,
On Tue, Apr 23, 2024 at 11:32, Frédéric Nass wrote:
> Ceph is strongly consistent. Either you read/write objects/blocks/files with
> assured strong consistency OR you don't. Worst thing you can expect from
> Ceph, as long as it's been properly designed, configured and operated, is a
>
Hi Erich
When MDS cache usage is very high, recovery is very slow.
So I use this command to drop the MDS cache:
ceph tell mds.* cache drop 600
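To see how much cache the MDS actually holds before and after, something like this helps (the memory limit value is just an example):
ceph tell mds.* cache status
ceph config set mds mds_cache_memory_limit 17179869184   # e.g. a 16 GiB cache target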
On Tue, Apr 23, 2024 at 16:36, Lars Köppel wrote:
>
> Hi Erich,
>
> great that you recovered from this.
> It sounds like you had the same problem I had a few months ago.
> mds
I want to achieve the following:
- Create a user
- Create 2 subusers
- Create 2 buckets
- Apply a policy for each bucket
- A subuser should only have access to its own bucket
Problem:
Getting a 403 AccessDenied with subuser credentials when uploading
files.
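For reference, a minimal per-bucket policy along these lines might look as follows; bucket and user names are placeholders, and whether the principal can name the subuser directly or must be the parent user is an assumption worth verifying:
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": ["arn:aws:iam:::user/myuser"]},
    "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject"],
    "Resource": ["arn:aws:s3:::bucket-a", "arn:aws:s3:::bucket-a/*"]
  }]
}
EOF
s3cmd setpolicy policy.json s3://bucket-a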
I did the following:
I'm not entirely sure if I ever tried it with the rbd-mirror user
instead of the admin user, but I see the same error message on 17.2.7. I
assume that it's not expected; I think a tracker issue makes sense.
Thanks,
Eugen
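For comparison, the query being discussed looks roughly like this with each user (the rbd-mirror client name is an assumption; use whatever your deployment created):
rbd --id admin mirror pool status mypool --verbose
rbd --id rbd-mirror.peer mirror pool status mypool --verbose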
Zitat von Stefan Kooman :
Hi,
We are testing rbd-mirroring. There seems
Hello,
My turn ;-)
Ceph is strongly consistent. Either you read/write objects/blocks/files with
assured strong consistency OR you don't. Worst thing you can expect from Ceph,
as long as it's been properly designed, configured and operated, is a temporary
loss of access to the data.
There
Hi Erich,
Great that you recovered from this.
It sounds like you had the same problem I had a few months ago.
mds crashes after up:replay state - ceph-users - lists.ceph.io
> I have removed dual-stack-mode-related information from the documentation
> on the assumption that dual-stack mode was planned but never fully
> implemented.
>
> See https://tracker.ceph.com/issues/65631.
>
> See https://github.com/ceph/ceph/pull/57051.
>
> Hat-tip to Dan van der Ster, who
I have removed dual-stack-mode-related information from the documentation on
the assumption that dual-stack mode was planned but never fully implemented.
See https://tracker.ceph.com/issues/65631.
See https://github.com/ceph/ceph/pull/57051.
Hat-tip to Dan van der Ster, who bumped this thread