Thank you, Igor. I was just reading the detailed list of changes for
16.2.14, as I suspected that we might not be able to go back to the
previous minor release :-) Thanks again for the suggestions, we'll consider
our options.
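For what it's worth, this is roughly what we would have tried, assuming a
cephadm-managed cluster (the target version below is illustrative):

  # Check which release each daemon type is currently running
  ceph versions

  # A cephadm "downgrade" is just an upgrade to an older release image;
  # illustrative only - as Igor notes below, this path is neither assumed
  # nor tested by the dev team
  ceph orch upgrade start --ceph-version 16.2.13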
/Z
On Fri, 20 Oct 2023 at 16:08, Igor Fedotov wrote:
Zakhar,
my general concern about downgrading to previous versions is that this
procedure is generally neither assumed nor tested by the dev team,
although it is possible most of the time. In this specific case, however,
it is not doable due to (at least) https://github.com/ceph/ceph/pull/52212, which
On Fri, Oct 20, 2023, 8:51 AM Zakhar Kirpichenko wrote:
>
> We would consider upgrading, but unfortunately our OpenStack Wallaby is
> holding us back, as its Cinder doesn't support Ceph 17.x, so we're stuck
> with having to find a solution for Ceph 16.x.
>
Wallaby is also quite old at this point.
Thanks, Tyler. I appreciate what you're saying, though I can't fully agree:
16.2.13 didn't have crashing OSDs, so the crashes in 16.2.14 seem like a
regression - please correct me if I'm wrong. If it is indeed a regression,
then I'm not sure that suggesting an upgrade is the right thing to do in
this case.
On Fri, Oct 20, 2023, 8:11 AM Zakhar Kirpichenko wrote:
> Thank you, Igor.
>
> It is somewhat disappointing that fixing this bug in Pacific has such a low
> priority, considering its impact on existing clusters.
>
Unfortunately, the hard truth here is that Pacific (stable) was released
over 30 months ago.
Thank you, Igor.
It is somewhat disappointing that fixing this bug in Pacific has such a low
priority, considering its impact on existing clusters.
The document attached to the PR explicitly says of
`level_compaction_dynamic_level_bytes` that "enabling it on an existing DB
requires special care".
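For context, this is where the option lives on our side; the commands are
standard, but the exact options string will differ per cluster:

  # Show the options string BlueStore passes to RocksDB
  ceph config get osd bluestore_rocksdb_options

  # Enabling the feature would mean appending to that string, e.g.
  #   ...,level_compaction_dynamic_level_bytes=true
  # shown for illustration only - per the attached document, doing this
  # on an existing DB requires special care.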
Hi Zakhar,
We definitely expect one more (and apparently the last) Pacific minor
release. There is no specific date yet, though - the plan is to release
Quincy and Reef minor releases prior to it, hopefully before
Christmas/New Year.
Meanwhile you might want to work around
Igor, I noticed that there's no roadmap for the next 16.2.x release. May I
ask what time frame we are looking at with regard to a possible fix?
We're experiencing several OSD crashes per day caused by this issue.
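For reference, we count them with the built-in crash module (the crash ID
below is a placeholder):

  # List recorded crashes and inspect a specific one
  ceph crash ls
  ceph crash info <crash-id>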
/Z
On Mon, 16 Oct 2023 at 14:19, Igor Fedotov wrote:
That's true.
On 16/10/2023 14:13, Zakhar Kirpichenko wrote:
Many thanks, Igor. I found previously submitted bug reports and
subscribed to them. My understanding is that the issue is going to be
fixed in the next Pacific minor release.
/Z
On Mon, 16 Oct 2023 at 14:03, Igor Fedotov wrote:
Hi Zakhar,
please see my reply to the post on the similar issue at:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/YNJ35HXN4HXF4XWB6IOZ2RKXX7EQCEIY/
Thanks,
Igor
On 16/10/2023 09:26, Zakhar Kirpichenko wrote:
Hi,
After upgrading to Ceph 16.2.14 we had several OSD crashes in the
bstore_kv_sync thread.
Unfortunately, the OSD log from the earlier crash is not available. I have
extracted the OSD log, including the recent events, from the latest crash:
https://www.dropbox.com/scl/fi/1ne8h85iuc5vx78qm1t93/20231016_osd6.zip?rlkey=fxyn242q7c69ec5lkv29csx13=0
I hope this helps to identify the cause of the crash.
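For anyone who wants to do the same, the log was pulled roughly like this
(daemon name and time window are ours; adjust to your deployment):

  # cephadm deployment: dump the daemon's journald log
  cephadm logs --name osd.6 > 20231016_osd6.log

  # package-based deployment: query journald directly
  journalctl -u ceph-osd@6 --since "2023-10-16" > 20231016_osd6.log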
Not sure how it managed to screw up the formatting; here's the OSD
configuration in a more readable form: https://pastebin.com/mrC6UdzN
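In case it's useful, the same information can be dumped with either of:

  # via the mgr, from any node with an admin keyring
  ceph config show osd.6

  # via the admin socket, on the host running the OSD
  ceph daemon osd.6 config show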
/Z