build/ceph-14.2.5/src/common/ceph_time.h: 485: FAILED ceph_assert(z >= signedspan::zero())
And another one was too big to paste here ;-)
I did a `ceph crash archive-all` and now ceph is OK again :-)
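For reference, this is roughly the sequence I used (a sketch from memory; the crash id is a placeholder):

  ceph crash ls                # list the crash reports the mgr has collected
  ceph crash info <crash-id>   # inspect a single report (id taken from the ls output)
  ceph crash archive-all       # acknowledge them all so the health warning clears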
Cheers
/Simon
> ---- On Fri, 10 Jan 2020 17:37:47 +0800 Simon Oosthoek
Hi,
last week I upgraded our ceph to 14.2.5 (from 14.2.4) and either during
the procedure or shortly after that, some osds crashed. I re-initialised
them, thinking that would be enough to fix everything.
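(For the archive: "re-initialised" was roughly the following sort of sequence; osd.12 is just an example id, not one of ours:)

  ceph osd out osd.12                        # stop new data going to the osd
  systemctl stop ceph-osd@12                 # on the osd host
  ceph osd purge 12 --yes-i-really-mean-it   # remove it from the crush/osd maps
  ceph-volume lvm zap --osd-id 12 --destroy  # wipe the old LVs
  ceph-volume lvm create --data /dev/sdX     # re-create it (device name is an example)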
I looked a bit further and I do see a lot of lines like this (which are
worrying I
I finally took the time to report the bug:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1851470
On 24/10/2019 16:23, Christopher Wieringa wrote:
> Hello all,
>
>
>
> I’ve been using the Ceph kernel modules in Ubuntu to load a CephFS
> filesystem quite successfully for several months. Yesterday, I went
> through a round of updates on my Ubuntu 18.04 machines, which loaded
>
On 26-08-19 13:11, Wido den Hollander wrote:
The reweight might actually cause even more confusion for the balancer.
The balancer uses upmap mode and that re-allocates PGs to different OSDs
if needed.
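Roughly, you can see what it has decided with something like this (a sketch; the actual entries will obviously differ per cluster):

  ceph balancer status            # shows the mode and whether it is active
  ceph osd dump | grep pg_upmap   # the pg_upmap_items entries the balancer injected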
Looking at the output sent earlier I have some replies. See below.
Looking at this
-Original Message-
From: ceph-users On behalf of Simon Oosthoek
Sent: Monday, 26 August 2019 11:52
To: Dan van der Ster
CC: ceph-users
Subject: Re: [ceph-users] cephfs full, 2/3 Raw capacity used
On 26-08-19 11:37, Dan van der Ster wrote:
Thanks. The version and balancer config look good
done.
Thanks,
/simon
-- dan
On Mon, Aug 26, 2019 at 11:35 AM Simon Oosthoek
wrote:
On 26-08-19 11:16, Dan van der Ster wrote:
Hi,
Which version of ceph are you using? Which balancer mode?
Nautilus (14.2.2), balancer is in upmap mode.
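(Checked with roughly the following; output trimmed here:)

  ceph versions       # all daemons should report 14.2.2
  ceph balancer eval  # prints the current cluster score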
The balancer score isn't a percent-error
native plugins, but my python skills are not
quite up to the task, at least, not yet ;-)
Cheers
/Simon
Cheers, Dan
On Mon, Aug 26, 2019 at 11:09 AM Simon Oosthoek
wrote:
Hi all,
we're building up our experience with our ceph cluster before we take it
into production. I've now tried to fill up the cluster with cephfs,
which we plan to use for about 95% of all data on the cluster.
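To keep an eye on where the space goes I've mainly been looking at something like:

  ceph df detail     # per-pool usage and MAX AVAIL
  ceph osd df tree   # per-OSD utilisation, to spot the fullest OSDs

(As far as I understand, MAX AVAIL for a pool is derived from the fullest OSDs and the replication factor, so an uneven distribution makes pools look full well before the raw capacity is used.)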
The cephfs pools are full when the cluster reports 67% raw capacity
used. There
On 14/08/2019 10:44, Wido den Hollander wrote:
>
>
> On 8/14/19 9:48 AM, Simon Oosthoek wrote:
>> Is it a good idea to give the above commands or other commands to speed
>> up the backfilling? (e.g. like increasing "osd max backfills")
>>
>
> Yes, a
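For anyone finding this in the archive later: the settings in question can be bumped along these lines (a sketch; the values are only examples, not a recommendation):

  ceph config set osd osd_max_backfills 4
  ceph config set osd osd_recovery_max_active 4

or, without persisting anything, injected into the running daemons:

  ceph tell osd.* injectargs '--osd-max-backfills 4 --osd-recovery-max-active 4'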
Hi all,
Yesterday I marked out all the osds on one node in our new cluster to
reconfigure them with WAL/DB on their NVMe devices, but it is taking
ages to rebalance. The whole cluster (and thus the osds) is only ~1%
full, therefore the full ratio is nowhere in sight.
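(The per-OSD re-creation with the DB/WAL on the NVMe device boils down to roughly this; the device names are made up for the example:)

  ceph-volume lvm zap /dev/sdc --destroy
  ceph-volume lvm create --data /dev/sdc --block.db /dev/nvme0n1p1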
We have 14 osd nodes with 12