On 23/09/2020 04:09, Alexander E. Patrakov wrote:
Sometimes this doesn't help. For data recovery purposes, the most
helpful step if you get the "bluefs enospc" error is to add a separate
db device, like this:
systemctl disable --now ceph-osd@${OSDID}
truncate -s 32G
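The two commands above can be sketched out as a fuller recovery procedure. This is a hedged sketch, not a verified recipe: `$OSDID`, the OSD data directory, and the file path given to `truncate` are all illustrative assumptions (the `truncate` target was cut off in the original message).

```shell
# Hypothetical sketch: attach a file-backed 32G DB device to a stopped OSD
# so RocksDB gets room to breathe. All paths/ids below are example values.
OSDID=12                                    # assumed OSD id
OSDDIR=/var/lib/ceph/osd/ceph-${OSDID}      # default OSD data directory

systemctl disable --now ceph-osd@${OSDID}   # stop the OSD first
truncate -s 32G /var/tmp/osd-${OSDID}-db    # sparse file to serve as the new DB device
ceph-bluestore-tool bluefs-bdev-new-db \
    --path "${OSDDIR}" \
    --dev-target /var/tmp/osd-${OSDID}-db   # add the new DB device to BlueFS
```

Once the OSD is healthy again, the DB should be moved to a proper device rather than a loopback file.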
You can also expand the OSD: ceph-bluestore-tool has an option for expanding
the OSD. I'm not 100% sure whether that would solve the RocksDB out-of-space
issue, but I think it will. If not, you can move RocksDB to a separate block device.
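As a hedged illustration of the expansion route: the option being referred to is `bluefs-bdev-expand`, which makes BlueFS claim space after the underlying device has been grown. The LV name and sizes here are assumptions, not from the original message.

```shell
# Hypothetical sketch: grow the backing device, then let BlueFS expand into it.
OSDID=12                                        # assumed OSD id
systemctl stop ceph-osd@${OSDID}
lvextend -L +10G /dev/ceph-vg/osd-${OSDID}      # example: extend the backing LV
ceph-bluestore-tool bluefs-bdev-expand \
    --path /var/lib/ceph/osd/ceph-${OSDID}      # expand BlueFS into the new space
systemctl start ceph-osd@${OSDID}
```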
September 22, 2020 7:31 PM, "George Shuklin" wrote:
>
On Wed, Sep 23, 2020 at 3:03 AM Ivan Kurnosov wrote:
>
> Hi,
>
> this morning I woke up to a degraded test ceph cluster (managed by rook,
> but it does not really change anything for the question I'm about to ask).
>
> After checking the logs I found that bluestore on one of the OSDs ran out of space.
>
As far as I know, bluestore doesn't like very small sizes. Normally the OSD
should stop doing funny things at the full mark, but if the device is too small
that may come too late and bluefs runs out of space.
Two things:
1. Don't use too-small OSDs.
2. Have a spare area on the drive. I usually reserve 1% for
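The spare-area idea in point 2 could look roughly like this. A hedged sketch only: the VG/LV names and the 1% figure are taken as assumptions from the advice above, not from any verified deployment.

```shell
# Hypothetical sketch: leave ~1% of the drive unallocated when creating the
# OSD's logical volume, so there is emergency headroom to grow into later.
vgcreate ceph-vg /dev/sdb
lvcreate -l 99%VG -n osd-block ceph-vg    # allocate 99%, keep 1% of the VG spare

# Later, if bluefs runs out of space, reclaim the spare area:
#   lvextend -l +100%FREE /dev/ceph-vg/osd-block
#   ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-$ID
```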