I have 2 OSDs failing to start due to this [1] segfault. What is happening
matches what Sage said about this [2] bug. The OSDs are on NVMe disks and
rocksdb is compacting omaps. I attempted setting `bluestore_bluefs_min_free
= 10737418240` and then starting the OSDs, but they both segfaulted with the
Hi Jakub,
for the crashing OSDs could you please set:
debug_bluestore=10
bluestore_bluefs_balance_failure_dump_interval=1
and collect more logs.
This will hopefully provide more insight into why additional space isn't
being allocated for bluefs.
Thanks,
Igor
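
For reference, one way to apply the settings Igor suggests is in the OSD
section of ceph.conf before restarting the daemon. The OSD id below is a
placeholder; substitute the id of the crashing OSD. (Since the OSDs crash
at startup, setting these in the config file rather than via runtime
injection ensures they take effect from the first log line.)

```ini
# ceph.conf fragment -- "osd.12" is a hypothetical id, use your crashing OSD's id
[osd.12]
    debug bluestore = 10
    bluestore bluefs balance failure dump interval = 1
```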
On 8/14/2018 12:41 PM, Jakub wrote:
Hello All!
I am running a Mimic full-BlueStore cluster with a pure RGW workload. We use
the AWS i3 instance family for OSD machines - each instance has 1 NVMe disk
which is split into 4 partitions, and each of those partitions is devoted to
a bluestore block device. We use 1 device per partition - so