> On Wed, 3 Jul 2019 at 20:51, Austin Workman wrote:
>
>>
>> But a very strange number shows up in the active sections of the PGs:
>> roughly 2147483648. This seems very odd,
>> and maybe the value got lodged somewhere it d
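(Aside, as a quick sanity check of my own and not something from the thread: 2147483648 is exactly 2^31, which is the classic signature of a signed 32-bit counter overflowing or being reinterpreted as unsigned.)

```python
# Sketch (my own check): 2147483648 == 2**31, i.e. the bit pattern of
# the most negative signed 32-bit int read back as an unsigned value.
import struct

val = 2147483648
assert val == 2**31

# Reinterpret the same 4 bytes as a signed 32-bit integer:
signed = struct.unpack('<i', struct.pack('<I', val))[0]
print(signed)  # -2147483648
```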
So several events unfolded that may have led to this situation. Some of
them, in hindsight, were probably not the smartest decisions around adjusting
the EC pool and restarting the OSDs several times during these migrations.
1. Added a new 6th OSD with ceph-ansible
2. Hung during restart
2961a7/osd-semi-healthy
After all of this, I'm going to make a new CephFS filesystem with a new
metadata/data pool with the newer EC settings, copy all of the data over
into fresh PGs, and might consider moving to k=4,m=2 instead ;)
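For anyone weighing that k=4,m=2 choice: the raw-to-usable storage ratio of an erasure-coded pool is (k+m)/k, and the pool tolerates the loss of any m chunks. A quick sketch (my own numbers, not from the thread):

```python
# Hedged sketch: overhead and failure tolerance of an EC profile are
# determined by k (data chunks) and m (coding chunks).
def ec_overhead(k: int, m: int) -> float:
    """Raw bytes stored per usable byte for a k+m erasure-coded pool."""
    return (k + m) / k

# k=4,m=2: 1.5x raw-to-usable, survives loss of any 2 chunks.
print(ec_overhead(4, 2))  # 1.5
# Compare 3-way replication: 3.0x raw-to-usable for the same tolerance.
```

Note that k+m=6 chunks also means the pool wants at least six failure domains (by default, one chunk per domain), which lines up with having a sixth OSD in the mix.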
On Wed, Jul 3, 2019 at 2:28 PM Austin Workman wrote:
>