Re: [ceph-users] help! pg inactive and slow requests after filestore to bluestore migration, version 12.2.12

2019-12-16 Thread Stefan Kooman
Quoting Jelle de Jong (jelledej...@powercraft.nl):
> It took three days to recover and during this time clients were not responsive.
> How can I migrate to bluestore without inactive pgs or slow requests? I have several more filestore clusters and I would like to know how to migrate
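The usual approach here is to throttle backfill and recovery so client I/O keeps priority while each OSD is rebuilt. A minimal sketch for a Luminous (12.2.x) cluster follows; the exact values are illustrative assumptions, not tested recommendations:

    # Limit concurrent backfill/recovery work per OSD at runtime
    # (assumed values; tune for your hardware):
    ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1'

    # Insert a small sleep between recovery ops on HDD-backed OSDs:
    ceph tell osd.* injectargs '--osd_recovery_sleep_hdd 0.1'

    # Optionally hold off data movement while an OSD is being recreated:
    ceph osd set norebalance

    # ...recreate the OSD as bluestore, then let backfill proceed:
    ceph osd unset norebalance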

Re: [ceph-users] help! pg inactive and slow requests after filestore to bluestore migration, version 12.2.12

2019-12-12 Thread Bryan Stillwell
Jelle, Try putting just the WAL on the Optane NVMe. I'm guessing your DB is too big to fit within 5GB. We used a 5GB journal on our nodes as well, but when we switched to BlueStore (using ceph-volume lvm batch) it created 37GiB logical volumes for our DBs (a 200GB SSD split across 5 OSDs, or a 400GB SSD across 10).
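A WAL-only layout can be requested explicitly with ceph-volume; a rough sketch, where the device names are assumptions for this example:

    # Put only the WAL on the small Optane partition; the DB then stays
    # on the data disk instead of overflowing a 5GB volume:
    ceph-volume lvm create --bluestore --data /dev/sdb --block.wal /dev/nvme0n1p1

    # Check whether an existing OSD's DB is spilling onto the slow device
    # (non-zero slow_used_bytes suggests the DB outgrew its volume):
    ceph daemon osd.0 perf dump bluefs | grep -E 'db_(total|used)_bytes|slow_used_bytes'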

[ceph-users] help! pg inactive and slow requests after filestore to bluestore migration, version 12.2.12

2019-12-12 Thread Jelle de Jong
Hello everybody, I have a three-node Ceph cluster made of E3-1220v3 CPUs, 24GB RAM, 6 HDD OSDs with a 32GB Intel Optane NVMe journal, and 10Gb networking. I wanted to move to bluestore due to the dropping of filestore support; our cluster was working fine with filestore and we could take complete nodes out
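For reference, the upstream one-OSD-at-a-time replacement procedure looks roughly like the sketch below (the OSD id and device are assumptions; doing a single OSD at a time limits how much data is in flight at once):

    ID=0
    DEVICE=/dev/sdb   # assumed data disk backing osd.0

    # Drain the OSD and wait until its data is safely elsewhere:
    ceph osd out $ID
    while ! ceph osd safe-to-destroy osd.$ID; do sleep 60; done

    # Destroy the filestore OSD and recreate it as bluestore under the same id:
    systemctl stop ceph-osd@$ID
    ceph osd destroy $ID --yes-i-really-mean-it
    ceph-volume lvm zap $DEVICE
    ceph-volume lvm create --bluestore --data $DEVICE --osd-id $ID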

[ceph-users] help! pg inactive and slow requests after filestore to bluestore migration, version 12.2.12

2019-12-06 Thread Jelle de Jong
Hello everybody, [fix confusing typo] I have a three-node Ceph cluster made of E3-1220v3 CPUs, 24GB RAM, 6 HDD OSDs with a 32GB Intel Optane NVMe journal, and 10Gb networking. I wanted to move to bluestore due to the dropping of filestore support; our cluster was working fine with filestore and we could

[ceph-users] help! pg inactive and slow requests after filestore to bluestore migration, version 12.2.12

2019-12-06 Thread Jelle de Jong
Hello everybody, I have a three-node Ceph cluster made of E3-1220v3 CPUs, 24GB RAM, 6 HDD OSDs with a 32GB Intel Optane NVMe journal, and 10Gb networking. I wanted to move to bluestore due to the dropping of file store support; our cluster was working fine with bluestore and we could take complete nodes