Hi Mark,

Firstly, many thanks for looking into this.

Jayaram appears to have a similar config to me:

v12.0.3, EC 4+1 bluestore
SciLin 7.3 - 3.10.0-514.21.1.el7.x86_64

I have 5 EC nodes (10 x 8TB IronWolf drives each) plus 2 nodes with
replicated NVMe (CephFS hot tier).

I now think the HighPoint R750 Rocket cards are not at fault; I swapped
the R750 in one node for LSI cards, but still saw OSD errors occurring
on that node.

The recovered OSD logs are huge, typically exceeding 500MB. I've trimmed
one down to just over 500K and pasted it here...

http://pasted.co/f0c49591

Hopefully this has some useful info.


One other thing, probably a red herring: when trying to run
"ceph pg repair" or "ceph pg deep-scrub" I get this...
"Error EACCES: access denied"
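
(For what it's worth, one thing I plan to check: in Luminous some pg
commands were moved behind the mgr, so EACCES can simply mean the client
key lacks mgr caps. A minimal sketch of how I'd inspect and, if needed,
add them, assuming the default client.admin key is in use:

```shell
# Show the caps currently granted to client.admin; look for an "mgr" line.
ceph auth get client.admin

# If no mgr cap is present, add one alongside the existing caps.
# NOTE: "ceph auth caps" replaces ALL caps for the entity, so restate
# the mon/mds/osd caps exactly as reported by the command above.
ceph auth caps client.admin \
    mon 'allow *' mds 'allow *' osd 'allow *' mgr 'allow *'

# Then retry the repair / deep-scrub, e.g. for a hypothetical pg 2.5:
ceph pg repair 2.5
```

I haven't confirmed this is the cause here, but it's a cheap thing to
rule out.)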

thanks again for your help,

Jake
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
