Hi,
It seems that one of your OSD servers is dead. With the default Ceph
settings (size=3, min_size=2), there should be three OSD nodes to
distribute each object's replicas. The important point is that you only
have one OSD node alive, so the surviving replica count is 1 (< min_size),
and Ceph blocks I/O to those PGs until min_size replicas are available again.
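If that diagnosis is right, you can confirm it from the pool settings. A minimal sketch, run on a mon or admin node; the pool name `rbd` is an assumption, substitute your own pool:

```shell
# Check replication settings on a pool (pool name "rbd" is an assumption).
ceph osd pool get rbd size       # replica count, default 3
ceph osd pool get rbd min_size   # live replicas required to serve I/O, default 2
# Emergency-only escape hatch: serve I/O from a single surviving replica.
# This risks data loss if that replica fails too; revert once recovered.
# ceph osd pool set rbd min_size 1
```

The real fix is to bring the dead OSD hosts back, not to lower min_size.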
Hi David,
Apologies for the late response.
NodeB is mon+client, nodeC is client:
Ceph health detail:
HEALTH_ERR 819 pgs are stuck inactive for more than 300 seconds; 883 pgs
degraded; 64 pgs stale; 819 pgs stuck inactive; 1064 pgs stuck unclean; 883
pgs undersized; 22 requests are blocked
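To see exactly which PGs are stuck and in which state, the stuck-PG dump may help; a sketch to run on a mon or admin node (the PG id in the last line is a placeholder):

```shell
# List PGs stuck in each problem state reported by "ceph health".
ceph pg dump_stuck inactive
ceph pg dump_stuck unclean
ceph pg dump_stuck stale
# Per-PG detail for one problem PG (PG id "1.2f3" is a placeholder, use a real id from the dump):
# ceph pg 1.2f3 query
```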
Is this a test cluster that has never been healthy, or a working cluster
that has just gone unhealthy? Have you changed anything? Are all hosts,
drives, and network links working? More detail, please. Any/all of the
following would help:
ceph health detail
ceph osd stat
ceph osd tree
Your ceph.conf
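The requested information can be gathered in one pass; a sketch assuming the default config path `/etc/ceph/ceph.conf`:

```shell
# Collect the diagnostics requested above into one file to post to the list.
{
  ceph health detail
  ceph osd stat
  ceph osd tree
  ceph -s
  cat /etc/ceph/ceph.conf
} > ceph-diagnostics.txt 2>&1
```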
Hi All,
Please assist in fixing the error. The cluster layout is:
1 x admin
2 x admin (hosting admin as well)
4 OSDs on each node
cluster a04e9846-6c54-48ee-b26f-d6949d8bacb4
health HEALTH_ERR
819 pgs are stuck inactive for more than 300 seconds
883 pgs degraded
64 pgs stale