"name": "osd.4",
"type": "osd",
"type_id": 0,
"crush_weight": 4.656998,
"depth": 2
}
]
}
]
}
]
and 'ceph osd crush
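The JSON above looks like the tail of a CRUSH tree dump. A minimal sketch of pulling the OSD leaves back out of such output (assuming `jq` is available; the recursive filter tolerates both the flat node list from `ceph osd tree -f json` and nested `ceph osd crush tree` output):

```shell
# List every OSD leaf with its CRUSH weight, however deeply it is nested
ceph osd crush tree -f json | \
  jq '[.. | objects | select(.type? == "osd") | {name, crush_weight}]'

# Sum the leaf weights to cross-check the bucket totals
ceph osd crush tree -f json | \
  jq '[.. | objects | select(.type? == "osd") | .crush_weight] | add'
```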
Hi,
I have a problem with a cluster being stuck in recovery after an osd
failure. At first recovery was progressing quite well, but now it just
sits there without making any progress. It currently looks like this:
health HEALTH_ERR
36 pgs are stuck inactive for more than 300 seconds
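When PGs sit inactive with recovery making no progress, the usual first diagnostics are the stock ceph CLI queries below (the PG id is a placeholder; take a real one from `ceph health detail`):

```shell
# Which PGs are unhealthy, and why
ceph health detail

# PGs stuck in a given state (inactive, unclean, stale, ...)
ceph pg dump_stuck inactive

# Full peering state of one affected PG, including the OSDs it is waiting on
ceph pg 1.2f query
```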
Hi List,
I have an issue with an rbd device. I created a file system on an rbd
device, and when I copy files to it I get errors about failed writes to
sectors on the rbd block device.
I see the following in the log file:
[88931.224311] rbd: rbd0: write 8
start with a really small cluster and then grow by adding
osds.
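Growing a cluster one OSD at a time ends with registering each new daemon in the CRUSH map. A sketch under stated assumptions (the osd id, weight, and host name below are placeholders, not values from this cluster):

```shell
# Register a newly prepared OSD in the CRUSH hierarchy; the weight is
# conventionally the device size in TiB, and the host bucket must exist
ceph osd crush add osd.5 4.65699 host=node1

# Confirm the new OSD landed where expected
ceph osd tree
```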
thanks & best regards
Philipp
> Cheers, I
>
> 2015-11-06 22:05 GMT+01:00 Philipp Schwaha <phil...@schwaha.net
> <mailto:phil...@schwaha.net>>:
>
> On 11/06/2015 09:25 PM, Gregory Farnum
Hi,
I have an issue with my (small) ceph cluster after an osd failed.
ceph -s reports the following:
cluster 2752438a-a33e-4df4-b9ec-beae32d00aad
health HEALTH_WARN
31 pgs down
31 pgs peering
31 pgs stuck inactive
31 pgs stuck unclean
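PGs that are both down and peering are usually waiting on an OSD that is offline; the PG's own query output names it. A sketch of chasing that down (the PG id and OSD id are placeholders):

```shell
# Pick an affected PG id from the health report
ceph health detail

# Its query output lists the peers it is blocked on; look for
# "blocked_by" / "down_osds_we_would_probe" in the JSON
ceph pg 0.12 query

# Only if the blocking OSD is permanently gone: declare it lost so
# peering can proceed (any data that existed only on it is abandoned)
ceph osd lost 2 --yes-i-really-mean-it
```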
) mount initial op seq is 0; something
is wrong
2015-11-06 21:42:06.385027 7f44755a77c0 -1 osd.2 0 OSD:init: unable to
mount object store
2015-11-06 21:42:06.385076 7f44755a77c0 -1 ** ERROR: osd init failed:
(22) Invalid argument
> On Friday, November 6, 2015, Philipp Schwaha <phil...@schwa