Jewel is almost EOL.
It looks similar to several related issues, one of which is
http://tracker.ceph.com/issues/21826
On Mon, Aug 13, 2018 at 9:19 PM, Alexandru Cucu wrote:
> Hi,
>
> Already tried zapping the disk. Unfortunately the same segfaults keep
> me from adding the OSD back to the cluster.
Hi,
Already tried zapping the disk. Unfortunately the same segfaults keep
me from adding the OSD back to the cluster.
I wanted to open an issue on tracker.ceph.com but I can't find the
"new issue" button.
---
Alex Cucu
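For context, the "zapping the disk" step mentioned above would, on a Jewel-era (ceph-disk based) deployment, typically look like the sketch below. The device path is a placeholder, and this is a general illustration of the procedure, not the exact commands used in this thread:

```shell
# DANGER: zap destroys the partition table and all data on the device.
# /dev/sdX is a placeholder for the OSD's data disk.
ceph-disk zap /dev/sdX

# Re-create the OSD on the wiped device (filestore was the Jewel default).
ceph-disk prepare /dev/sdX
ceph-disk activate /dev/sdX1
```

On Jewel, `ceph-disk prepare` partitions the device and `activate` registers and starts the new OSD daemon; both require a live cluster, which is why the segfault on startup blocks the whole cycle.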
On Mon, Aug 13, 2018 at 8:24 AM wrote:
On 3 August 2018 12:03:17 MESZ, Alexandru Cucu wrote:
>Hello,
>
Hello Alex,
>Another OSD started randomly crashing with segmentation fault. Haven't
>managed to add the last 3 OSDs back to the cluster as the daemons keep
>crashing.
>
An idea could be to remove the osds completely from the C
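The removal procedure being suggested above would, on Jewel, go roughly as follows. `osd.N` is a placeholder for the affected OSD id; this is a sketch of the standard manual removal steps, not commands taken from this thread:

```shell
# Take the OSD out of data placement and stop its daemon.
ceph osd out osd.N
systemctl stop ceph-osd@N

# Remove it from the CRUSH map, the auth database, and the OSD map,
# so it can later be re-created from scratch.
ceph osd crush remove osd.N
ceph auth del osd.N
ceph osd rm osd.N
```

After this, the OSD id is fully released and the disk can be zapped and prepared as a brand-new OSD.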
Hello,
Another OSD started randomly crashing with segmentation fault. Haven't
managed to add the last 3 OSDs back to the cluster as the daemons keep
crashing.
---
-2> 2018-08-03 12:12:52.670076 7f12b6b15700 4 rocksdb:
EVENT_LOG_v1 {"time_micros": 1533287572670073, "job": 3, "event":
"table_
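The rocksdb line in the log excerpt above is a structured `EVENT_LOG_v1` record, so its JSON payload can be pulled out for inspection with a few lines of Python. Note the original line is truncated at `"table_`; the completed `"event"` value below is a hypothetical example, not recovered from the log:

```python
import json
import re

def parse_event_log(line):
    """Extract the EVENT_LOG_v1 JSON payload from a rocksdb log line."""
    m = re.search(r'EVENT_LOG_v1\s+(\{.*\})', line)
    return json.loads(m.group(1)) if m else None

# Hypothetical completed line; the "event" value is assumed for illustration.
line = ('2018-08-03 12:12:52.670076 7f12b6b15700 4 rocksdb: '
        'EVENT_LOG_v1 {"time_micros": 1533287572670073, "job": 3, '
        '"event": "table_file_creation"}')
event = parse_event_log(line)
```

This makes it easy to grep an OSD log for the last rocksdb events before a crash and compare their timestamps against the segfault.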
Hello Ceph users,
We have updated our cluster from 10.2.7 to 10.2.11. A few hours after
the update, 1 OSD crashed.
When trying to add the OSD back to the cluster, 2 other OSDs started
crashing with segmentation faults. We had to mark all 3 OSDs as down as we
had stuck PGs and blocked operations and th
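When OSDs flap like this, a common first step (a general sketch, not advice given in this thread) is to stop the resulting rebalancing churn and inspect the stuck PGs:

```shell
# Prevent the cluster from marking the flapping OSDs out and rebalancing.
ceph osd set noout

# Check overall health, then list which PGs are stuck and why.
ceph -s
ceph pg dump_stuck

# Once the OSDs are stable again, clear the flag.
ceph osd unset noout
```

The `noout` flag keeps CRUSH from shuffling data while the crashing daemons are being debugged, which avoids compounding the blocked operations.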