Dear Community,

Here at ZIMK at the University of Trier we operate a Ceph Luminous cluster
as a filer for an HPC environment via CephFS (BlueStore backend). During
setup last year we made the mistake of not configuring the RAID controller
as JBOD, so initially the 3 nodes housed only 1 OSD each. We are currently
in the process of remediating this. After a loss of metadata caused by
resetting the journal (journal entries were not being flushed fast enough),
we managed to bring the cluster back up and started adding 2 additional
nodes. Their hardware is slightly older than that of the first 3 nodes. We
configured the drives on these nodes individually (a single-disk RAID-0 per
drive, since the controller has no pass-through mode), and after some
rebalancing and re-weighting, the first of the original nodes is now empty
and ready to be re-installed.
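
For context, the journal reset and the node drain were done with the
standard tools, roughly as follows (a sketch rather than our exact
invocations; osd.0 stands in for the actual OSD IDs):

    # Reset the MDS journal after flushing stalled (this is where
    # the metadata loss happened)
    cephfs-journal-tool journal reset

    # Drain the first original node by weighting its OSDs out of CRUSH
    ceph osd crush reweight osd.0 0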


However, due to the aforementioned metadata loss, we are currently getting
warnings about metadata damage. damage ls shows that only one folder is
affected. As we don't need this folder, we'd like to delete it along with
the associated metadata and any other related information, if possible.
Taking the cluster offline for an offline data scan (cephfs-data-scan)
right now would be difficult, so any other suggestions would be
appreciated.
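
For reference, this is roughly how we inspect the damage (mds.0 is a
placeholder for our actual MDS ID):

    # List the damage table entries; only one folder shows up for us
    ceph tell mds.0 damage ls

    # Presumably the entries could then be dropped like this, but we are
    # unsure whether that is safe without a subsequent scan:
    # ceph tell mds.0 damage rm <damage_id>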


Cluster health details are available here:
https://gitlab.uni-trier.de/snippets/65 


Regards,

Christian Hennen

Project Manager Infrastructural Services
Zentrum für Informations-, Medien- und Kommunikationstechnologie (ZIMK)
Universität Trier
54286 Trier

Tel.: +49 651 201 3488
Fax: +49 651 201 3921
E-Mail: [email protected]
Web: http://zimk.uni-trier.de
