[ceph-users] Cephdeploy support

2020-10-10 Thread Amudhan P
Hi, Will future releases of Ceph still support ceph-deploy, or will cephadm be the only choice? Thanks, Amudhan
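For context, cephadm is the successor tooling introduced with Octopus; a minimal bootstrap sketch, where the IP and hostname are placeholders:

    # Bootstrap a new cluster on the first host; 10.0.0.1 is a placeholder IP.
    cephadm bootstrap --mon-ip 10.0.0.1
    # Add further hosts once cephadm's SSH key is distributed to them.
    ceph orch host add host2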

[ceph-users] Re: Is cephfs multi-volume support stable?

2020-10-10 Thread Patrick Donnelly
On Fri, Oct 9, 2020 at 11:56 PM Alexander E. Patrakov wrote:
> Hello,
>
> I found that the documentation on the Internet is inconsistent on the question whether I can safely have two instances of cephfs in my cluster. For the record, I don't use snapshots.
>
> FOSDEM 19 presentation by Sage
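For anyone wanting to experiment, a sketch of creating a second file system (pool and fs names are placeholders; in releases of this era, multiple file systems required an explicit opt-in flag):

    # Allow more than one CephFS file system in the cluster.
    ceph fs flag set enable_multiple true --yes-i-really-mean-it
    # Create dedicated pools and the second file system.
    ceph osd pool create cephfs2_metadata
    ceph osd pool create cephfs2_data
    ceph fs new cephfs2 cephfs2_metadata cephfs2_data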

[ceph-users] Q on enabling application on the pool

2020-10-10 Thread Void Star Nill
Hello, Why is it necessary to enable an application on a pool? As per the documentation, we need to enable an application before using the pool. However, in my case, I have a single pool in the cluster, used for RBD. I am able to run all RBD operations on the pool even if I don't enable the application.
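For reference, a minimal sketch of tagging a pool for RBD ('rbdpool' is a placeholder name); without the tag, recent releases raise a POOL_APP_NOT_ENABLED health warning:

    # Tag the pool so Ceph knows which application uses it.
    ceph osd pool application enable rbdpool rbd
    # Alternatively, 'rbd pool init' initializes the pool and sets the tag.
    rbd pool init rbdpool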

[ceph-users] Re: multiple OSD crash, unfound objects

2020-10-10 Thread Andreas John
Hello Mike, do your OSDs go down from time to time? I once had an issue with unrecoverable objects, because I had only n+1 (size 2) redundancy and Ceph wasn't able to decide which copy of the object was the correct one. In my case there were half-deleted snapshots in one of the copies. I used
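To inspect and, as a last resort, give up on unfound objects, a sketch along these lines (2.4 is a placeholder PG ID; 'revert' rolls back to a prior version, 'delete' forgets the object entirely):

    # Show which PGs have unfound objects and why.
    ceph health detail
    ceph pg 2.4 list_unfound
    # Last resort, loses data: declare the unfound objects lost.
    ceph pg 2.4 mark_unfound_lost revert   # or: mark_unfound_lost delete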

[ceph-users] Re: Monitor recovery

2020-10-10 Thread Martin Verges
Hello Brian, as long as you have at least one working MON, it's fairly easy to recover. Shut down all MONs, modify the MONMAP by hand so that only one of the working MONs remains, and then start that one up. After that, redeploy the other MONs to get your quorum and redundancy back. You find more
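A sketch of the monmap surgery (mon IDs and the map path are placeholders; stop the daemons first and keep a backup copy of the extracted map):

    # Extract the monmap from a surviving monitor (daemon must be stopped).
    ceph-mon -i mon-a --extract-monmap /tmp/monmap
    # Remove the dead/unwanted monitors from the map.
    monmaptool /tmp/monmap --rm mon-b --rm mon-c
    # Inject the edited map and start only this monitor.
    ceph-mon -i mon-a --inject-monmap /tmp/monmap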

[ceph-users] Re: How to clear Health Warning status?

2020-10-10 Thread Tecnología CHARNE . NET
Thanks, Anthony, for your quick response. I'll remove the disk and replace it. Javier.- On 10/10/20 at 00:17, Anthony D'Atri wrote: * Monitors now have a config option ``mon_osd_warn_num_repaired``, 10 by default. If any OSD has repaired more than this many I/O errors in stored
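If the disk cannot be replaced right away, one way to quiet the warning is to raise that threshold; a sketch, assuming an Octopus-era cluster where the option is available:

    # See which OSD(s) triggered the repair warning.
    ceph health detail
    # Raise the repair-count threshold above the affected OSD's count.
    ceph config set mon mon_osd_warn_num_repaired 100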

[ceph-users] Possible to disable check: x pool(s) have no replicas configured

2020-10-10 Thread Marc Roos
Is it possible to disable the check for 'x pool(s) have no replicas configured', so I don't get this HEALTH_WARN constantly? Or is there some other disadvantage to keeping some empty 1x-replication test pools?
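A sketch of two possible ways to silence it (the mon_warn_on_pool_no_redundancy option and the POOL_NO_REDUNDANCY health code are assumptions based on Octopus-era releases):

    # Disable the warning cluster-wide.
    ceph config set global mon_warn_on_pool_no_redundancy false
    # Or mute just this health code, optionally with a time limit.
    ceph health mute POOL_NO_REDUNDANCY 1w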

[ceph-users] Is cephfs multi-volume support stable?

2020-10-10 Thread Alexander E. Patrakov
Hello, I found that the documentation on the Internet is inconsistent on the question whether I can safely have two instances of cephfs in my cluster. For the record, I don't use snapshots. FOSDEM 19 presentation by Sage Weil: