Re: [ceph-users] active+recovering+degraded after cluster reboot

2018-12-15 Thread David C
Hi Paul

Thanks for the response. Not yet, just being a bit cautious ;) I'll go ahead and do that.

Thanks
David

On Sat, 15 Dec 2018, 23:39 Paul Emmerich wrote:
> Did you unset norecover?
>
> Paul
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
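
For reference, clearing the flags is one command each; a minimal sketch using the standard Ceph CLI, run from any node with an admin keyring:

    # clear the flags that were set before the planned shutdown
    ceph osd unset norecover
    ceph osd unset nobackfill
    ceph osd unset noout

    # confirm no flags remain set
    ceph osd dump | grep flags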

Re: [ceph-users] active+recovering+degraded after cluster reboot

2018-12-15 Thread Paul Emmerich
Did you unset norecover?

Paul

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Sun, Dec 16, 2018 at 12:22 AM David C wrote:
>
> Hi All
>
> I have what feels like a bit

Re: [ceph-users] active+recovering+degraded after cluster reboot

2018-12-15 Thread David C
Yep, that cleared it. Sorry for the noise!

On Sun, Dec 16, 2018 at 12:16 AM David C wrote:
> Hi Paul
>
> Thanks for the response. Not yet, just being a bit cautious ;) I'll go
> ahead and do that.
>
> Thanks
> David
>
> On Sat, 15 Dec 2018, 23:39 Paul Emmerich wrote:
>> Did you unset norecover?
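
For anyone landing on this thread later, a quick way to confirm the PGs have drained after unsetting the flags (standard Ceph CLI, nothing cluster-specific assumed):

    ceph -s              # overall status; look for HEALTH_OK
    ceph health detail   # lists any PGs that are still not active+clean
    ceph pg dump_stuck   # should return nothing once recovery completes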

[ceph-users] active+recovering+degraded after cluster reboot

2018-12-15 Thread David C
Hi All

I have what feels like a bit of a rookie question.

I shut down a Luminous 12.2.1 cluster with noout, nobackfill, norecover set. Before shutting down, all PGs were active+clean. I brought the cluster up, all daemons started, and all but 2 PGs are active+clean. I have 2 PGs showing:
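
For context, the flags referenced above are set cluster-wide before a planned shutdown; a minimal sketch of the usual sequence (assumes a healthy cluster and an admin keyring):

    ceph osd set noout        # prevent down OSDs from being marked out
    ceph osd set nobackfill   # suppress backfill while daemons restart
    ceph osd set norecover    # suppress recovery while daemons restart
    # ...stop daemons, reboot, start daemons...
    # then unset the same three flags once all OSDs are back up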

Re: [ceph-users] cephfs file block size: must it be so big?

2018-12-15 Thread Paul Emmerich
Bryan Henderson wrote:
> In some NFS experiments of mine, the blocksize reported by 'stat' appears to
> be controlled by the rsize and wsize mount options. Without such options, in
> the one case I tried, Linux 4.9, blocksize was 32K. Maybe it's affected by
> the server or by the filesystem the NFS
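
This is easy to check from a Linux client; a hedged example (the export path, mount point, and 32K sizes are placeholders):

    # mount with explicit transfer sizes
    mount -t nfs -o rsize=32768,wsize=32768 server:/export /mnt

    # GNU stat's %o prints st_blksize, the "optimal I/O transfer size hint"
    stat -c '%o' /mnt/somefile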

Re: [ceph-users] mirroring global id mismatch

2018-12-15 Thread Jason Dillaman
On Fri, Dec 14, 2018 at 4:27 PM Vikas Rana wrote:
>
> Hi there,
>
> We are replicating an RBD image from the Primary to the DR site using RBD
> mirroring. We were using 10.2.10.
>
> We decided to upgrade the DR site to luminous; the upgrade went fine and
> the mirroring status was also good.
> We then
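
For anyone debugging a similar global id mismatch, the id can be compared on both sites; a sketch with placeholder pool/image names:

    # per-image mirroring status, including the global id
    rbd mirror image status <pool>/<image>

    # pool-wide summary with per-image detail
    rbd mirror pool status <pool> --verbose

    # 'rbd info' also prints 'mirroring global id' when mirroring is enabled
    rbd info <pool>/<image>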