Re: [ceph-users] v12.2.8 Luminous released

2018-09-20 Thread Konstantin Shalygin
12.2.8 improves the deep scrub code to automatically repair these inconsistencies. Once the entire cluster has been upgraded and then fully deep scrubbed, and all such inconsistencies are resolved, it will be safe to disable the `osd distrust data digest = true` workaround option. Just for
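A minimal sketch of that sequence, assuming the stock ceph CLI and a bash loop over all OSDs (the loop and the grep are illustrative, not verbatim from the thread):

    # after every daemon is on 12.2.8, ask each OSD to deep scrub
    # the PGs it is primary for
    for osd in $(ceph osd ls); do
        ceph osd deep-scrub "$osd"
    done

    # wait for the scrubs to finish and confirm nothing is still inconsistent
    ceph health detail | grep -i inconsistent

    # only then drop the workaround at runtime...
    ceph tell 'osd.*' injectargs '--osd_distrust_data_digest=false'
    # ...and remove "osd distrust data digest = true" from ceph.conf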

Re: [ceph-users] v12.2.8 Luminous released

2018-09-11 Thread Dan van der Ster
This is a friendly reminder that multi-active MDS clusters must be reduced to only 1 active during upgrades [1]. In the case of v12.2.8, the CEPH_MDS_PROTOCOL version has changed, so if you try to upgrade one MDS it will get stuck in the resolve state, logging: conn(0x55e3d9671000 :-1
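A minimal sketch of dropping to a single active MDS before the upgrade, assuming a filesystem named "cephfs" (substitute your own name and ranks):

    # cap the filesystem at one active MDS
    ceph fs set cephfs max_mds 1
    # deactivate every non-zero rank (rank 1 shown; repeat for higher ranks)
    ceph mds deactivate cephfs:1
    # wait until "ceph status" shows a single active MDS, then upgrade
    ceph status

Raise max_mds back up only after every MDS has been upgraded.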

Re: [ceph-users] v12.2.8 Luminous released

2018-09-06 Thread Igor Fedotov
Hi Adrian, yes, this issue has been fixed by https://github.com/ceph/ceph/pull/22909 Thanks, Igor On 9/6/2018 8:10 AM, Adrian Saul wrote: Can I confirm if this bluestore compression assert issue is resolved in 12.2.8? https://tracker.ceph.com/issues/23540 I notice that it has a backport

Re: [ceph-users] v12.2.8 Luminous released

2018-09-06 Thread Abhishek Lekshmanan
Adrian Saul writes: > Can I confirm if this bluestore compression assert issue is resolved in > 12.2.8? > > https://tracker.ceph.com/issues/23540 The PR itself in the backport issue is in the release notes, i.e. pr#22909, which references two tracker issues. Unfortunately, the script that
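For anyone who wants to double-check that a particular backport landed in a tag, a rough sketch against a ceph.git clone (the merge commit sha is a placeholder taken from the pull request page, and the tracker id may or may not appear in the backport commit messages):

    git fetch --tags
    # does the tag contain the PR's merge commit?
    git tag --contains <merge-commit-sha> | grep -x v12.2.8
    # or search the commit messages between the two point releases
    git log v12.2.7..v12.2.8 --oneline --grep='23540'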

Re: [ceph-users] v12.2.8 Luminous released

2018-09-05 Thread Adrian Saul
Can I confirm if this bluestore compression assert issue is resolved in 12.2.8? https://tracker.ceph.com/issues/23540 I notice that it has a backport that is listed against 12.2.8 but there is no mention of that issue or backport listed in the release notes.

Re: [ceph-users] v12.2.8 Luminous released

2018-09-05 Thread David Turner
I upgraded my home cephfs/rbd cluster to 12.2.8 during an OS upgrade to Ubuntu 18.04 and ProxMox 5.1 (Stretch). Everything is running well so far. On Wed, Sep 5, 2018 at 10:21 AM Dan van der Ster wrote: > Thanks for the release! > > We've updated some test clusters (rbd, cephfs) and it looks

Re: [ceph-users] v12.2.8 Luminous released

2018-09-05 Thread Dan van der Ster
Thanks for the release! We've updated some test clusters (rbd, cephfs) and it looks good so far. -- dan On Tue, Sep 4, 2018 at 6:30 PM Abhishek Lekshmanan wrote: > > > We're glad to announce the next point release in the Luminous v12.2.X > stable release series. This release contains a range

[ceph-users] v12.2.8 Luminous released

2018-09-04 Thread Abhishek Lekshmanan
We're glad to announce the next point release in the Luminous v12.2.X stable release series. This release contains a range of bugfixes and stability improvements across all the components of ceph. For detailed release notes with links to tracker issues and pull requests, refer to the blog post at