Re: [ceph-users] MDS is Readonly

2018-05-02 Thread Yan, Zheng
try running "rados -p touch 1002fc5d22d." before mds restart On Thu, May 3, 2018 at 2:31 AM, Pavan, Krish wrote: > > > We have ceph 12.2.4 cephfs with two active MDS server and directory are > pinned to MDS servers. Yesterday MDS server crashed. Once all fuse

[ceph-users] CentOS release 7.4.1708 and selinux-policy-base >= 3.13.1-166.el7_4.9

2018-05-02 Thread ceph . novice
Hi all. We are trying to set up our first CentOS 7.4.1708 Ceph cluster, based on Luminous 12.2.5. What we get is: Error: Package: 2:ceph-selinux-12.2.5-0.el7.x86_64 (Ceph-Luminous) Requires: selinux-policy-base >= 3.13.1-166.el7_4.9 __Host infos__: root> lsb_release -d Description:
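A hedged way to chase that dependency, assuming the stock CentOS repositories are configured (the .el7_4.9 suffix marks a 7.4 update build, so it should come from the updates repo rather than base):

  # see which selinux-policy builds the enabled repos actually offer
  yum --showduplicates list available selinux-policy\*
  # pulling in current 7.4 updates normally satisfies the >= 3.13.1-166.el7_4.9 requirement
  yum update selinux-policy selinux-policy-targeted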

Re: [ceph-users] Bluestore on HDD+SSD sync write latency experiences

2018-05-02 Thread Alex Gorbachev
Hi Nick, On Tue, May 1, 2018 at 4:50 PM, Nick Fisk wrote: > Hi all, > Slowly getting round to migrating clusters to Bluestore, but I am interested in how people are handling the potential change in write latency coming from Filestore. Or maybe nobody is really seeing
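For anyone comparing before/after numbers, a quick sketch of where to read the latency Ceph itself reports (osd.12 is just an example id):

  # per-OSD commit/apply latency snapshot, in milliseconds
  ceph osd perf
  # full perf counters for one OSD via its admin socket; the bluestore
  # section holds the write/commit latency counters
  ceph daemon osd.12 perf dump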

[ceph-users] MDS is Readonly

2018-05-02 Thread Pavan, Krish
We have a ceph 12.2.4 cephfs cluster with two active MDS servers, and directories are pinned to MDS servers. Yesterday an MDS server crashed. Once all fuse clients had unmounted, we brought the MDS back online. Both MDS are active now. Once it came back, we started to see that one MDS is read-only. ... 2018-05-01
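For context, a sketch of the pinning and status commands involved here (the path and rank are examples, not taken from the original post):

  # pin a directory to MDS rank 1 via an xattr set on a mounted client
  setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/projects
  # check MDS ranks and daemon states after bringing the daemons back
  ceph fs status
  ceph mds stat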

[ceph-users] Announcing mountpoint, August 27-28, 2018

2018-05-02 Thread Amye Scavarda
Our first mountpoint is coming! Software-defined Storage (SDS) is changing the traditional way we think of storage. Decoupling software from hardware allows you to choose your hardware vendors and provides enterprises with more flexibility. Attend mountpoint on August 27 - 28, 2018 in Vancouver,

Re: [ceph-users] Proper procedure to replace DB/WAL SSD

2018-05-02 Thread Nicolas Huillard
On Sunday, 8 April 2018 at 20:40, Jens-U. Mozdzen wrote: > sorry for bringing up that old topic again, but we just faced a corresponding situation and have successfully tested two migration scenarios. Thank you very much for this update, as I needed to do exactly that, due to an
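Before swapping the SSD it helps to map out which OSDs reference it; a minimal sketch covering the two common deployment styles (nothing here is specific to the scenarios tested in the thread):

  # ceph-volume (LVM) deployments: shows data/db/wal devices per OSD
  ceph-volume lvm list
  # ceph-disk deployments: the block.db symlinks point at the DB partitions
  ls -l /var/lib/ceph/osd/ceph-*/block.db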

Re: [ceph-users] GDPR encryption at rest

2018-05-02 Thread David Turner
'At rest' refers to data on its own, not being accessed through an application. Encryption at rest is most commonly done by encrypting the block device with something like dmcrypt. It's anything that makes having the physical disk useless without being able to decrypt it. You can also
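As an illustration of that approach, a sketch of provisioning an OSD with dmcrypt at creation time (the device name is an example, and this assumes a Luminous-era ceph-volume deployment):

  # the data device (and any db/wal devices) is wrapped in dm-crypt before the OSD is built
  ceph-volume lvm create --bluestore --dmcrypt --data /dev/sdb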

Re: [ceph-users] GDPR encryption at rest

2018-05-02 Thread Alfredo Deza
On Wed, May 2, 2018 at 11:12 AM, David Turner wrote: > I've heard conflicting opinions on whether GDPR requires data to be encrypted at rest, but enough of our customers believe that it does that we're looking at addressing it in our clusters. I had a couple of questions about the

[ceph-users] GDPR encryption at rest

2018-05-02 Thread David Turner
I've heard conflicting opinions on whether GDPR requires data to be encrypted at rest, but enough of our customers believe that it does that we're looking at addressing it in our clusters. I had a couple of questions about the state of encryption in Ceph. 1) My experience with encryption in Ceph is dmcrypt,
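To see how much of an existing cluster is already covered, a rough check on an OSD host (this only assumes dm-crypt is the at-rest layer in use, as with dmcrypt-based OSD deployments):

  # dm-crypt mappings show up as "crypt" entries in the block device tree
  lsblk -o NAME,TYPE
  # list device-mapper devices using the crypt target
  dmsetup ls --target crypt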

Re: [ceph-users] Bluestore on HDD+SSD sync write latency experiences

2018-05-02 Thread Igor Fedotov
Hi Nick, On 5/1/2018 11:50 PM, Nick Fisk wrote: Hi all, Slowly getting round to migrating clusters to Bluestore, but I am interested in how people are handling the potential change in write latency coming from Filestore. Or maybe nobody is really seeing much difference? As we all know, in
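One knob worth knowing about when comparing against Filestore's journal-first behaviour; the value below is only an illustration and osd.12 is a placeholder, not advice from the thread:

  # writes smaller than this size are deferred through the (SSD) WAL first
  ceph daemon osd.12 config get bluestore_prefer_deferred_size_hdd
  # raising it in ceph.conf sends more small writes down the journal-first
  # path, at the cost of extra WAL traffic (restart the OSDs to apply)
  [osd]
  bluestore_prefer_deferred_size_hdd = 65536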

Re: [ceph-users] Ceph scrub logs: _scan_snaps no head for $object?

2018-05-02 Thread Stefan Kooman
Hi, Quoting Stefan Kooman (ste...@bit.nl): > Hi, > > We see the following in the logs after we start a scrub for some osds: > > ceph-osd.2.log:2017-12-14 06:50:47.180344 7f0f47db2700 0 > log_channel(cluster) log [DBG] : 1.2d8 scrub starts > ceph-osd.2.log:2017-12-14 06:50:47.180915