Re: [ceph-users] OSD doesnt start after reboot

2018-05-03 Thread Akshita Parekh
Steps followed while installing ceph: 1) Installing RPMs, then the steps given in http://docs.ceph.com/docs/master/start/quick-ceph-deploy/ apart from steps 2 and 3. Then ceph-deploy osd prepare osd1:/dev/sda1 and ceph-deploy osd activate osd1:/dev/sda1. It said conf files were
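The ceph-deploy workflow described above can be sketched as follows (Jewel-era two-step syntax; "osd1" is the OSD host and /dev/sda1 the data partition from the message):

```shell
# Prepare the device: partitions/formats it and writes the OSD metadata.
ceph-deploy osd prepare osd1:/dev/sda1

# Activate it: mounts the partition and starts the ceph-osd daemon.
ceph-deploy osd activate osd1:/dev/sda1
```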

Re: [ceph-users] OSD doesnt start after reboot

2018-05-03 Thread David Turner
Please keep the mailing list in your responses. What steps did you follow when configuring your OSDs? On Fri, May 4, 2018, 12:14 AM Akshita Parekh wrote: > Ceph v10.2.0 (Jewel). Why is ceph-disk or ceph-volume required to > configure disks? Encryption where? > > On Thu,

Re: [ceph-users] GDPR encryption at rest

2018-05-03 Thread Alfredo Deza
On Thu, May 3, 2018 at 1:22 PM, David Turner wrote: > The process to create an encrypted bluestore OSD is very simple to make them > utilize dmcrypt (literally just add --dmcrypt to the exact same command you > would run normally to create the OSD). The gotcha is that I

Re: [ceph-users] GDPR encryption at rest

2018-05-03 Thread David Turner
The process to create an encrypted bluestore OSD is very simple: to make it use dmcrypt, literally just add --dmcrypt to the exact same command you would normally run to create the OSD. The gotcha is that I had to find the option by using --help with ceph-volume from the CLI. I was unable
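A minimal sketch of the approach David describes, assuming a ceph-volume bluestore deployment (/dev/sdb is a placeholder device):

```shell
# Create a dmcrypt-encrypted bluestore OSD; --dmcrypt is the only addition
# to the normal creation command.
ceph-volume lvm create --bluestore --dmcrypt --data /dev/sdb

# As noted above, the flag shows up in the CLI help rather than prominently
# in the docs:
ceph-volume lvm create --help | grep dmcrypt
```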

Re: [ceph-users] Bluestore on HDD+SSD sync write latency experiences

2018-05-03 Thread Nick Fisk
Hi Dan, Quoting Dan van der Ster : Hi Nick, Our latency probe results (4kB rados bench) didn't change noticeably after converting a test cluster from FileStore (sata SSD journal) to BlueStore (sata SSD db). Those 4kB writes take 3-4ms on average from a random VM in our

Re: [ceph-users] Bluestore on HDD+SSD sync write latency experiences

2018-05-03 Thread Dan van der Ster
Hi Nick, Our latency probe results (4kB rados bench) didn't change noticeably after converting a test cluster from FileStore (sata SSD journal) to BlueStore (sata SSD db). Those 4kB writes take 3-4ms on average from a random VM in our data centre. (So bluestore DB seems equivalent to FileStore
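A 4 kB latency probe along the lines Dan describes could be run like this (the pool name "testpool" and the 60-second duration are assumptions, not from the thread):

```shell
# Single-threaded 4 KiB writes; average latency appears in the summary
# (~3-4 ms was observed in the thread above).
rados bench -p testpool 60 write -b 4096 -t 1 --no-cleanup

# Remove the benchmark objects afterwards.
rados -p testpool cleanup
```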

Re: [ceph-users] Bluestore on HDD+SSD sync write latency experiences

2018-05-03 Thread Alex Gorbachev
On Thu, May 3, 2018 at 6:54 AM, Nick Fisk wrote: > -Original Message- > From: Alex Gorbachev > Sent: 02 May 2018 22:05 > To: Nick Fisk > Cc: ceph-users > Subject: Re: [ceph-users] Bluestore on

Re: [ceph-users] OSD doesnt start after reboot

2018-05-03 Thread David Turner
Which version of ceph? Filestore or bluestore? Did you use ceph-disk, ceph-volume, or something else to configure the OSDs? Did you use LVM? Is there encryption or any other layer involved? On Thu, May 3, 2018, 6:45 AM Akshita Parekh wrote: > Hi All, > > > after every

Re: [ceph-users] Bluestore on HDD+SSD sync write latency experiences

2018-05-03 Thread Nick Fisk
Hi Nick, On 5/1/2018 11:50 PM, Nick Fisk wrote: Hi all, Slowly getting round to migrating clusters to Bluestore but I am interested in how people are handling the potential change in write latency coming from Filestore? Or maybe nobody is really seeing much difference? As we all know, in

Re: [ceph-users] CentOS release 7.4.1708 and selinux-policy-base >= 3.13.1-166.el7_4.9

2018-05-03 Thread ceph . novice
Hi Ruben and community. Thanks a lot for all the help and hints. Finally I figured out that "base" is also part of, e.g., "selinux-policy-minimum". After installing this pkg via "yum install", the usual "ceph installation" continues... Seems like the "ceph packaging" is too RHEL-oriented
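The fix described above amounts to (a sketch; the exact repository setup is not shown in the thread):

```shell
# selinux-policy-minimum also provides the "base" policy capability that
# ceph-selinux requires, satisfying the dependency.
yum install selinux-policy-minimum

# After that, the usual ceph installation (e.g. via ceph-deploy) continues.
```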

Re: [ceph-users] Bluestore on HDD+SSD sync write latency experiences

2018-05-03 Thread Nick Fisk
-Original Message- From: Alex Gorbachev Sent: 02 May 2018 22:05 To: Nick Fisk Cc: ceph-users Subject: Re: [ceph-users] Bluestore on HDD+SSD sync write latency experiences Hi Nick, On Tue, May 1, 2018 at 4:50 PM,

[ceph-users] OSD doesnt start after reboot

2018-05-03 Thread Akshita Parekh
Hi All, after every reboot the current, superblock, etc. files disappear from /var/lib/ceph/osd/ceph-0 (ceph-1, etc.), and I have to prepare and activate the OSD after every reboot. Any suggestions? ceph.target and ceph-osd are enabled. Thanks in advance!
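One common cause of this symptom (an assumption on my part, not confirmed in the thread) is that the OSD data partition is simply not mounted at boot, so the mount point looks empty. A quick check and a manual workaround for ceph-disk based OSDs:

```shell
# Is the OSD partition actually mounted?
mount | grep /var/lib/ceph/osd

# ceph-disk activation at boot is normally handled by udev; a manual
# workaround is to activate the partition yourself (/dev/sda1 is a placeholder):
ceph-disk activate /dev/sda1
```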

Re: [ceph-users] CentOS release 7.4.1708 and selinux-policy-base >= 3.13.1-166.el7_4.9

2018-05-03 Thread John Hearns
Anton, if you still cannot install the ceph RPMs because of that dependency, do as Ruben suggests: install selinux-policy-targeted. Then you can use the RPM option --nodeps, which will ignore the dependency requirements. Do not be afraid to use this option, and do not use it blindly either.
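A hedged sketch of the workaround John describes (the RPM filename is inferred from the error message elsewhere in the thread):

```shell
# Install the targeted policy first, as Ruben suggested.
yum install selinux-policy-targeted

# Then install the ceph package while skipping dependency checking.
# --nodeps ignores ALL dependency requirements, so verify afterwards that
# the selinux policy files the package expects are actually present.
rpm -ivh --nodeps ceph-selinux-12.2.5-0.el7.x86_64.rpm
```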

Re: [ceph-users] CentOS release 7.4.1708 and selinux-policy-base >= 3.13.1-166.el7_4.9

2018-05-03 Thread Ruben Kerkhof
On Thu, May 3, 2018 at 1:33 AM, wrote: > > Hi all. Hi Anton, > > We try to setup our first CentOS 7.4.1708 CEPH cluster, based on Luminous > 12.2.5. What we get is: > > > Error: Package: 2:ceph-selinux-12.2.5-0.el7.x86_64 (Ceph-Luminous) >Requires:

Re: [ceph-users] ceph-mgr not able to modify max_misplaced in 12.2.4

2018-05-03 Thread nokia ceph
Hi John Spray, Now I am able to update the max_misplaced parameter successfully and am validating it. We are using the balancer with mode upmap and it starts redistributing the PGs. We observed that the backfill wait increases a lot; can we create any plan in the balancer to restrict the PG backfilling
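The setup described above corresponds roughly to the following Luminous mgr balancer commands (the max_misplaced value is an example, not from the thread):

```shell
# Cap the fraction of PGs allowed to be misplaced at once; this indirectly
# limits how much backfill each balancer run can trigger.
ceph config-key set mgr/balancer/max_misplaced 0.01

# Use the upmap mode and enable automatic balancing.
ceph balancer mode upmap
ceph balancer on
```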