Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-07-22 Thread Thode Jocelyn
Hi, Yes, my rbd-mirror is collocated with my mon/osd. It only affects nodes where they are collocated, as they all use the "/etc/sysconfig/ceph" configuration file. Best Jocelyn Thode -Original Message- From: Vasu Kulkarni [mailto:vakul...@redhat.com] Sent: Friday, 20 July 2018
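
For context, the setting in question lives in that file and is read by the Ceph systemd units on the host, which is why collocated daemons cannot use different cluster names. A minimal sketch (the cluster name "backup" is illustrative; on Debian-based systems the equivalent file is /etc/default/ceph):

    # /etc/sysconfig/ceph -- sourced by the ceph systemd units on this host,
    # so the cluster name applies to every collocated daemon
    CLUSTER=backup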

[ceph-users] radosgw: S3 object retention: high usage of default.rgw.log pool

2018-07-22 Thread Konstantin Shalygin
Hi. A bucket used by a backup application has an S3 retention policy applied: at 04:00, backups older than 2 days are deleted from the bucket. At this time I see very high usage of the default.rgw.log pool. The usage log is enabled, the ops log is disabled, and the index pool is on NVMe: - https://ibb.co/dozqPJ -
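
A lifecycle rule of the kind described, applied through the S3 API against the RGW endpoint, might look like this (a sketch; the endpoint, bucket name, and rule ID are illustrative):

    # lifecycle.json: {"Rules": [{"ID": "expire-backups", "Status": "Enabled",
    #                  "Filter": {"Prefix": ""}, "Expiration": {"Days": 2}}]}
    aws --endpoint-url http://rgw.example.com:7480 \
        s3api put-bucket-lifecycle-configuration \
        --bucket backups --lifecycle-configuration file://lifecycle.json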

Re: [ceph-users] Self shutdown of 1 whole system (Debian stretch/Ceph 12.2.7/bluestore)

2018-07-22 Thread Konstantin Shalygin
I even have no fancy kernel or device, just real standard Debian. The uptime was 6 days since the upgrade from 12.2.6... Nicolas, you should upgrade your 12.2.6 to 12.2.7 due to bugs in this release. http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-July/028153.html k
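
After such an upgrade it is worth confirming that every daemon is actually running the fixed release (generic checks, not specific to this thread):

    # Summary of the versions reported by running daemons (Luminous and later)
    ceph versions
    # Per-daemon detail if something still reports 12.2.6
    ceph tell osd.* version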

Re: [ceph-users] 12.2.7 - Available space decreasing when adding disks

2018-07-22 Thread Konstantin Shalygin
Hello Ceph Users, We have added more SSD storage to our Ceph cluster last night. We added 4 x 1TB drives and the available space went from 1.6TB to 0.6TB (in `ceph df` for the SSD pool). I would assume that the weight needs to be changed, but I didn't think I would need to? Should I change
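
When available space drops after adding OSDs, the usual first check is whether the new drives were weighted correctly (a generic diagnostic sketch; osd.12 and the weight value are illustrative):

    # Per-OSD CRUSH weight and utilisation; a 1 TB drive should show
    # a weight of roughly 0.909 (CRUSH weights are expressed in TiB)
    ceph osd df tree
    # Correct a mis-weighted OSD if necessary
    ceph osd crush reweight osd.12 0.909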

Re: [ceph-users] Error bluestore doesn't support lvm

2018-07-22 Thread Konstantin Shalygin
I am using openstack-ansible with ceph-ansible to deploy my Ceph cluster, and here is my config in the yml file:

    ---
    osd_objectstore: bluestore
    osd_scenario: lvm
    lvm_volumes:
      - data: /dev/sdb
      - data: /dev/sdc
      - data: /dev/sdd
      - data: /dev/sde

This is the error I am getting.. TASK
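
Under the lvm scenario, ceph-ansible hands each lvm_volumes entry to ceph-volume; per device, the deployment boils down to roughly this (a sketch, using the devices listed above):

    # Create a BlueStore OSD on a raw device; ceph-volume creates the
    # LVM volume group and logical volume itself
    ceph-volume lvm create --bluestore --data /dev/sdb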

Re: [ceph-users] Cephfs kernel driver availability

2018-07-22 Thread Bryan Henderson
> Kernel 3.16 is not *the* LTS kernel but *an* LTS kernel. The current LTS > kernel is 4.14 Thanks for clarifying that. I guess I forgot how long I've been trying to get Ceph to work. When I started, 3.16 was the current LTS kernel! Had I known that it's so stable that serious bugs are left in

Re: [ceph-users] Cephfs kernel driver availability

2018-07-22 Thread Paul Emmerich
2018-07-22 22:02 GMT+02:00 Bryan Henderson: > Linux kernel 3.16 (the current long term stable Linux kernel) and so far > > So what are other people using? A less stable kernel? An out-of-tree > driver? > FUSE? Is there a working process for getting known bugs fixed in 3.16? > > Kernel 3.16 is

Re: [ceph-users] Cephfs kernel driver availability

2018-07-22 Thread Jack
FUSE. On 07/22/2018 10:02 PM, Bryan Henderson wrote: > Is there some better place to get a filesystem driver for the longterm > stable Linux kernel (3.16) than the regular kernel.org source distribution? > > The reason I ask is that I have been trying to get some clients running > Linux kernel
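
The FUSE client sidesteps the kernel version question entirely; mounting is a one-liner (a sketch; the monitor address and mount point are illustrative):

    # Mount CephFS through the userspace client -- no kernel driver needed
    ceph-fuse -m mon1.example.com:6789 /mnt/cephfs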

[ceph-users] Cephfs kernel driver availability

2018-07-22 Thread Bryan Henderson
Is there some better place to get a filesystem driver for the longterm stable Linux kernel (3.16) than the regular kernel.org source distribution? The reason I ask is that I have been trying to get some clients running Linux kernel 3.16 (the current long term stable Linux kernel) and so far I
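
For reference, the in-tree driver being discussed is the one behind the standard kernel mount, along these lines (a sketch; the host name and secret file are illustrative):

    # Mount CephFS with the in-kernel client via the mount.ceph helper
    mount -t ceph mon1.example.com:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret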

Re: [ceph-users] Self shutdown of 1 whole system (Debian stretch/Ceph 12.2.7/bluestore)

2018-07-22 Thread Nicolas Huillard
On Sunday 22 July 2018 at 02:44 +0200, Oliver Freyermuth wrote: > Since all services are running on these machines - are you by any > chance running low on memory? > Do you have monitoring of this? I have Munin monitoring on all hosts, but nothing special to notice, except for a +3°C

Re: [ceph-users] Why lvm is recommended method for bluestore

2018-07-22 Thread Satish Patel
I read that post, and that's why I opened this thread for a few more questions and clarification. When you said the OSD doesn't come up, what does that actually mean? After a reboot of the node, after a service restart, or after installation of a new disk? You said we are using the manual method; what is that? I'm building new

Re: [ceph-users] Why lvm is recommended method for bluestore

2018-07-22 Thread Marc Roos
I don't think it will get any more basic than that. Or maybe this? If the doctor diagnoses you, you can either accept this, get a 2nd opinion, or study medicine to verify it. In short, LVM has been introduced to solve some issues related to starting OSDs (which I did not have, probably
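
The startup issue LVM addresses is OSD activation: ceph-volume records the OSD metadata in LVM tags, so discovery at boot no longer depends on the udev/partition-GUID machinery that ceph-disk used (a sketch of the relevant commands):

    # Show OSDs and the metadata ceph-volume stored in their LVM tags
    ceph-volume lvm list
    # Re-activate all ceph-volume OSDs, e.g. after a reboot
    ceph-volume lvm activate --all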

Re: [ceph-users] Default erasure code profile and sustaining loss of one host containing 4 OSDs

2018-07-22 Thread Christian Wuerdig
Generally the recommendation is: if your redundancy is X, you should have at least X+1 entities in your failure domain to allow Ceph to automatically self-heal. Given your setup of 6 servers and failure domain host, you should select k+m=5 at most, so 3+2 should make for a good profile in your
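
Creating such a profile and an erasure-coded pool from it is short (a sketch; the profile name, pool name, and PG count are illustrative):

    # 3 data chunks + 2 coding chunks, one chunk per host: the pool
    # survives the loss of any 2 hosts, and 6 hosts leave one to spare
    ceph osd erasure-code-profile set ec-3-2 k=3 m=2 crush-failure-domain=host
    ceph osd pool create ecpool 64 64 erasure ec-3-2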