Re: [ceph-users] 12.2.7 - Available space decreasing when adding disks

2018-07-21 Thread Glen Baars
Thanks for the reply! It ended up being that the HDD pool on this server is larger than on the other servers. This increases the server's CRUSH weight, and therefore the SSD pool on this server is affected. I will add more SSDs to this server to keep the ratio of HDDs to SSDs the same across all hosts.
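
A quick way to confirm this kind of imbalance is to compare the per-device-class host weights; on Luminous, something along these lines should show it (a sketch, not commands from the original thread):

  # CRUSH tree including the per-class shadow hierarchy, so each host's
  # hdd and ssd weights can be compared separately.
  ceph osd crush tree --show-shadow

  # Per-OSD utilisation grouped under each host bucket.
  ceph osd df tree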

Re: [ceph-users] Self shutdown of 1 whole system (Debian stretch/Ceph 12.2.7/bluestore)

2018-07-21 Thread Oliver Freyermuth
Since all services are running on these machines - are you by any chance running low on memory? Do you have monitoring for this? We observe some strange issues with our servers when they run for a long while under high memory pressure (more memory is ordered...). Then, it seems our
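
If memory pressure is the suspect, a couple of quick checks along these lines can help (a generic sketch, not commands from the thread):

  # Current memory and swap usage.
  free -h

  # Look for OOM-killer activity in the kernel log of the current boot.
  journalctl -k | grep -i -E 'out of memory|oom'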

Re: [ceph-users] 12.2.7 - Available space decreasing when adding disks

2018-07-21 Thread Linh Vu
Something funny going on with your new disks (ceph osd df columns: ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS):

  138 ssd 0.90970 1.0 931G 820G 111G 88.08 2.71 216  Added
  139 ssd 0.90970 1.0 931G 771G 159G 82.85 2.55 207  Added
  140 ssd 0.90970 1.0 931G 709G 222G 76.12 2.34 197  Added
  141 ssd 0.90970 1.0 931G 664G 267G 71.31

Re: [ceph-users] bluestore lvm scenario confusion

2018-07-21 Thread Gilles Mocellin
On Saturday, 21 July 2018 at 15:56:31 CEST, Satish Patel wrote: > I am trying to deploy ceph-ansible with the lvm OSD scenario and reading > http://docs.ceph.com/ceph-ansible/master/osds/scenarios.html > > I have all SSD disks and no separate journal; my plan was to > keep the WAL/DB on the same
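
For an all-SSD setup with no separate journal device, collocating the WAL/DB with the data is what ceph-volume does by default when only a data device is given; roughly (a sketch, the device name is an assumption):

  # Create a bluestore OSD with data, WAL and DB all on the same device.
  # Without --block.db / --block.wal arguments everything is collocated.
  ceph-volume lvm create --bluestore --data /dev/sdb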

[ceph-users] Why LVM is the recommended method for bluestore

2018-07-21 Thread Satish Patel
Folks, I think I am going to boil the ocean here. I have googled a lot about why LVM is the recommended method for bluestore, but I didn't find any good, detailed explanation, not even on the official Ceph website. Can someone explain it here in basic language? I am in no way an expert, so I just want to
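
Part of the usual answer is that ceph-volume stores OSD metadata as LVM tags on the logical volumes, so OSDs can be discovered and activated without the fragile udev/partition-GUID detection that ceph-disk relied on. On an existing ceph-volume OSD this can be inspected with something like (a sketch, output will vary):

  # List logical volumes with the ceph.* tags that ceph-volume attaches,
  # e.g. ceph.osd_id, ceph.osd_fsid, ceph.cluster_name, ceph.type.
  sudo lvs -o lv_name,vg_name,lv_tags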

Re: [ceph-users] JBOD question

2018-07-21 Thread Willem Jan Withagen
On 21/07/2018 01:45, Oliver Freyermuth wrote: Hi Satish, that really depends entirely on your controller. This is what I get on an older AMCC 9550 controller. Note that the disk type is set to JBOD, but the disk descriptors are hidden. And you'll never know what else is not done right.
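
One way to see how much of a disk's identity survives the controller is to compare what the OS reports (a sketch; device names are examples only):

  # Model and serial as seen by the block layer.
  lsblk -o NAME,MODEL,SERIAL,SIZE

  # Full SMART identity; many RAID controllers need a -d option here
  # (e.g. -d 3ware,N for 3ware/AMCC cards) to reach the physical disk.
  smartctl -i /dev/sda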

Re: [ceph-users] JBOD question

2018-07-21 Thread Alex Gorbachev
On Fri, Jul 20, 2018 at 4:01 PM, Satish Patel wrote: > Folks, > > I have never used JBOD mode before and am now planning to, so I have a stupid > question: if I switch the RAID controller to JBOD mode, how > will my OS disk get mirrored? > > Do I need to use software RAID for the OS disk when I use
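
If the controller no longer mirrors the OS disk in JBOD mode, the usual answer is software RAID1 with mdadm, roughly like this (a sketch; the partitions and device names are assumptions):

  # Mirror two OS-disk partitions into one md device.
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

  # Persist the array definition so it assembles at boot
  # (the file is /etc/mdadm.conf on some distributions).
  mdadm --detail --scan >> /etc/mdadm/mdadm.conf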

Re: [ceph-users] 12.2.7 - Available space decreasing when adding disks

2018-07-21 Thread Glen Baars
Hello Shawn, thanks for the info. I had not considered that each host has a weight as well. By default it seems to be the overall size of all the disks in that system. The systems whose SSDs are getting full are the ones where the additional HDD capacity is making that host weight
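
The host weight is visible directly in the CRUSH tree, where each host bucket's weight is the sum of the weights of the OSDs beneath it (a sketch, not output from the thread):

  # Host bucket weights appear next to each 'host' entry and should
  # roughly match the summed size of the OSDs under that host.
  ceph osd tree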

[ceph-users] bluestore lvm scenario confusion

2018-07-21 Thread Satish Patel
I am trying to deploy ceph-ansible with the lvm OSD scenario and am reading http://docs.ceph.com/ceph-ansible/master/osds/scenarios.html I have all SSD disks and no separate journal; my plan is to keep the WAL/DB on the same disk because everything is SSD and the same speed. ceph-ansible doesn't create the LVM volumes, so I
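
At that point the lvm scenario expected the volume group and logical volume to already exist, so they would need to be created up front, along these lines (a sketch; the VG/LV names and device are assumptions):

  # Prepare one SSD as a physical volume and a volume group.
  pvcreate /dev/sdb
  vgcreate ceph-ssd-vg /dev/sdb

  # One logical volume for the OSD data; with no separate WAL/DB volumes,
  # bluestore keeps the WAL and DB on this same volume.
  lvcreate -l 100%FREE -n osd-data ceph-ssd-vg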

Re: [ceph-users] 12.2.7 - Available space decreasing when adding disks

2018-07-21 Thread Shawn Iverson
Glen, correction... I was looking at the wrong column for the weights, my bad. You have varying weights, but the process is still the same: balance your buckets (hosts) in your CRUSH map, and balance your OSDs within each bucket (host). On Sat, Jul 21, 2018 at

Re: [ceph-users] 12.2.7 - Available space decreasing when adding disks

2018-07-21 Thread Shawn Iverson
Glen, it appears you have 447G, 931G, and 558G disks in your cluster, all with a weight of 1.0. This means that although the new disks are bigger, they are not going to be utilized by PGs any more than any other disk. I would suggest reweighting your other disks (they are smaller), so that you
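
The usual convention is to set each OSD's CRUSH weight to its capacity in TiB, so the weight tracks the disk size; for example (a sketch, the OSD id is made up):

  # A 447G disk gets roughly 0.437 (447/1024); a 931G disk about 0.909.
  ceph osd crush reweight osd.12 0.437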

Re: [ceph-users] Self shutdown of 1 whole system (Debian stretch/Ceph 12.2.7/bluestore)

2018-07-21 Thread Nicolas Huillard
I forgot to mention that this server, along with all the other Ceph servers in my cluster, does not run anything other than Ceph, and each runs all the Ceph daemons (mon, mgr, mds, 2×osd). On Saturday, 21 July 2018 at 10:31 +0200, Nicolas Huillard wrote: > Hi all, > > One of my servers silently

Re: [ceph-users] Issues/questions: ceph df (luminous 12.2.7)

2018-07-21 Thread Sébastien VIGNERON
Hi,

> On 21 July 2018 at 11:52, Marc Roos wrote:
>
> 1. Why is ceph df not always showing 'units' G M k

Ceph's default plain output shows human-readable values.

> [@c01 ~]# ceph df
> GLOBAL:
>     SIZE       AVAIL      RAW USED     %RAW USED
>     81448G     31922G     49526G
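
If exact byte values are needed rather than the rounded human-readable units, the structured output formats can be used instead (a sketch; both forms exist in Luminous):

  # Raw byte counts in JSON, handy for scripts or monitoring.
  ceph df --format json-pretty

  # More per-pool detail in the plain output.
  ceph df detail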

[ceph-users] Issues/questions: ceph df (luminous 12.2.7)

2018-07-21 Thread Marc Roos
1. Why is ceph df not always showing 'units' G M k

  [@c01 ~]# ceph df
  GLOBAL:
      SIZE       AVAIL      RAW USED     %RAW USED
      81448G     31922G     49526G       60.81
  POOLS:
      NAME             ID     USED     %USED     MAX AVAIL     OBJECTS
      iscsi-images

[ceph-users] Self shutdown of 1 whole system (Debian stretch/Ceph 12.2.7/bluestore)

2018-07-21 Thread Nicolas Huillard
Hi all, one of my servers silently shut down last night, with no explanation whatsoever in any logs. According to the existing logs, the shutdown (without a reboot) happened between 03:58:20.061452 (the last timestamp in /var/log/ceph/ceph-mgr.oxygene.log) and 03:59:01.515308 (new MON election called,
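
When a machine powers off without leaving anything in the logs, a few standard places are still worth checking (a generic sketch, not commands from the thread):

  # Shutdown/reboot records kept in wtmp, with timestamps.
  last -x shutdown reboot

  # Messages from the previous boot, if persistent journalling is enabled.
  journalctl -b -1 -e

  # Hardware events (thermal trips, power faults) recorded by the BMC.
  ipmitool sel list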