Re: [ceph-users] ceph-mgr fails to restart after upgrade to mimic

2019-01-07 Thread Randall Smith
More follow-up because, obviously, this is a weird problem. I was able to start up a Luminous mgr and have it successfully join my 13.2.4 cluster. I still can't get a 13.2.4 mgr to join, and I still get the same error I've had before (see earlier in the thread). It definitely seems like something is

Re: [ceph-users] Is it possible to increase Ceph Mon store?

2019-01-07 Thread Pardhiv Karri
Thank you, Bryan, for the information. We have 816 OSDs of 2TB each. The "mon store too big" warning popped up even though no rebalancing had happened that month. The store is slightly above the 15360 MB threshold, around 15900-16100 MB, and has stayed there for more than a week. We ran "ceph tell mon.[ID] compact" to get
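
For anyone following along, a minimal sketch of checking the store size and running the compaction discussed here (default data paths assumed; replace the [ID] placeholder with your own monitor ID):

    # size of the monitor's on-disk store
    du -sh /var/lib/ceph/mon/ceph-[ID]/store.db
    # one-off manual compaction of that monitor's store
    ceph tell mon.[ID] compact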

Re: [ceph-users] Is it possible to increase Ceph Mon store?

2019-01-07 Thread Bryan Stillwell
I believe the option you're looking for is mon_data_size_warn. The default is set to 16106127360 (15 GiB). I've found that sometimes the mons need a little help getting started with trimming if you've just completed a large expansion. Earlier today I had a cluster where the mon's data directory was over
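
If the goal is only to quiet the warning rather than shrink the store, the threshold itself can be raised; a rough sketch (the 20 GiB value is purely illustrative, and whether injectargs takes effect without a mon restart depends on the release):

    # ceph.conf on the monitor hosts
    [mon]
    mon_data_size_warn = 21474836480   # 20 GiB; the default is 16106127360 (15 GiB)

    # or inject at runtime (not persistent across restarts)
    ceph tell mon.* injectargs '--mon_data_size_warn=21474836480'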

[ceph-users] osdmaps not being cleaned up in 12.2.8

2019-01-07 Thread Bryan Stillwell
I have a cluster with over 1900 OSDs running Luminous (12.2.8) that isn't cleaning up old osdmaps after doing an expansion. This is even after the cluster became 100% active+clean:

# find /var/lib/ceph/osd/ceph-1754/current/meta -name 'osdmap*' | wc -l
46181

With the osdmaps being over 600KB
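
A quick sketch for sizing the problem, assuming the same FileStore layout as the find command above and that the mon report still exposes the committed-epoch fields as it does on Luminous:

    # count cached osdmap files on every local OSD
    for osd in /var/lib/ceph/osd/ceph-*; do
        echo -n "$osd: "
        find "$osd/current/meta" -name 'osdmap*' 2>/dev/null | wc -l
    done
    # compare with the epoch range the monitors still hold
    ceph report 2>/dev/null | grep -E 'osdmap_(first|last)_committed'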

Re: [ceph-users] Questions re mon_osd_cache_size increase

2019-01-07 Thread Anthony D'Atri
Thanks, Greg. This is as I suspected. Ceph is full of subtleties and I wanted to be sure. -- aad > > The osd_map_cache_size controls the OSD’s cache of maps; the change in 13.2.3 > is to the default for the monitors’. > On Mon, Jan 7, 2019 at 8:24 AM Anthony D'Atri

Re: [ceph-users] Questions re mon_osd_cache_size increase

2019-01-07 Thread Gregory Farnum
The osd_map_cache_size controls the OSD’s cache of maps; the change in 13.2.3 is to the default for the monitors’. On Mon, Jan 7, 2019 at 8:24 AM Anthony D'Atri wrote: > > > > * The default memory utilization for the mons has been increased > > somewhat. Rocksdb now uses 512 MB of RAM by
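
To make the distinction concrete, a sketch of where each knob lives (the values are illustrative, not recommendations):

    # ceph.conf
    [osd]
    osd_map_cache_size = 50     # per-OSD cache of osdmaps
    [mon]
    mon_osd_cache_size = 500    # monitor-side cache whose default changed in 13.2.3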

Re: [ceph-users] rgw/s3: performance of range requests

2019-01-07 Thread Casey Bodley
On 1/7/19 3:15 PM, Giovani Rinaldi wrote: Hello! I've been wondering if range requests are more efficient than doing "whole" requests for relatively large objects (100MB-1GB). More precisely, my question concerns the use of OSD/RGW resources: is the entire object retrieved
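
For reference, a range request is expressed purely on the client side; a minimal sketch using the AWS CLI against an RGW endpoint (endpoint, bucket and key are placeholders):

    # fetch only the first 4 MiB of the object
    aws --endpoint-url https://rgw.example.com s3api get-object \
        --bucket mybucket --key large-object \
        --range bytes=0-4194303 part.bin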

[ceph-users] rgw/s3: performance of range requests

2019-01-07 Thread Giovani Rinaldi
Hello! I've been wondering if range requests are more efficient than doing "whole" requests for relatively large objects (100MB-1GB). More precisely, my question concerns the use of OSD/RGW resources: is the entire object retrieved from the OSD only to be sliced afterwards? Or only

Re: [ceph-users] CephFS MDS optimal setup on Google Cloud

2019-01-07 Thread Patrick Donnelly
Hello Mahmoud, On Fri, Dec 21, 2018 at 7:44 AM Mahmoud Ismail wrote: > I'm doing benchmarks for metadata operations on CephFS, HDFS, and HopsFS on > Google Cloud. In my current setup, I'm using 32 vCPU machines with 29 GB > memory, and I have 1 MDS, 1 MON and 3 OSDs. The MDS and the MON nodes

[ceph-users] Is it possible to increase Ceph Mon store?

2019-01-07 Thread Pardhiv Karri
Hi, We have a large Ceph cluster (Hammer version). We recently saw its mon store growing too big, >15GB on all 3 monitors, without any rebalancing happening for quite some time. We have compacted the DB using "#ceph tell mon.[ID] compact" for now. But is there a way to increase the size of the mon
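
Besides the one-off "ceph tell mon.[ID] compact", there is also an option to compact the store on every monitor start; a sketch (whether this is worthwhile on a Hammer-era leveldb store is a judgment call, and it lengthens mon startup):

    # ceph.conf on the monitor hosts
    [mon]
    mon_compact_on_start = true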

Re: [ceph-users] ceph-mgr fails to restart after upgrade to mimic

2019-01-07 Thread Randall Smith
I upgraded to 13.2.4 and, unsurprisingly, it did not solve the problem. ceph-mgr still fails. What else do I need to look at to try to solve this? Thanks. On Fri, Jan 4, 2019 at 3:20 PM Randall Smith wrote: > Some more info that may or may not matter. :-) First off, I am running > 13.2.3 on

[ceph-users] Questions re mon_osd_cache_size increase

2019-01-07 Thread Anthony D'Atri
> * The default memory utilization for the mons has been increased > somewhat. Rocksdb now uses 512 MB of RAM by default, which should > be sufficient for small to medium-sized clusters; large clusters > should tune this up. Also, the `mon_osd_cache_size` has been > increased from 10

Re: [ceph-users] v13.2.4 Mimic released

2019-01-07 Thread Alexandre DERUMIER
Hi, >>* Ceph v13.2.2 includes a wrong backport, which may cause mds to go into >>'damaged' state when upgrading Ceph cluster from previous version. >>The bug is fixed in v13.2.3. If you are already running v13.2.2, >>upgrading to v13.2.3 does not require special action. Any special action

Re: [ceph-users] Configure libvirt to 'see' already created snapshots of a vm rbd image

2019-01-07 Thread Jason Dillaman
I don't think libvirt has any facilities to list the snapshots of an image for the purposes of display. It appears, after a quick scan of the libvirt RBD backend [1] that it only internally lists image snapshots for maintenance reasons. [1]
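
In other words, snapshot listing stays on the Ceph side rather than in libvirt; a minimal sketch with the rbd CLI (pool, image and snapshot names are placeholders):

    # list snapshots of the image the VM is using
    rbd snap ls libvirt-pool/vm-disk-1
    # inspect one snapshot
    rbd info libvirt-pool/vm-disk-1@snap1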

[ceph-users] Configure libvirt to 'see' already created snapshots of a vm rbd image

2019-01-07 Thread Marc Roos
How do you configure libvirt so it sees the snapshots already created on the rbd image it is using for the vm? I already have a vm running, connected to the rbd pool via protocol='rbd', and rbd snap ls is showing the snapshots.

Re: [ceph-users] Balancer=on with crush-compat mode

2019-01-07 Thread Marc Roos
I am having an issue with the change from pg 8 to pg 16:

[@c01 ceph]# ceph osd df | egrep '^ID|^19|^20|^21|^30'
ID CLASS WEIGHT  REWEIGHT SIZE   USE    AVAIL  %USE  VAR  PGS
19  ssd  0.48000 1.0      447GiB 161GiB 286GiB 35.91 0.84 35
20  ssd  0.48000 1.0      447GiB 170GiB 277GiB 38.09 0.89 36
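
For context, the usual sequence for turning this mode on looks roughly like this (a sketch; on some releases the balancer module is already enabled by default):

    ceph mgr module enable balancer
    ceph balancer mode crush-compat
    ceph balancer on
    # check progress and the current plan
    ceph balancer status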

Re: [ceph-users] ERR scrub mismatch

2019-01-07 Thread Marco Aroldi
Hello, the errors are not resolved. Here is what I have tried so far, without luck: I added a sixth monitor (ceph-mon06), then deleted the first one (ceph-mon01). The mon IDs shifted back (mon02 was ID 1, now it is 0, and so on...). This is the current monmap: 0: 192.168.50.21:6789/0 mon.ceph-mon02 1:
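
A quick way to confirm how the ranks were reassigned after the add/remove (output formatting varies a little by release):

    # current monmap with ranks and addresses
    ceph mon dump
    # quorum membership and the current leader
    ceph quorum_status -f json-pretty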

[ceph-users] v13.2.4 Mimic released

2019-01-07 Thread Abhishek Lekshmanan
This is the fourth bugfix release of the Mimic v13.2.x long term stable release series. This release includes two security fixes on top of v13.2.3. We recommend all users upgrade to this version. If you've already upgraded to v13.2.3, the same restrictions from v13.2.2->v13.2.3 apply here as well.

Re: [ceph-users] Help Ceph Cluster Down

2019-01-07 Thread Caspar Smit
Arun, This is what i already suggested in my first reply. Kind regards, Caspar Op za 5 jan. 2019 om 06:52 schreef Arun POONIA < arun.poo...@nuagenetworks.net>: > Hi Kevin, > > You are right. Increasing number of PGs per OSD resolved the issue. I will > probably add this config in