More follow-up because, obviously, this is a weird problem. I was able to
start up a Luminous mgr and have it successfully join my 13.2.4 cluster. I still
can't get a 13.2.4 mgr to join; I still get the same error I've had before.
(See previously in the thread.)
It definitely seems like something is
Thank you, Bryan, for the information. We have 816 OSDs of 2 TB each.
The "mon store too big" warning popped up even though no rebalancing had
happened in that month. It is slightly above the 15360 MB threshold, at around
15900 or 16100 MB, and stayed there for more than a week. We ran "ceph tell
mon.[ID] compact" to get
I believe the option you're looking for is mon_data_size_warn. The default is
set to 16106127360.
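That value is just 15 GiB expressed in bytes; a quick sanity check, plus a sketch of raising the threshold (the `ceph config set` invocation is Mimic-era syntax and is my assumption for how you would change it, not something stated in the thread):

```shell
# mon_data_size_warn defaults to 15 GiB, expressed in bytes
echo $((15 * 1024 * 1024 * 1024))   # prints 16106127360

# Sketch: raise the warning threshold to 32 GiB (Mimic+ syntax; on older
# releases you would set mon_data_size_warn in ceph.conf instead)
# ceph config set mon mon_data_size_warn $((32 * 1024 * 1024 * 1024))
```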
I've found that sometimes the mons need a little help getting started with
trimming if you just completed a large expansion. Earlier today I had a
cluster where the mon's data directory was over
I have a cluster with over 1900 OSDs running Luminous (12.2.8) that isn't
cleaning up old osdmaps after doing an expansion. This is even after the
cluster became 100% active+clean:
# find /var/lib/ceph/osd/ceph-1754/current/meta -name 'osdmap*' | wc -l
46181
With the osdmaps being over 600KB
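A back-of-envelope calculation (assuming roughly 600 KB per map, as observed above) shows why untrimmed osdmaps add up quickly:

```shell
# 46181 osdmaps at ~600 KB each is roughly 26 GiB of map history per OSD
echo $(( 46181 * 600 * 1024 / 1024 / 1024 / 1024 ))   # prints 26
```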
Thanks, Greg. This is as I suspected. Ceph is full of subtleties and I wanted
to be sure.
-- aad
The osd_map_cache_size controls the OSD’s cache of maps; the change in
13.2.3 is to the default for the monitors’.
On Mon, Jan 7, 2019 at 8:24 AM Anthony D'Atri wrote:
>
>
> > * The default memory utilization for the mons has been increased
> > somewhat. Rocksdb now uses 512 MB of RAM by
On 1/7/19 3:15 PM, Giovani Rinaldi wrote:
Hello!
I've been wondering if range requests are more efficient than doing "whole"
requests for relatively large objects (100MB-1GB).
More precisely, my doubt concerns the use of OSD/RGW resources; that
is, is the entire object retrieved from the OSD only to be sliced
afterwards? Or only
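To make the question concrete, here is a local sketch of byte-range slicing; the commented `curl` line shows the equivalent HTTP Range request against an RGW S3 endpoint (the hostname, bucket, and object names are placeholders, not from the thread):

```shell
# Against RGW's S3 endpoint, a range read is an HTTP Range request, e.g.:
#   curl -H "Range: bytes=0-1048575" http://rgw.example.com/bucket/obj
# (rgw.example.com/bucket/obj is a placeholder.)
# Locally, slicing the first 1 MiB out of a larger object looks like:
dd if=/dev/zero of=object.bin bs=1M count=10 2>/dev/null   # 10 MiB test object
dd if=object.bin of=slice.bin bs=1M count=1 2>/dev/null    # first 1 MiB only
wc -c < slice.bin   # prints 1048576
```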
Hello Mahmoud,
On Fri, Dec 21, 2018 at 7:44 AM Mahmoud Ismail
wrote:
> I'm doing benchmarks for metadata operations on CephFS, HDFS, and HopsFS on
> Google Cloud. In my current setup, I'm using 32 vCPU machines with 29 GB
> memory, and I have 1 MDS, 1 MON and 3 OSDs. The MDS and the MON nodes
Hi,
We have a large Ceph cluster (Hammer version). We recently saw its mon
store growing too big, > 15 GB on all 3 monitors, without any rebalancing
happening for quite some time. We have compacted the DB using "ceph tell
mon.[ID] compact" for now. But is there a way to increase the size of the
mon
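For reference, besides the one-off `ceph tell mon.[ID] compact` the poster ran, there is an option to compact the store at every daemon start. This is a sketch; check the option's availability and spelling against your release's documentation:

```ini
; ceph.conf sketch: compact the monitor's store each time the daemon starts.
[mon]
mon compact on start = true
```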
I upgraded to 13.2.4 and, unsurprisingly, it did not solve the problem.
ceph-mgr still fails. What else do I need to look at to try to solve this?
Thanks.
On Fri, Jan 4, 2019 at 3:20 PM Randall Smith wrote:
> Some more info that may or may not matter. :-) First off, I am running
> 13.2.3 on
> * The default memory utilization for the mons has been increased
> somewhat. Rocksdb now uses 512 MB of RAM by default, which should
> be sufficient for small to medium-sized clusters; large clusters
> > should tune this up. Also, the `mon_osd_cache_size` has been
> > increased from 10
Hi,
>>* Ceph v13.2.2 includes a wrong backport, which may cause mds to go into
>>'damaged' state when upgrading Ceph cluster from previous version.
>>The bug is fixed in v13.2.3. If you are already running v13.2.2,
>>upgrading to v13.2.3 does not require special action.
Any special action
I don't think libvirt has any facilities to list the snapshots of an
image for the purposes of display. It appears, after a quick scan of
the libvirt RBD backend [1] that it only internally lists image
snapshots for maintenance reasons.
[1]
How do you configure libvirt so it sees the snapshots already created on
the rbd image it is using for the VM?
I already have a VM running, connected to the rbd pool via
protocol='rbd', and rbd snap ls is showing the snapshots.
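For context, a libvirt disk definition of the kind being described looks roughly like this (pool, image, and monitor address are placeholders, not taken from the thread); note that the domain XML itself carries no snapshot information:

```xml
<!-- Sketch of a libvirt disk attached over RBD; names are placeholders. -->
<disk type='network' device='disk'>
  <source protocol='rbd' name='rbdpool/vm-disk'>
    <host name='mon.example.com' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```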
I am having an issue with the change from pg 8 to pg 16:
[@c01 ceph]# ceph osd df | egrep '^ID|^19|^20|^21|^30'
ID CLASS WEIGHT  REWEIGHT SIZE   USE    AVAIL  %USE  VAR  PGS
19   ssd 0.48000      1.0 447GiB 161GiB 286GiB 35.91 0.84  35
20   ssd 0.48000      1.0 447GiB 170GiB 277GiB 38.09 0.89  36
Hello,
the errors are not resolved. Here is what I tried so far, without luck:
I have added a sixth monitor (ceph-mon06), then I deleted the first one
(ceph-mon01)
The mon IDs shifted down (mon02 was ID 1, now it is 0, and so on...)
This is the actual monmap:
0: 192.168.50.21:6789/0 mon.ceph-mon02
1:
This is the fourth bugfix release of the Mimic v13.2.x long term stable
release series. This release includes two security fixes on top of v13.2.3.
We recommend all users upgrade to this version. If you've already
upgraded to v13.2.3, the same restrictions from v13.2.2->v13.2.3 apply
here as well.
Arun,
This is what I already suggested in my first reply.
Kind regards,
Caspar
On Sat, Jan 5, 2019 at 06:52, Arun POONIA <
arun.poo...@nuagenetworks.net> wrote:
> Hi Kevin,
>
> You are right. Increasing the number of PGs per OSD resolved the issue. I will
> probably add this config in
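The setting being alluded to is not named in the thread; assuming a Luminous-or-later cluster, the option governing how many PGs an OSD may hold before the monitors refuse further increases is `mon_max_pg_per_osd`, and a sketch of raising it would be:

```ini
; ceph.conf sketch: allow more PGs per OSD before pool creation or pg_num
; increases are refused. The option name and the example value of 300 are
; my assumptions, not stated in the thread (the Luminous+ default is 250).
[global]
mon max pg per osd = 300
```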