Hi ceph-users,
I'm a bit stuck with librados, particularly the rados_cache_pin function. For
some reason it returns "Invalid argument" (error code 22), and I can't find
what I'm doing wrong. In rados_cache_pin I'm using the same ioctx and object
name as I use with rados_write, which works just fine. Any help is appreciated.
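For reference, a minimal sketch of the call sequence in question (cluster setup, the pool name "hotpool", and the object name "myobj" are placeholders; as far as I can tell, rados_cache_pin is only expected to succeed against a pool that participates in cache tiering, so EINVAL on a plain pool would not be surprising):

#include <rados/librados.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t io;
    int ret;

    /* Connect as client.admin using the default ceph.conf search path. */
    if ((ret = rados_create(&cluster, "admin")) < 0 ||
        (ret = rados_conf_read_file(cluster, NULL)) < 0 ||
        (ret = rados_connect(cluster)) < 0) {
        fprintf(stderr, "connect failed: %s\n", strerror(-ret));
        return 1;
    }

    /* "hotpool" is a placeholder; cache pinning targets a cache-tier pool. */
    if ((ret = rados_ioctx_create(cluster, "hotpool", &io)) < 0) {
        fprintf(stderr, "ioctx: %s\n", strerror(-ret));
        rados_shutdown(cluster);
        return 1;
    }

    /* Same ioctx and object name for both calls, as in the report. */
    if ((ret = rados_write(io, "myobj", "data", 4, 0)) < 0)
        fprintf(stderr, "rados_write: %s\n", strerror(-ret));

    if ((ret = rados_cache_pin(io, "myobj")) < 0)
        fprintf(stderr, "rados_cache_pin: %s\n", strerror(-ret)); /* -22 = EINVAL */

    rados_ioctx_destroy(io);
    rados_shutdown(cluster);
    return 0;
}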
We have a 5-node cluster, all monitors, installed with cephadm. Recently the
hosts needed to be rebooted for upgrades, but as we rebooted them, the hosts
failed their cephadm check. As you can see, ceph1 is in quorum and is the host
the command is run from. Following is the output of ceph -s and ceph
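(As a side note, the same check can be re-run by hand on an affected host, assuming the standalone cephadm binary is installed there: cephadm check-host.)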
Hi Dan,
Thanks for this info, will do these upgrades.
-----Original Message-----
Cc: ceph-users
Subject: Re: [ceph-users] bug of the year (with compressed omap and lz4 1.7(?))
Good question!
Have you already observed any performance impact from very large PGs?
Which PG locks are you speaking of? Is there perhaps some way to
improve this with the op queue shards?
(I'm cc'ing Mark in case this is something that the performance team
has already looked into).
With a 20 TB OSD,
Hi Marc,
As far as I know, the osdmap corruption occurs with this osd config:
bluestore_compression_mode=aggressive
bluestore_compression_algorithm=lz4
(My understanding is that if you don't have the above settings, but
use pool-specific compression settings instead, then the osdmaps are
not compressed.)
One factor is RAM usage; IIRC that was the motivation for lowering the
recommended ratio from 200 to 100. Memory needs also increase during
recovery and backfill.
When calculating, be sure to consider replicas:
ratio = (pgp_num x replication) / num_osds
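For example (the numbers are just illustrative): with pgp_num = 512,
replication = 3, and 16 OSDs, the ratio is (512 x 3) / 16 = 96 PGs per OSD,
right around the recommended 100.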
As HDDs grow the inte
Dear Ceph folks,
As the capacity of one HDD (OSD) grows bigger and bigger, e.g. from 6TB up
to 18TB or even more, should the number of PGs per OSD increase as well, e.g.
from 200 to 800? As far as I know, the capacity of each PG should be kept
smaller for performance reasons due to the existe
I am still running 14.2.9 with lz4-1.7.5-3. Will I run into this bug if I
enable compression on a pool with:
ceph osd pool set POOL_NAME compression_algorithm COMPRESSION_ALGORITHM
ceph osd pool set POOL_NAME compression_mode COMPRESSION_MODE
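For example, with a placeholder pool name "mypool" and the lz4/aggressive
combination discussed in this thread:
ceph osd pool set mypool compression_algorithm lz4
ceph osd pool set mypool compression_mode aggressive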
osd_compact_on_start is not in the OSD config reference. The thread at [1]
discusses the "slow rados ls" solution. Compacting with "ceph-kvstore-tool
bluestore-kv" on the failed OSD was done, but there was no change. After an
export-remove of all PGs on the failed OSD, the failed OSD started
successfully! The crashing ceph-osd node is caused by
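For reference, the two operations mentioned above look roughly like this
(the OSD data path and the pgid are placeholders):
ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-0 compact
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 1.0 --op export-remove --file /tmp/pg1.0.export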
Hi Patrick,
thanks for the reply
On Fri, 2020-09-04 at 10:25 -0700, Patrick Donnelly wrote:
> > We then started using the cephfs (we keep VM images on the cephfs).
> > The MDS were showing an error. I restarted the MDS but they didn't
> > come back. We then followed the instructions here:
> > h