way to get the OSD working again? I am thinking about waiting for the
backfill/recovery to finish and then upgrading all nodes to 12.2.10,
and if the OSD doesn't come up, recreating the OSD.
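(For anyone following along, this is roughly what I mean; osd.12 and
/dev/sdX are just placeholders for my own setup:

    # watch the backfill/recovery progress
    ceph -s
    ceph health detail

    # if the OSD still does not come up after the upgrade, recreate it
    ceph osd out 12
    ceph osd purge 12 --yes-i-really-mean-it
    ceph-volume lvm create --data /dev/sdX
)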
Regards,
Cassiano Pilipavicius.
FYI,
I have updated some OSDs from 12.2.6 that were suffering from the CRC
error, and 12.2.7 fixed the issue!
I installed some new OSDs on 12/07 without being aware of the issue,
and in my small cluster I only noticed the problem when I was trying
to copy some RBD images to another
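(To check whether a cluster is hit by this, what I used was simply the
following; pg 2.37 is just an example id from my own output:

    ceph health detail            # look for scrub errors / inconsistent PGs
    rados list-inconsistent-obj 2.37 --format=json-pretty
)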
Hi David, for me something similar happened when I upgraded from jewel
to luminous, and I discovered that the problem was the memory allocator.
I had changed to jemalloc in jewel to improve performance, and when I
upgraded to bluestore in luminous my OSDs started to crash.
I've
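(A quick way to confirm which allocator a running OSD actually loaded,
assuming a single ceph-osd process on the node for simplicity:

    grep -E 'jemalloc|tcmalloc' /proc/$(pidof -s ceph-osd)/maps
)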
:37 PM, Alex Gorbachev <a...@iss-integration.com> wrote:
On Wed, Mar 7, 2018 at 9:43 AM, Cassiano Pilipavicius
<cassi...@tips.com.br> wrote:
Hi all, this issue has already been discussed in older threads and I've
already tried most of the solutions proposed there.
I have a small and old ceph cluster (started in hammer and upgraded
until luminous 12.2.2), connected through a single shared 1GbE link (I
know this is not
I have a small cluster on 12.2.1, used only for storing VHDs on RBD, and
it is pretty stable so far. I've upgraded from jewel to luminous, and the
only thing that caused me instability right after the upgrade was that I
was using jemalloc for the OSDs, and after converting the OSDs to
bluestore
Hi Oscar, exclusive-locking should not interfere with live-migration. I
have a small virtualization cluster backed by ceph/rbd and I can migrate
all the VMs whose RBD images have exclusive-lock enabled without any issue.
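(For reference, this is how I check and toggle the feature; rbd/vm-disk-1
is a made-up image name:

    rbd info rbd/vm-disk-1                           # "features:" line shows exclusive-lock
    rbd feature enable rbd/vm-disk-1 exclusive-lock
    rbd lock ls rbd/vm-disk-1                        # shows who currently holds the lock
)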
On 11/14/2017 9:47 AM, Oscar Segarra wrote:
Hi Konstantin,
Thanks a
last_change 36 flags hashpspool stripe_width 0
Regards
Prabu GJ
On Sun, 12 Nov 2017 19:20:34 +0530, Cassiano Pilipavicius
<cassi...@tips.com.br> wrote:
I am also not an expert, but it looks like you have big data volumes on
few PGs. From what I've seen, the PG data is only deleted from the old
OSD once it has been completely copied to the new OSD.
So, if one PG has 100G, for example, only when it is fully copied to the
new OSD will the space be
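(On luminous, this is more or less how I keep an eye on it; nothing fancy:

    ceph pg ls backfilling      # PGs still being copied to their new OSDs
    ceph osd df                 # per-OSD utilization while the data moves
)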
Hello, I have a problem with OSDs crashing after upgrading to
bluestore/luminous, due to the fact that I was using jemalloc, and it
seems that there is a bug with bluestore OSDs and jemalloc. Changing to
tcmalloc solved my issues. I don't know if you have the same issue, but in
my environment, the
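(On my Ubuntu nodes the jemalloc preload was set in /etc/default/ceph, so
reverting was just commenting that line out and restarting the OSDs; the
exact file and library path may differ on other distros:

    # /etc/default/ceph
    # LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1   <- comment out

    systemctl restart ceph-osd@12       # repeat for each OSD id
)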