[ceph-users] OSD wont start after moving to a new node with ceph 12.2.10

2018-11-27 Thread Cassiano Pilipavicius
ay to get the OSD working again? I am thinking about waiting for the backfill/recovery to finish and then upgrading all nodes to 12.2.10, and if the OSD doesn't come up, recreating the OSD. Regards, Cassiano Pilipavicius.
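A minimal sketch of that wait-for-recovery check, using the rados Python binding (the conf path and the "every PG is active+clean" criterion are my assumptions, not something from the original post):

    import json
    import rados

    # Sketch: poll cluster status and report whether every PG is active+clean
    # before upgrading or recreating the OSD.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ret, out, errs = cluster.mon_command(
            json.dumps({'prefix': 'status', 'format': 'json'}), b'')
        pg_states = json.loads(out)['pgmap']['pgs_by_state']
        clean = all(s['state_name'] == 'active+clean' for s in pg_states)
        print('all PGs active+clean' if clean else 'backfill/recovery still running')
    finally:
        cluster.shutdown()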

Re: [ceph-users] v12.2.7 Luminous released

2018-07-17 Thread Cassiano Pilipavicius
FYI, I have updated some OSDs from 12.2.6 that were suffering from the CRC error and 12.2.7 fixed the issue! I installed some new OSDs on 12/07 without being aware of the issue, and in my small clusters, I only noticed the problem when I was trying to copy some RBD images to another
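A small sketch for confirming the upgrade actually landed on every daemon, assuming the 'versions' command (added in luminous) can be issued through mon_command in the rados Python binding:

    import json
    import rados

    # Sketch: print how many daemons report each ceph version, so a cluster
    # only partially upgraded to 12.2.7 is easy to spot.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ret, out, errs = cluster.mon_command(
            json.dumps({'prefix': 'versions', 'format': 'json'}), b'')
        for version, count in json.loads(out)['overall'].items():
            print('%3d daemons on %s' % (count, version))
    finally:
        cluster.shutdown()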

Re: [ceph-users] Backfilling on Luminous

2018-03-15 Thread Cassiano Pilipavicius
Hi David, something similar happened to me when I upgraded from jewel to luminous, and I discovered that the problem was the memory allocator. I had switched to JEMalloc in jewel to improve performance, and when I upgraded to bluestore in luminous my OSDs started to crash. I've
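For anyone in the same situation, a quick way to see which allocator each OSD process actually loaded is to scan /proc; a rough Python sketch (Linux-only, paths assumed):

    import glob

    # Sketch: report whether each running ceph-osd process has jemalloc or
    # tcmalloc mapped, by scanning /proc/<pid>/maps.
    for cmdline_path in glob.glob('/proc/[0-9]*/cmdline'):
        pid = cmdline_path.split('/')[2]
        try:
            with open(cmdline_path, 'rb') as f:
                if b'ceph-osd' not in f.read():
                    continue
            with open('/proc/%s/maps' % pid) as f:
                maps = f.read()
        except OSError:
            continue  # process exited or not readable
        if 'jemalloc' in maps:
            alloc = 'jemalloc'
        elif 'tcmalloc' in maps:
            alloc = 'tcmalloc'
        else:
            alloc = 'glibc malloc or unknown'
        print('ceph-osd pid %s -> %s' % (pid, alloc))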

Re: [ceph-users] improve single job sequencial read performance.

2018-03-08 Thread Cassiano Pilipavicius
:37 PM, Alex Gorbachev <a...@iss-integration.com> wrote: On Wed, Mar 7, 2018 at 9:43 AM, Cassiano Pilipavicius <cassi...@tips.com.br> wrote: Hi all, this issue has already been discussed in older threads and I've already tried most of the solutions proposed there. I

[ceph-users] improve single job sequencial read performance.

2018-03-07 Thread Cassiano Pilipavicius
Hi all, this issue has already been discussed in older threads and I've already tried most of the solutions proposed there. I have a small and old ceph cluster (started in hammer and upgraded up to luminous 12.2.2), connected through a single shared 1GbE link (I know this is not
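A crude single-stream read test straight against librbd can help separate cluster throughput from guest/VM settings; a sketch using the rbd Python binding (pool and image names below are placeholders):

    import time
    import rados
    import rbd

    # Sketch: read up to 1 GiB sequentially from an RBD image in 4 MiB chunks
    # and print the achieved throughput.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')                        # pool name assumed
    image = rbd.Image(ioctx, 'test-image', read_only=True)   # image name assumed
    try:
        chunk = 4 * 1024 * 1024
        total = min(image.size(), 1024 * 1024 * 1024)
        start, offset = time.time(), 0
        while offset < total:
            image.read(offset, min(chunk, total - offset))
            offset += chunk
        print('%.1f MiB/s' % (total / (time.time() - start) / (1024 * 1024)))
    finally:
        image.close()
        ioctx.close()
        cluster.shutdown()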

Re: [ceph-users] Is the 12.2.1 really stable? Anybody have production cluster with Luminous Bluestore?

2017-11-17 Thread Cassiano Pilipavicius
I have a small cluster on 12.2.1, used only for storing VHDs on RBD, and it has been pretty stable so far. I upgraded from jewel to luminous, and the only thing that caused me instability right after the upgrade was that I was using JEMalloc for the OSDs, and after converting the OSDs to bluestore

Re: [ceph-users] features required for live migration

2017-11-14 Thread Cassiano Pilipavicius
Hi Oscar, exclusive-lock should not interfere with live migration. I have a small virtualization cluster backed by ceph/rbd and I can migrate all the VMs whose RBD images have exclusive-lock enabled without any issue. On 11/14/2017 9:47 AM, Oscar Segarra wrote: Hi Konstantin, Thanks a
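To double-check which images actually have the feature turned on, a sketch with the rbd Python binding (pool name is a placeholder, and the features()/RBD_FEATURE_EXCLUSIVE_LOCK names are assumptions on my side):

    import rados
    import rbd

    # Sketch: list every image in a pool and whether exclusive-lock is enabled.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')                        # pool name assumed
    try:
        for name in rbd.RBD().list(ioctx):
            image = rbd.Image(ioctx, name, read_only=True)
            try:
                has_lock = bool(image.features() & rbd.RBD_FEATURE_EXCLUSIVE_LOCK)
                print('%s: exclusive-lock %s' % (name, 'on' if has_lock else 'off'))
            finally:
                image.close()
    finally:
        ioctx.close()
        cluster.shutdown()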

Re: [ceph-users] OSD is near full and slow in accessing storage from client

2017-11-12 Thread Cassiano Pilipavicius
last_change 36 flags hashpspool stripe_width 0 Regards, Prabu GJ On Sun, 12 Nov 2017 19:20:34 +0530, Cassiano Pilipavicius <cassi...@tips.com.br> wrote: I am also not an expert, but it looks like you have big data volumes on a few PGs. From what I've seen, the pg data i

Re: [ceph-users] OSD is near full and slow in accessing storage from client

2017-11-12 Thread Cassiano Pilipavicius
I am also not an expert, but it looks like you have big data volumes on a few PGs. From what I've seen, the PG data is only deleted from the old OSD when it has been completely copied to the new OSD. So, if one PG holds 100G, for example, only once it is fully copied to the new OSD will the space be
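A small sketch for watching the space actually being freed per OSD while PGs move, using the rados Python binding and the JSON form of 'osd df' (conf path assumed):

    import json
    import rados

    # Sketch: print per-OSD utilization, so you can see usage drop on the old
    # OSD only after whole PGs finish copying to the new one.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ret, out, errs = cluster.mon_command(
            json.dumps({'prefix': 'osd df', 'format': 'json'}), b'')
        for node in json.loads(out)['nodes']:
            print('osd.%-3d %5.1f%% used' % (node['id'], node['utilization']))
    finally:
        cluster.shutdown()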

Re: [ceph-users] OSD crashed while reparing inconsistent PG luminous

2017-10-17 Thread Cassiano Pilipavicius
Hello, I had a problem with OSDs crashing after upgrading to bluestore/luminous, because I was using JEMalloc and there seems to be a bug with bluestore OSDs and jemalloc. Changing to tcmalloc solved my issues. I don't know if you have the same issue, but in my environment, the