Re: [ceph-users] ceph-create-keys loops

2019-05-06 Thread ST Wong (ITSC)
Some more information: RPMs installed on the MONs: python-cephfs-13.2.5-0.el7.x86_64 ceph-base-13.2.5-0.el7.x86_64 libcephfs2-13.2.5-0.el7.x86_64 ceph-common-13.2.5-0.el7.x86_64 ceph-selinux-13.2.5-0.el7.x86_64 ceph-mon-13.2.5-0.el7.x86_64. Doing a mon_status gives the following. Only the local host
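For reference, a monitor's status can be queried directly on the MON host; a minimal sketch (the mon id is a placeholder for the local hostname):

  # Query the local monitor via its admin socket (substitute the real mon id):
  ceph daemon mon.$(hostname -s) mon_status
  # Cluster-wide quorum view:
  ceph quorum_status --format json-pretty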

Re: [ceph-users] Prioritized pool recovery

2019-05-06 Thread Kyle Brantley
On 5/6/2019 6:37 PM, Gregory Farnum wrote: Hmm, I didn't know we had this functionality before. It looks to be changing quite a lot at the moment, so be aware this will likely require reconfiguring later. Good to know, and not a problem. In any case, I'd assume it won't change substantially

Re: [ceph-users] Prioritized pool recovery

2019-05-06 Thread Gregory Farnum
Hmm, I didn't know we had this functionality before. It looks to be changing quite a lot at the moment, so be aware this will likely require reconfiguring later. On Sun, May 5, 2019 at 10:40 AM Kyle Brantley wrote: > > I've been running luminous / ceph-12.2.11-0.el7.x86_64 on CentOS 7 for about
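The functionality being discussed is the per-pool recovery priority; a minimal sketch of how it is set, assuming a pool named "mypool" (per the note above, the accepted values and behaviour may change in later releases):

  # Raise recovery priority for one pool relative to the others:
  ceph osd pool set mypool recovery_priority 5
  ceph osd pool get mypool recovery_priority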

Re: [ceph-users] CRUSH rule device classes mystery

2019-05-06 Thread Gregory Farnum
What's the output of "ceph -s" and "ceph osd tree"? On Fri, May 3, 2019 at 8:58 AM Stefan Kooman wrote: > > Hi List, > > I'm playing around with CRUSH rules and device classes and I'm puzzled > if it's working correctly. Platform specifics: Ubuntu Bionic with Ceph 14.2.1 > > I created two new
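When debugging device-class rules, the per-class shadow hierarchy and the rule definition are usually the first things to check; a minimal sketch (the rule name is a placeholder):

  ceph osd tree
  ceph osd crush tree --show-shadow      # shows the shadow buckets, e.g. default~ssd
  ceph osd crush rule dump <rulename>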

Re: [ceph-users] ceph-create-keys loops

2019-05-06 Thread ST Wong (ITSC)
Yes, we're using 3.2 stable, on RHEL 7. Thanks. From: solarflow99 Sent: Tuesday, May 7, 2019 1:40 AM To: ST Wong (ITSC) Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] ceph-create-keys loops You mention the version of Ansible; that is right. How about the branch of ceph-ansible?

Re: [ceph-users] Nautilus (14.2.0) OSDs crashing at startup after removing a pool containing a PG with an unrepairable error

2019-05-06 Thread Gregory Farnum
Thanks guys! Sage found the issue preventing PG removal from working right; that is going through testing now and should make the next Nautilus release. https://github.com/ceph/ceph/pull/27929 Apparently the device health metrics are doing something slightly naughty, so hopefully that's easy

[ceph-users] EPEL packages issue

2019-05-06 Thread Mohammad Almodallal
Hello, I need to install Ceph Nautilus from a local repository. I downloaded all the packages from the Ceph site and created a local repository on the servers; the servers don't have internet access. But whenever I try to install Ceph, it tries to install the EPEL release, then the installation was
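An offline install generally needs the EPEL dependencies mirrored locally as well as the Ceph packages; a sketch of a local repo definition, with hypothetical paths:

  # /etc/yum.repos.d/ceph-local.repo  (illustrative)
  [ceph-local]
  name=Ceph Nautilus local mirror
  baseurl=file:///opt/repos/ceph-nautilus
  enabled=1
  gpgcheck=0

  # The EPEL packages Ceph depends on must be mirrored into a similar local
  # repo, otherwise the install will try to pull epel-release from the internet.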

Re: [ceph-users] Ceph OSD fails to start : direct_read_unaligned error No data available

2019-05-06 Thread Marc Roos
The reason you moved to Ceph storage is that you do not want to have to deal with such things. Remove the drive and let Ceph recover. On May 6, 2019 11:06 PM, Florent B wrote: > > Hi, > > It seems that the OSD disk is dead (hardware problem); the badblocks command > returns a lot of bad blocks. > > Is there
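A minimal sketch of taking a failed OSD out and letting the cluster backfill (the OSD id is a placeholder):

  ceph osd out osd.<id>
  systemctl stop ceph-osd@<id>          # on the host with the dead drive
  ceph -s                               # wait for recovery/backfill to finish
  ceph osd purge <id> --yes-i-really-mean-it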

Re: [ceph-users] rbd ssd pool for (windows) vms

2019-05-06 Thread Marc Roos
But what happens if the guest OS has trim enabled and qemu did not have the discard option set? Should some fsck be run to correct this? (Sorry, this is getting a bit off topic here.) -Original Message- From: Jason Dillaman [mailto:jdill...@redhat.com] Sent: Wednesday, May 1, 2019
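For context on the discard path: the guest's trim requests only reach RBD when the virtual disk passes them through; a sketch, assuming a qemu-style drive option (libvirt exposes the same thing as discard='unmap' on the driver element; pool and image names are placeholders):

  # qemu drive option that passes guest discards down to RBD (illustrative):
  -drive format=raw,file=rbd:rbd/vm-disk,discard=unmap,cache=writeback
  # Inside the guest, reclaim previously-written space afterwards:
  fstrim -av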

Re: [ceph-users] ceph-create-keys loops

2019-05-06 Thread solarflow99
You mention the version of Ansible; that is right. How about the branch of ceph-ansible? It should be 3.2-stable. What OS? I haven't come across this problem myself, though I've hit a lot of other ones. On Mon, May 6, 2019 at 3:47 AM ST Wong (ITSC) wrote: > Hi all, > > > > I have a problem deploying Mimic
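For comparison, the stable-3.2 branch is typically paired with group_vars entries along these lines; a sketch only, with placeholder values and assumed variable names:

  # group_vars/all.yml (illustrative)
  ceph_origin: repository
  ceph_repository: community
  ceph_stable_release: mimic
  public_network: 10.0.0.0/24
  monitor_interface: eth0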

[ceph-users] Degraded pgs during async randwrites

2019-05-06 Thread Nathan Fish
Hello all, I'm testing out a new cluster that we hope to put into production soon. Performance has overall been great, but there's one benchmark that not only stresses the cluster, but causes it to degrade - async randwrites. The benchmark: # The file was previously laid out with dd'd random data
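The exact job file is cut off above; for illustration only, an async random-write fio job of the kind being described might look like this (paths and sizes are placeholders):

  [global]
  ioengine=libaio        ; asynchronous I/O engine
  direct=1
  iodepth=32
  runtime=120
  time_based

  [randwrite]
  rw=randwrite
  bs=4k
  filename=/mnt/cephfs/testfile
  size=10G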

Re: [ceph-users] Unexplainable high memory usage OSD with BlueStore

2019-05-06 Thread Igor Fedotov
Hi Kenneth, Mimic 13.2.5 has the previous version of the bitmap allocator, which isn't recommended for use. Please revert. The new bitmap allocator will be available starting with 13.2.6. Thanks, Igor On 5/6/2019 4:19 PM, Kenneth Waegeman wrote: Hi all, I am also switching OSDs to the new bitmap
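Reverting here means pointing the OSDs back at the allocator that 13.2.5 shipped as default; a minimal sketch of the ceph.conf override (option names as used in Mimic-era releases), followed by restarting the affected OSDs:

  [osd]
  bluestore_allocator = stupid
  bluefs_allocator = stupid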

Re: [ceph-users] Unexplainable high memory usage OSD with BlueStore

2019-05-06 Thread Kenneth Waegeman
Hi all, I am also switching OSDs to the new bitmap allocator on 13.2.5. That has gone quite smoothly so far, except for one OSD that keeps segfaulting when I enable the bitmap allocator. Each time I disable the bitmap allocator on it again, the OSD is fine again. Segfault error of the OSD: --- begin

Re: [ceph-users] Ceph Multi Mds Trim Log Slow

2019-05-06 Thread Lars Täuber
I restarted the mds process which was in the "up:stopping" state. Since then there is no trimming falling behind any more. All (sub)directories are accessible as normal again. It seems there are stability issues with snapshots in a multi-MDS CephFS on Nautilus. This has already been suspected here:
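For anyone hitting the same warning, the MDS states and the trim warning can be checked like this (the daemon name is a placeholder):

  ceph fs status
  ceph health detail                    # reports "MDSs behind on trimming" when applicable
  systemctl restart ceph-mds@<name>     # on the affected MDS host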

[ceph-users] ceph-create-keys loops

2019-05-06 Thread ST Wong (ITSC)
Hi all, I have a problem deploying Mimic using ceph-ansible at the following step: -- cut here --- TASK [ceph-mon : collect admin and bootstrap keys] * Monday 06 May 2019 17:01:23 +0800 (0:00:00.854) 0:05:38.899 fatal: [cphmon3a]:

Re: [ceph-users] rbd ssd pool for (windows) vms

2019-05-06 Thread Janne Johansson
On Mon, 6 May 2019 at 10:03, Marc Roos wrote: > > Yes, but those 'changes' can be relayed via the kernel rbd driver, can't they? > Besides, I don't think you can move an rbd block device that is in use to a > different pool anyway. > > No, but you can move the whole pool, which takes all RBD images with it. >
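Moving "the whole pool" in practice means pointing its CRUSH rule at the other device class; a minimal sketch, with placeholder rule and pool names:

  ceph osd crush rule create-replicated rbd-ssd default host ssd
  ceph osd pool set <poolname> crush_rule rbd-ssd
  # Data migrates in the background; the RBD images stay usable throughout.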

Re: [ceph-users] rbd ssd pool for (windows) vms

2019-05-06 Thread Marc Roos
Yes, but those 'changes' can be relayed via the kernel rbd driver, can't they? Besides, I don't think you can move an rbd block device that is in use to a different pool anyway. On the manual page [0] there is nothing mentioned about configuration settings needed for rbd use, nor for SSD. They are also