[ceph-users] Re: Small HDD cluster, switch from Bluestore to Filestore

2019-08-15 Thread Robert LeBlanc
The overall latency in the cluster may be too high, but it was worth a shot. I've noticed that these settings really narrow the latency distribution so that it becomes more predictable, and they prevented some single VMs from hanging for long periods of time while others worked just fine, usually when

[ceph-users] Re: Small HDD cluster, switch from Bluestore to Filestore

2019-08-15 Thread Rich Bade
Unfortunately the scsi reset on this vm happened again last night, so this hasn't resolved the issue. Thanks for the suggestion though. Rich

[ceph-users] Re: Mapped rbd is very slow

2019-08-15 Thread Vitaliy Filippov
rbd -p kube bench kube/bench --io-type write --io-threads 1 --io-total 10G --io-pattern rand
elapsed: 14  ops: 262144  ops/sec: 17818.16  bytes/sec: 72983201.32
It's a totally unreal number. Something is wrong with the test. Test it with `fio` please: fio -ioengine=rbd -name=test
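Purely as an illustration (everything after -name=test here is a guess at typical parameters, not the rest of the original command), a single-threaded 4k random-write test against the same image with fio's rbd engine could look like:

  fio -ioengine=rbd -name=test -pool=kube -rbdname=bench \
      -direct=1 -rw=randwrite -bs=4k -iodepth=1 -runtime=60 -time_based

At an iodepth of 1, anywhere near 17k write ops/sec from a single thread over the network would be implausible, which is presumably why the rbd bench figure above looks unreal.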

[ceph-users] Re: Upgrade luminous -> nautilus, any pointers?

2019-08-15 Thread Marc Roos
I have a fairly dormant Ceph Luminous cluster on CentOS 7 with the stock kernel, and thought about upgrading it before putting it to more use. I remember a page on the Ceph website with specific instructions for upgrading from Luminous, but I can't find it anymore; this page[0]
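What I remember of the procedure is roughly the following, but I'd like to confirm it against the official upgrade notes before starting:

  ceph osd set noout
  # upgrade packages and restart the mons first, then the mgrs,
  # then the OSDs, then MDS/RGW daemons
  ceph versions                           # confirm everything reports nautilus
  ceph osd require-osd-release nautilus
  ceph osd unset noout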

[ceph-users] Re: Upgrade luminous -> mimic, any pointers?

2019-08-15 Thread Marc Roos
Pfff, you are right, I don't even know which one is the latest, indeed Nautilus. -Original Message- Subject: Re: [ceph-users] Upgrade luminous -> mimic, any pointers? Why would you go to Mimic instead of Nautilus? > > > > I have a fairly dormant ceph luminous cluster on

[ceph-users] Re: Mgr stability

2019-08-15 Thread Reed Dier
I had already disabled the prometheus plugin (again, only using it for the rbd stats), but will also remove the rbd pool from the rbd_support module, as well as disable the rbd_support module. It seems slightly more stable so far, but still not rock solid as it was before. Thanks, Reed > On Aug 15,
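For reference, a rough sketch of the commands involved (names from memory, so double-check against your release; the per-image stats collection is controlled by the prometheus module's rbd_stats_pools option):

  ceph config set mgr mgr/prometheus/rbd_stats_pools ""
  ceph mgr module disable prometheus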

[ceph-users] How to tune the ceph balancer in nautilus

2019-08-15 Thread Manuel Lausch
Hi, I am playing around with the ceph balancer in Luminous and Nautilus. While tuning some balancer settings I experienced some problems with Nautilus. In Luminous I could configure the max_misplaced value like this: ceph config-key set mgr/balancer/max_misplaced 0.002 With the same command in
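For what it's worth, I suspect the Nautilus equivalent has moved to a regular mgr option, something along the lines of

  ceph config set mgr target_max_misplaced_ratio 0.002

but the option name should be verified against the Nautilus docs.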

[ceph-users] Re: Mgr stability

2019-08-15 Thread Mykola Golub
On Wed, Aug 14, 2019 at 12:12:36PM -0500, Reed Dier wrote: > My main metrics source is the influx plugin, but I enabled the > prometheus plugin to get access to the per-rbd image metrics. I may > disable prometheus and see if that yields better stability, until > possibly the influx plugin gets

[ceph-users] Re: Failing heartbeats when no backfill is running

2019-08-15 Thread Lorenz Kiefner
Oh no, it's not that bad. It's $ ping -s 65000 dest.inati.on on a VPN connection that has an MTU of 1300 via IPv6. So I suspect that I only get an answer when all 51 fragments get fully returned. It's clear that big packets with lots of fragments are more affected by packet loss than 64 byte
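To put rough numbers on it: if each fragment independently has, say, a 1% chance of being dropped, then a 51-fragment request plus its 51-fragment reply all arriving intact happens only about 0.99^102 ≈ 36% of the time, while an unfragmented 64-byte ping and its reply get through about 0.99^2 ≈ 98% of the time (illustrative loss rate, not a measurement of this link).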

[ceph-users] Re: "Signature check failed" from certain clients

2019-08-15 Thread Hector Martin
On 15/08/2019 11.42, Peter Sarossy wrote: Hey folks, I spent the past 2 hours digging through the forums and similar sources with no luck. I use ceph storage for docker stacks, and this issue has taken the whole thing down as I cannot mount their volumes back... Starting yesterday, some of