[ceph-users] Glance client and RBD export checksum mismatch

2019-04-09 Thread Brayan Perera
Dear All, Ceph version: 12.2.5-2.ge988fb6.el7. We are facing an issue with Glance, which has its backend set to Ceph: when we try to create an instance or volume from an image, it throws a checksum error. When we use rbd export and md5sum, the value matches the Glance checksum. When we use
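For reference, a minimal way to compare the two checksums from the command line might look like this; the pool name, image UUID, and use of the OpenStack CLI are assumptions, not details from the thread:

    # checksum of the image data as stored in RADOS (pool/image are placeholders)
    rbd export images/<image-uuid> - | md5sum
    # checksum Glance recorded for the same image
    openstack image show <image-uuid> -c checksum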

Re: [ceph-users] how to trigger offline filestore merge

2019-04-09 Thread Dan van der Ster
Hi again, Thanks to a hint from another user I seem to have gotten past this. The trick was to restart the OSDs with a positive merge threshold (10), then cycle through rados bench several hundred times, e.g. while true ; do rados bench -p default.rgw.buckets.index 10 write -b 4096 -t 128;
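Spelled out, the recipe might look like the sketch below; the loop body is taken from the (truncated) message, while the config snippet and the lack of a termination condition are assumptions:

    # ceph.conf on the filestore OSDs before restarting them
    [osd]
    filestore merge threshold = 10

    # churn creates/deletes so filestore re-evaluates its directory layout
    # (rados bench write cleans up its objects by default)
    while true ; do
        rados bench -p default.rgw.buckets.index 10 write -b 4096 -t 128
    done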

Re: [ceph-users] NFS-Ganesha Mounts as a Read-Only Filesystem

2019-04-09 Thread Paul Emmerich
Looks like you are trying to write to the pseudo-root; mount /cephfs instead of /. Paul -- Paul Emmerich Looking for help with your Ceph cluster? Contact us at https://croit.io croit GmbH Freseniusstr. 31h 81247 München www.croit.io Tel: +49 89 1896585 90 On Sat, Apr 6, 2019 at 1:07 PM
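As an illustration, the difference is just the export path given to mount; the hostname and mount point below are placeholders:

    # writes fail: this mounts the read-only pseudo-root
    mount -t nfs -o nfsvers=4.1 ganesha.example.com:/ /mnt/ceph
    # mount the CephFS export instead
    mount -t nfs -o nfsvers=4.1 ganesha.example.com:/cephfs /mnt/ceph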

Re: [ceph-users] Inconsistent PGs caused by omap_digest mismatch

2019-04-09 Thread Bryan Stillwell
> On Apr 8, 2019, at 5:42 PM, Bryan Stillwell wrote: > > >> On Apr 8, 2019, at 4:38 PM, Gregory Farnum wrote: >> >> On Mon, Apr 8, 2019 at 3:19 PM Bryan Stillwell >> wrote: >>> >>> There doesn't appear to be any correlation between the OSDs which would >>> point to a hardware issue, and
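For context, the standard commands for inspecting and (once the bad copy is understood) repairing an omap_digest inconsistency look like this; the PG id is a placeholder and this is not necessarily what the thread settled on:

    # show which OSD shards disagree and on what (omap_digest, data_digest, ...)
    rados list-inconsistent-obj <pgid> --format=json-pretty
    # have the primary repair the PG
    ceph pg repair <pgid>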

[ceph-users] showing active config settings

2019-04-09 Thread solarflow99
I noticed that when changing some settings, they appear to stay the same, for example when trying to set this higher: ceph tell 'osd.*' injectargs '--osd-recovery-max-active 4' It gives the usual warning that a restart may be needed, but it still shows the old value: # ceph --show-config | grep
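A hedged illustration of the usual explanation for this: "ceph --show-config" reports what the CLI client computes from defaults and ceph.conf, not the value injected into a running daemon, which can instead be read over the OSD's admin socket (OSD id is a placeholder; run it on that OSD's host):

    # default/ceph.conf value as seen by the CLI client
    ceph --show-config | grep osd_recovery_max_active
    # value currently in effect inside the running daemon
    ceph daemon osd.0 config get osd_recovery_max_active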

Re: [ceph-users] problems with pg down

2019-04-09 Thread ceph
Hi Fabio, Did you resolve the issue? A bit late, I know, but did you try restarting OSD 14? If 102 and 121 are fine I would also try to crush reweight 14 to 0. Greetings Mehmet On 10 March 2019 at 19:26:57 CET, Fabio Abreu wrote: >Hi Darius, > >Thanks for your reply! > >This happening
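The reweight suggestion as a concrete command (osd.14 is the OSD from the thread; the syntax is the standard CRUSH reweight):

    # take osd.14 out of data placement by zeroing its CRUSH weight
    ceph osd crush reweight osd.14 0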

Re: [ceph-users] Try to log the IP in the header X-Forwarded-For with radosgw behind haproxy

2019-04-09 Thread Francois Lafont
On 4/9/19 12:43 PM, Francois Lafont wrote: 2. In my Docker container context, is it possible to put the logs above in the file "/var/log/syslog" of my host? In other words, is it possible to make sure this is logged to the stdout of the "radosgw" daemon? In brief, is it possible to log "operations" in

Re: [ceph-users] osd_memory_target exceeding on Luminous OSD BlueStore

2019-04-09 Thread Olivier Bonvalet
Good point, thanks! By applying memory pressure (by playing with vm.min_free_kbytes), memory is freed by the kernel. So I think I essentially need to update the monitoring rules to avoid false positives. Thanks, I will continue reading your resources. On Tuesday, 9 April 2019 at 09:30 -0500, Mark Nelson wrote
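For illustration only, applying that kind of pressure via sysctl might look like this; the value is an arbitrary example, not a recommendation from the thread:

    # temporarily raise the kernel's free-memory watermark (value is an example)
    sysctl -w vm.min_free_kbytes=1048576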

[ceph-users] how to trigger offline filestore merge

2019-04-09 Thread Dan van der Ster
Hi all, We have a slight issue while trying to migrate a pool from filestore to bluestore. This pool used to have 20 million objects in filestore -- it now has 50,000. During its life, the filestore pgs were internally split several times, but never merged. Now the pg _head dirs have mostly
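To see the split state being described, one can count the directory tree under a PG's _head dir on a filestore OSD; the OSD path and PG id below are placeholders:

    # many nested, mostly empty subdirectories indicate past splits that were never merged
    find /var/lib/ceph/osd/ceph-0/current/<pgid>_head -type d | wc -l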

Re: [ceph-users] Remove RBD mirror?

2019-04-09 Thread Jason Dillaman
Can you pastebin the results from running the following on your backup site rbd-mirror daemon node? ceph --admin-socket /path/to/asok config set debug_rbd_mirror 15 ceph --admin-socket /path/to/asok rbd mirror restart nova wait a minute to let some logs accumulate ... ceph --admin-socket
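Laid out one command per line, and using the --admin-daemon spelling of the flag, the sequence is roughly the following; the final status command is truncated in the archive and is taken from a later reply in the thread:

    ceph --admin-daemon /path/to/asok config set debug_rbd_mirror 15
    ceph --admin-daemon /path/to/asok rbd mirror restart nova
    # wait a minute to let some logs accumulate, then:
    ceph --admin-daemon /path/to/asok rbd mirror status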

Re: [ceph-users] Remove RBD mirror?

2019-04-09 Thread Magnus Grönlund
Den tis 9 apr. 2019 kl 17:48 skrev Jason Dillaman : > Any chance your rbd-mirror daemon has the admin sockets available > (defaults to /var/run/ceph/cephdr-clientasok)? If > so, you can run "ceph --admin-daemon /path/to/asok rbd mirror status". > { "pool_replayers": [ {

Re: [ceph-users] Remove RBD mirror?

2019-04-09 Thread Jason Dillaman
Any chance your rbd-mirror daemon has the admin sockets available (defaults to /var/run/ceph/cephdr-clientasok)? If so, you can run "ceph --admin-daemon /path/to/asok rbd mirror status". On Tue, Apr 9, 2019 at 11:26 AM Magnus Grönlund wrote: > > > > Den tis 9 apr. 2019 kl 17:14 skrev Jason

Re: [ceph-users] Remove RBD mirror?

2019-04-09 Thread Magnus Grönlund
Den tis 9 apr. 2019 kl 17:14 skrev Jason Dillaman : > On Tue, Apr 9, 2019 at 11:08 AM Magnus Grönlund > wrote: > > > > >On Tue, Apr 9, 2019 at 10:40 AM Magnus Grönlund > wrote: > > >> > > >> Hi, > > >> We have configured one-way replication of pools between a production > cluster and a backup

Re: [ceph-users] Remove RBD mirror?

2019-04-09 Thread Jason Dillaman
On Tue, Apr 9, 2019 at 11:08 AM Magnus Grönlund wrote: > > >On Tue, Apr 9, 2019 at 10:40 AM Magnus Grönlund wrote: > >> > >> Hi, > >> We have configured one-way replication of pools between a production > >> cluster and a backup cluster. But unfortunately the rbd-mirror or the > >> backup

Re: [ceph-users] Remove RBD mirror?

2019-04-09 Thread Magnus Grönlund
>On Tue, Apr 9, 2019 at 10:40 AM Magnus Grönlund wrote: >> >> Hi, >> We have configured one-way replication of pools between a production cluster and a backup cluster. But unfortunately the rbd-mirror or the backup cluster is unable to keep up with the production cluster so the replication fails

Re: [ceph-users] How to tune Ceph RBD mirroring parameters to speed up replication

2019-04-09 Thread Jason Dillaman
On Thu, Apr 4, 2019 at 6:27 AM huxia...@horebdata.cn wrote: > > Thanks a lot, Jason. > > How much performance loss should I expect by enabling rbd mirroring? I really > need to minimize any performance impact while using this disaster recovery > feature. Will a dedicated journal on Intel Optane
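One way to place image journals on a faster (e.g. Optane-backed) pool is the rbd_journal_pool client option; the pool name below is an assumption, and this is a sketch rather than the (truncated) answer given in the thread:

    [client]
    # store journal objects for journaled images in a dedicated fast pool
    rbd journal pool = optane-journal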

Re: [ceph-users] Remove RBD mirror?

2019-04-09 Thread Jason Dillaman
On Tue, Apr 9, 2019 at 10:40 AM Magnus Grönlund wrote: > > Hi, > We have configured one-way replication of pools between a production cluster > and a backup cluster. But unfortunately the rbd-mirror or the backup cluster > is unable to keep up with the production cluster so the replication

[ceph-users] Remove RBD mirror?

2019-04-09 Thread Magnus Grönlund
Hi, We have configured one-way replication of pools between a production cluster and a backup cluster. But unfortunately the rbd-mirror or the backup cluster is unable to keep up with the production cluster so the replication fails to reach replaying state. And the journals on the rbd volumes keep
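For reference, checking replication state from the backup site, and the generic way to tear mirroring down if it really is to be removed, look like this; the pool name and peer UUID are placeholders and this is not necessarily the resolution reached in the thread:

    # per-image replication state as seen by the backup cluster
    rbd mirror pool status <pool> --verbose
    # generic teardown: drop the peer, then disable mirroring on the pool
    rbd mirror pool peer remove <pool> <peer-uuid>
    rbd mirror pool disable <pool>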

Re: [ceph-users] BADAUTHORIZER in Nautilus

2019-04-09 Thread Shawn Edwards
Update: I think we have a work-around, but no root cause yet. What is working is removing the 'v2' bits from the ceph.conf file across the cluster, and turning off all cephx authentication. Now everything seems to be talking correctly other than some odd metrics around the edges. Here's my
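A sketch of what such a workaround might look like in ceph.conf; the monitor addresses are placeholders, and disabling cephx obviously removes authentication cluster-wide:

    [global]
    # v1-only monitor addresses, no msgr2 (v2) entries
    mon host = 192.168.0.1:6789, 192.168.0.2:6789, 192.168.0.3:6789
    # turn off cephx entirely
    auth cluster required = none
    auth service required = none
    auth client required = none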

Re: [ceph-users] osd_memory_target exceeding on Luminous OSD BlueStore

2019-04-09 Thread Mark Nelson
My understanding is that basically the kernel is either unable or uninterested (maybe due to lack of memory pressure?) in reclaiming the memory. It's possible you might have better behavior if you set /sys/kernel/mm/khugepaged/max_ptes_none to a low value (maybe 0) or maybe disable
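The suggested knob as a one-liner, with 0 being the low value mentioned:

    # forbid khugepaged from collapsing ranges that still contain unused (none) PTEs
    echo 0 > /sys/kernel/mm/khugepaged/max_ptes_none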

Re: [ceph-users] osd_memory_target exceeding on Luminous OSD BlueStore

2019-04-09 Thread Olivier Bonvalet
Well, Dan seems to be right: _tune_cache_size target: 4294967296 heap: 6514409472 unmapped: 2267537408 mapped: 4246872064 old cache_size: 2845396873 new cache size: 2845397085 So we have 6 GB in the heap, but "only" 4 GB mapped. But "ceph tell osd.* heap release"
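For completeness, the tcmalloc commands referenced here are the standard heap commands exposed via ceph tell:

    # per-OSD tcmalloc statistics (mapped vs unmapped heap)
    ceph tell osd.* heap stats
    # ask tcmalloc to return free pages to the OS
    ceph tell osd.* heap release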

Re: [ceph-users] osd_memory_target exceeding on Luminous OSD BlueStore

2019-04-09 Thread Olivier Bonvalet
Thanks for the advice. We are using Debian 9 (stretch) with a custom Linux kernel 4.14, but "heap release" didn't help. On Monday, 8 April 2019 at 12:18 +0200, Dan van der Ster wrote: > Which OS are you using? > With CentOS we find that the heap is not always automatically > released. (You

Re: [ceph-users] bluefs-bdev-expand experience

2019-04-09 Thread Yury Shevchuk
Igor, thank you, Round 2 is explained now. The main (aka block, aka slow) device cannot be expanded in Luminous; this functionality will be available after an upgrade to Nautilus. WAL and DB devices can be expanded in Luminous. Now I have recreated osd2 once again to get rid of the paradoxical ceph osd df
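For reference, the WAL/DB expansion discussed in this thread is done offline with ceph-bluestore-tool; the OSD id and path below are placeholders:

    systemctl stop ceph-osd@2
    ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-2
    systemctl start ceph-osd@2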

Re: [ceph-users] Try to log the IP in the header X-Forwarded-For with radosgw behind haproxy

2019-04-09 Thread Francois Lafont
Hi, On 4/9/19 5:02 AM, Pavan Rallabhandi wrote: Refer to "rgw log http headers" under http://docs.ceph.com/docs/nautilus/radosgw/config-ref/ Or, even better, in the code: https://github.com/ceph/ceph/pull/7639 OK, thanks for your help Pavan. I have made progress, but I already have some problems.
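A minimal sketch of the setting being referenced, assuming haproxy already injects the header (option forwardfor on the haproxy side); the rgw section name is a placeholder:

    [client.rgw.myhost]
    # include the client IP forwarded by haproxy in the rgw ops log entries
    rgw log http headers = http_x_forwarded_for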