[ceph-users] Luminous 12.2.10 updated to 12.2.11

2019-02-05 Thread Marc Roos
Has some protocol or something changed? I am resizing an rbd device on a luminous 12.2.10 cluster and a 12.2.11 client does not respond (all CentOS 7). 2019-02-05 09:46:27.336885 7f9227fff700 -1 librbd::Operations: update notification timed-out
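
For reference, the notification in question is the header-object notify that librbd sends to all watchers on a resize; a minimal way to reproduce it and to see which clients hold a watch (pool and image names here are illustrative):

    # Trigger the update notification from the 12.2.10 side
    rbd resize --size 20G rbd/vm-disk
    # List watchers on the image; an unresponsive 12.2.11 watcher
    # would be the one failing to ack the notify
    rbd status rbd/vm-disk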

Re: [ceph-users] RGW: Reshard index of non-master zones in multi-site

2019-02-05 Thread Iain Buclaw
On Tue, 5 Feb 2019 at 09:46, Iain Buclaw wrote: > > Hi, > > Following the update of one secondary site from 12.2.8 to 12.2.11, the > following warning has come up. > > HEALTH_WARN 1 large omap objects > LARGE_OMAP_OBJECTS 1 large omap objects > 1 large objects found in pool

[ceph-users] RGW: Reshard index of non-master zones in multi-site

2019-02-05 Thread Iain Buclaw
Hi, Following the update of one secondary site from 12.2.8 to 12.2.11, the following warning has come up. HEALTH_WARN 1 large omap objects LARGE_OMAP_OBJECTS 1 large omap objects 1 large objects found in pool '.rgw.buckets.index' Search the cluster log for 'Large omap object found' for
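
A sketch of how to track the warning down to a specific index object (the log path and commands assume a default install):

    # Which pool/object tripped the threshold
    ceph health detail
    grep 'Large omap object found' /var/log/ceph/ceph.log
    # Per-bucket index shard fill levels, to see which bucket needs resharding
    radosgw-admin bucket limit check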

Re: [ceph-users] CephFS MDS journal

2019-02-05 Thread Mahmoud Ismail
On Mon, Feb 4, 2019 at 10:10 PM Gregory Farnum wrote: > > On Mon, Feb 4, 2019 at 8:03 AM Mahmoud Ismail < > mahmoudahmedism...@gmail.com> wrote: > >> On Mon, Feb 4, 2019 at 4:35 PM Gregory Farnum wrote: >> >>> >>> >>> On Mon, Feb 4, 2019 at 7:32 AM Mahmoud Ismail < >>>

Re: [ceph-users] Luminous 12.2.10 updated to 12.2.11

2019-02-05 Thread Dan van der Ster
No idea, but maybe this commit which landed in v12.2.11 is relevant:

commit 187bc76957dcd8a46a839707dea3c26b3285bd8f
Author: runsisi
Date: Mon Nov 12 20:01:32 2018 +0800

    librbd: fix missing unblock_writes if shrink is not allowed
    Fixes: http://tracker.ceph.com/issues/36778

Re: [ceph-users] ceph osd commit latency increase over time, until restart

2019-02-05 Thread Igor Fedotov
On 2/4/2019 6:40 PM, Alexandre DERUMIER wrote: but I don't see l_bluestore_fragmentation counter. (but I have bluestore_fragmentation_micros) ok, this is the same b.add_u64(l_bluestore_fragmentation, "bluestore_fragmentation_micros", "How fragmented bluestore free space is
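
If the counter exists in your build, it can be read from a running OSD's admin socket; a minimal check (osd.0 is illustrative):

    # Dump perf counters and look for the fragmentation metric
    ceph daemon osd.0 perf dump | grep -i fragmentation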

[ceph-users] Object Gateway Cloud Sync to S3

2019-02-05 Thread Ryan
I've been trying to configure the cloud sync module to push changes to an Amazon S3 bucket without success. I've configured the module according to the docs with the trivial configuration settings. Is there an error log I should be checking? Is the "radosgw-admin sync status
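
For comparison, a hedged sketch of the tier configuration the cloud sync docs describe; every zone name, endpoint, and credential below is illustrative:

    # Create (or designate) the tier zone
    radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=cloud-zone \
        --endpoints=http://localhost:8002 --tier-type=cloud
    radosgw-admin zone modify --rgw-zone=cloud-zone \
        --tier-config=connection.endpoint=https://s3.amazonaws.com,connection.access_key=ACCESS,connection.secret=SECRET
    radosgw-admin period update --commit
    # Sync failures, if any, should show up here and in the radosgw log
    radosgw-admin sync error list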

[ceph-users] Multicast communication compuverde

2019-02-05 Thread Marc Roos
I am still testing with ceph mostly, so my apologies for bringing up something totally useless. But I just had a chat about Compuverde storage. They seem to implement multicast in a scale-out solution. I was wondering if there is any experience here with Compuverde and how it compared to

[ceph-users] upgrading

2019-02-05 Thread solarflow99
Does ceph-ansible support upgrading a cluster to the latest minor versions (e.g. Mimic 13.2.2 to 13.2.4)?

[ceph-users] Need help with upmap feature on luminous

2019-02-05 Thread Kári Bertilsson
Hello. I previously enabled upmap and used automatic balancing with "ceph balancer on". I got very good results and OSDs ended up with perfectly distributed PGs. Now, after adding several new OSDs, auto balancing does not seem to be working anymore. OSDs have 30-50% usage where previously all
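
Commands that show what the balancer thinks of the current state (the plan name is illustrative):

    ceph balancer status
    ceph balancer eval                 # lower score means better balance
    ceph balancer optimize myplan      # compute a plan without executing it
    ceph balancer show myplan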

Re: [ceph-users] Object Gateway Cloud Sync to S3

2019-02-05 Thread Ryan
On Tue, Feb 5, 2019 at 3:35 PM Ryan wrote: > I've been trying to configure the cloud sync module to push changes to an > Amazon S3 bucket without success. I've configured the module according to > the docs with the trivial configuration settings. Is there an error log I > should be checking? Is

Re: [ceph-users] Need help with upmap feature on luminous

2019-02-05 Thread Konstantin Shalygin
I added `debug mgr = 4/5` to the [global] section in ceph.conf on the active mgr and restarted the mgr service. Is this correct? This is correct. k

Re: [ceph-users] upgrading

2019-02-05 Thread Konstantin Shalygin
Does ceph-ansible support upgrading a cluster to the latest minor versions (e.g. Mimic 13.2.2 to 13.2.4)? All you need for a minor upgrade is `yum upgrade`, then `systemctl restart ceph-mon.target ceph-mgr.target` on the mon hosts, then `systemctl restart ceph-osd.target` on the osd hosts. That's all.
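
Spelled out as a per-host sequence, assuming a yum-based install and waiting for health to settle between hosts:

    # On each mon/mgr host, one at a time:
    yum upgrade
    systemctl restart ceph-mon.target ceph-mgr.target
    # Then on each OSD host, one at a time:
    yum upgrade
    systemctl restart ceph-osd.target
    # Confirm HEALTH_OK before moving to the next host:
    ceph -s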

Re: [ceph-users] Need help with upmap feature on luminous

2019-02-05 Thread Kári Bertilsson
I am testing by manually running `ceph osd pg-upmap-items 41.1 106 125`. Nothing shows up in the logs on either OSD 106 or 125, and nothing happens.
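
One way to confirm whether the mapping was accepted: upmap entries are recorded in the osdmap rather than in the OSD logs, so checking there (and the PG's mapping) may be more telling:

    ceph osd dump | grep upmap    # look for a pg_upmap_items entry for 41.1
    ceph pg map 41.1              # does the up set now show 125 instead of 106?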

Re: [ceph-users] Need help with upmap feature on luminous

2019-02-05 Thread Konstantin Shalygin
I previously enabled upmap and used automatic balancing with "ceph balancer on". I got very good results and OSDs ended up with perfectly distributed PGs. Now, after adding several new OSDs, auto balancing does not seem to be working anymore. OSDs have 30-50% usage where previously all had

Re: [ceph-users] Need help with upmap feature on luminous

2019-02-05 Thread Kári Bertilsson
ceph version 12.2.8-pve1 on Proxmox. ceph osd df tree @ https://pastebin.com/e68fJ5fM I added `debug mgr = 4/5` to the [global] section in ceph.conf on the active mgr and restarted the mgr service. Is this correct? I noticed some config settings in the mgr logs. Changed config to use
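
If a restart is undesirable, the same verbosity can also be raised at runtime through the mgr admin socket (the daemon id below is illustrative):

    ceph daemon mgr.$(hostname -s) config set debug_mgr 4/5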