Re: [ceph-users] OMAP warning ( again )

2018-09-01 Thread Brent Kennedy
I found this discussion between Wido and Florian ( two really good ceph folks ), but it doesn't seem to go very deep into sharding ( something I would like to know more about ). https://www.spinics.net/lists/ceph-users/msg24420.html None of my clusters are using multi-site sync ( was thinking
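For anyone else wanting to dig into sharding on their own cluster, a minimal sketch of how you might inspect bucket index shard usage (assuming a Luminous-era radosgw; the bucket name is a placeholder):

  # show per-bucket object counts versus the per-shard warning threshold
  radosgw-admin bucket limit check
  # show the id/marker and shard details for one bucket
  radosgw-admin bucket stats --bucket=mybucket

"bucket limit check" is usually the quickest way to see which buckets are near or past the recommended objects-per-shard limit.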

Re: [ceph-users] cephfs speed

2018-09-01 Thread Joe Comeau
Yes, I was referring to Windows Explorer copies, as that is what users typically use, but also to Windows robocopy set to 32 threads. The difference is we may go from a peak of 300MB/s to a more normal 100MB/s, down to a stall at 0 to 30MB/s; about every 7-8 seconds it stalls to 0 MB/s being
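For context, a multithreaded robocopy run like the one described might look like this (paths are placeholders; /MT:32 sets 32 copy threads, /E copies subdirectories):

  robocopy D:\data \\cephfs-gateway\share /E /MT:32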

Re: [ceph-users] OMAP warning ( again )

2018-09-01 Thread Matt Benjamin
It is presently the case that when dynamic resharding completes, the retired bucket index shards apparently need to be deleted manually. We plan to change this, but it's worth checking for such objects. Alternatively, though, look for other large omap "objects", e.g., sync-error.log, if you are
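If it helps, a rough sketch of how you might hunt for the large omap objects Matt mentions (assuming default RGW pool names, which may differ in your zone; the object name is a placeholder):

  # the OSD/cluster log usually records which object tripped the warning
  grep -i 'large omap object' /var/log/ceph/ceph.log
  # look for sync-error.log style objects in the log pool
  rados -p default.rgw.log ls | grep -i error
  # count omap keys on a suspect index object
  rados -p default.rgw.buckets.index listomapkeys <object-name> | wc -l

The object named in the warning line is normally the place to start.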

Re: [ceph-users] OMAP warning ( again )

2018-09-01 Thread Brent Kennedy
I didn’t want to attempt anything until I had more information. I have been tied up with secondary stuff, so we are just monitoring for now. The only thing I could find was a setting to make the warning go away, but that doesn’t seem like a good idea as it was identified as an issue that
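For the record, the setting likely being referred to is the large-omap warning threshold; a hedged sketch of how it can be raised (option name from Luminous, value is only an example, and whether injectargs takes effect live may vary by release):

  # raise the per-object omap key count that triggers the warning
  ceph tell osd.* injectargs '--osd_deep_scrub_large_omap_object_key_threshold=500000'
  # or set it permanently in ceph.conf under [osd] and restart the OSDs

As noted, silencing the warning doesn't address the underlying oversized index object.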

[ceph-users] Slow requests from bluestore osds

2018-09-01 Thread Brett Chancellor
Hi Cephers, I am in the process of upgrading a cluster from Filestore to bluestore, but I'm concerned about frequent warnings popping up against the new bluestore devices. I'm frequently seeing messages like this; although the specific osd changes, it's always one of the few hosts I've converted
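A rough sketch of how one might dig into which ops are slow on an affected bluestore OSD (run on the host that owns the OSD; the osd id is a placeholder):

  ceph health detail | grep -i slow
  # in-flight and recently completed ops, with per-stage timings
  ceph daemon osd.12 dump_ops_in_flight
  ceph daemon osd.12 dump_historic_ops

dump_historic_ops in particular shows where each slow request spent its time, which helps distinguish device latency from peering or throttling.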

Re: [ceph-users] Adding node efficient data move.

2018-09-01 Thread David Turner
I've heard you can do that with the manager service for balancing your cluster. You can set the maximum number of misplaced objects you want, and the service will add in the new node until it's balanced, without moving more data than your settings specify at any time. On Sat, Sep 1, 2018, 6:35 AM
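For reference, a minimal sketch of turning on the mgr balancer described above (commands per the Luminous docs; the max_misplaced config-key name has changed in later releases, so verify for your version):

  ceph mgr module enable balancer
  ceph balancer mode upmap          # or crush-compat for older clients
  # cap how much of the cluster may be misplaced at any one time (5% here)
  ceph config-key set mgr/balancer/max_misplaced .05
  ceph balancer on

With the cap in place the balancer works through the new node's data in bounded steps rather than all at once.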

Re: [ceph-users] MDS does not always failover to hot standby on reboot

2018-09-01 Thread Bryan Henderson
> If the active MDS is connected to a monitor and they fail at the same time, the monitors can't replace the mds until they've been through their own election and a full mds timeout window. So how long are we talking? -- Bryan Henderson San Jose, California
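Not an authoritative answer, but the relevant timeouts can be read off your own cluster; a sketch assuming access to a monitor's admin socket (mon.a is a placeholder):

  ceph daemon mon.a config get mds_beacon_grace     # default 15s: how long before an MDS is declared laggy
  ceph daemon mon.a config get mon_lease            # monitor lease, part of the election window

So, very roughly, a combined failure costs a monitor election plus the mds_beacon_grace window before the standby is promoted.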

[ceph-users] Adding node efficient data move.

2018-09-01 Thread Marc Roos
When adding a node and incrementing the crush weight like this, do I get the most efficient data transfer to the 4th node?
  sudo -u ceph ceph osd crush reweight osd.23 1
  sudo -u ceph ceph osd crush reweight osd.24 1
  sudo -u ceph ceph osd crush reweight osd.25 1
  sudo -u ceph ceph osd crush
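One common approach to keep the data from moving more than once (a sketch, not necessarily what Marc is doing; weights and osd ids are placeholders) is to add the OSDs at crush weight 0, set norebalance, bring them straight to their final weights, then let recovery run:

  ceph osd set norebalance
  sudo -u ceph ceph osd crush reweight osd.23 1.0
  sudo -u ceph ceph osd crush reweight osd.24 1.0
  sudo -u ceph ceph osd crush reweight osd.25 1.0
  ceph osd unset norebalance

Stepping the weight up in small increments instead spreads the movement over time, but the total data shifted can end up larger because some PGs move to intermediate locations along the way.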