Re: [ceph-users] Questions regarding hardware design of an SSD only cluster

2018-04-24 Thread Christian Balzer
Hello, On Tue, 24 Apr 2018 11:39:33 +0200 Florian Florensa wrote: > 2018-04-24 3:24 GMT+02:00 Christian Balzer : > > Hello, > > > > Hi Christian, and thanks for your detailed answer. > > > On Mon, 23 Apr 2018 17:43:03 +0200 Florian Florensa wrote: > > > >> Hello everyone, >

Re: [ceph-users] cephfs luminous 12.2.4 - multi-active MDSes with manual pinning

2018-04-24 Thread Linh Vu
Thanks Patrick! Good to know that it's nothing and will be fixed soon :) From: Patrick Donnelly Sent: Wednesday, 25 April 2018 5:17:57 AM To: Linh Vu Cc: ceph-users Subject: Re: [ceph-users] cephfs luminous 12.2.4 - multi-active MDSes with

Re: [ceph-users] Have an inconsistent PG, repair not working

2018-04-24 Thread David Turner
Neither the issue I created nor Michael's [1] ticket that it was rolled into is getting any traction. How are y'all faring with your clusters? I've had 3 PGs inconsistent with 5 scrub errors for a few weeks now. I assumed that the third PG was just like the first 2 in that it couldn't be
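
For reference, the usual inspect-and-repair sequence on a recent release looks roughly like the sketch below (the PG id 2.1ab is a placeholder; as this thread shows, repair does not always clear these particular errors):

    # show exactly what the scrub flagged for the PG
    rados list-inconsistent-obj 2.1ab --format=json-pretty
    # ask the primary OSD to repair the PG
    ceph pg repair 2.1ab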

[ceph-users] v12.2.5 Luminous released

2018-04-24 Thread Abhishek
Hello cephers, We're glad to announce the fifth bugfix release of the Luminous v12.2.x long term stable release series. This release contains a range of bug fixes across all components of Ceph. We recommend that all users of the 12.2.x series update. Notable Changes --- * MGR The
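
A quick way to confirm that every daemon has actually been restarted onto the new release after upgrading (available on Luminous and later):

    ceph versions        # per-component summary of the versions currently running
    ceph health detail   # double-check nothing is degraded before moving on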

Re: [ceph-users] cephfs luminous 12.2.4 - multi-active MDSes with manual pinning

2018-04-24 Thread Patrick Donnelly
Hello Linh, On Tue, Apr 24, 2018 at 12:34 AM, Linh Vu wrote: > However, on our production cluster, with more powerful MDSes (10 cores > 3.4GHz, 256GB RAM, much faster networking), I get this in the logs > constantly: > > 2018-04-24 16:29:21.998261 7f02d1af9700 0

[ceph-users] Poor read performance.

2018-04-24 Thread Jonathan Proulx
Hi All, I seem to be seeing consistently poor read performance on my cluster relative to both write performance and the read performance of a single backend disk, by quite a lot. Cluster is luminous with 174 7.2k SAS drives across 12 storage servers with 10G ethernet and jumbo frames. Drives are mix
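
One way to separate cluster-side read throughput from the client workload is rados bench against a throwaway pool (the pool name "bench" is a placeholder; the write pass must be run first with --no-cleanup so there is something to read back):

    rados bench -p bench 60 write --no-cleanup   # populate benchmark objects
    rados bench -p bench 60 seq                  # sequential read test
    rados -p bench cleanup                       # remove the benchmark objects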

Re: [ceph-users] Cephalocon APAC 2018 report, videos and slides

2018-04-24 Thread Ronny Aasen
On 24.04.2018 17:30, Leonardo Vaz wrote: Hi, Last night I posted the Cephalocon 2018 conference report on the Ceph blog[1], published the video recordings from the sessions on YouTube[2] and the slide decks on Slideshare[3]. [1] https://ceph.com/community/cephalocon-apac-2018-report/ [2]

Re: [ceph-users] configuration section for each host

2018-04-24 Thread Ronny Aasen
On 24.04.2018 18:24, Robert Stanford wrote:  In examples I see that each host has a section in ceph.conf, on every host (host-a has a section in its conf on host-a, but there's also a host-a section in the ceph.conf on host-b, etc.) Is this really necessary?  I've been using just generic osd
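
For what it's worth, a single generic [osd] section is usually enough when OSDs are brought up with the ceph-disk/ceph-volume defaults; per-host sections are only needed for host-specific overrides. A minimal sketch, values purely illustrative:

    [global]
        public network = 192.168.0.0/24
    [osd]
        osd max backfills = 1
        osd recovery max active = 1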

Re: [ceph-users] Cephalocon APAC 2018 report, videos and slides

2018-04-24 Thread kefu chai
On Tue, Apr 24, 2018 at 11:30 PM, Leonardo Vaz wrote: > Hi, > > Last night I posted the Cephalocon 2018 conference report on the Ceph > blog[1], published the video recordings from the sessions on > YouTube[2] and the slide decks on Slideshare[3]. > > [1]

Re: [ceph-users] RGW GC Processing Stuck

2018-04-24 Thread Sean Redmond
Hi, sure no problem, I posted it here http://tracker.ceph.com/issues/23839 On Tue, 24 Apr 2018, 16:04 Matt Benjamin, wrote: > Hi Sean, > > Could you create an issue in tracker.ceph.com with this info? That > would make it easier to iterate on. > > thanks and regards, > >

[ceph-users] Cephalocon APAC 2018 report, videos and slides

2018-04-24 Thread Leonardo Vaz
Hi, Last night I posted the Cephalocon 2018 conference report on the Ceph blog[1], published the video recordings from the sessions on YouTube[2] and the slide decks on Slideshare[3]. [1] https://ceph.com/community/cephalocon-apac-2018-report/ [2]

[ceph-users] [rgw] user stats understanding

2018-04-24 Thread Rudenko Aleksandr
Hi, friends. We use RGW user stats in our billing. Example on Luminous: radosgw-admin usage show --uid 5300c830-82e2-4dce-ac6d-1d97a65def33 { "entries": [ { "user": "5300c830-82e2-4dce-ac6d-1d97a65def33", "buckets": [ {
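
Related to this, radosgw-admin also keeps per-user totals separate from the usage log; with --sync-stats they are recalculated from the bucket indexes, which can help when billing numbers look stale (uid taken from the example above):

    radosgw-admin user stats --uid=5300c830-82e2-4dce-ac6d-1d97a65def33 --sync-stats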

Re: [ceph-users] Dying OSDs

2018-04-24 Thread Jan Marquardt
Hi, it's been a while, but we are still fighting with this issue. As suggested we deleted all snapshots, but the errors still occur. We were able to gather some more information: The reason why they are crashing is this assert:

Re: [ceph-users] RGW GC Processing Stuck

2018-04-24 Thread Matt Benjamin
Hi Sean, Could you create an issue in tracker.ceph.com with this info? That would make it easier to iterate on. thanks and regards, Matt On Tue, Apr 24, 2018 at 10:45 AM, Sean Redmond wrote: > Hi, > We are currently using Jewel 10.2.7 and recently, we have been

Re: [ceph-users] Questions regarding hardware design of an SSD only cluster

2018-04-24 Thread Wido den Hollander
On 04/24/2018 05:01 AM, Mohamad Gebai wrote: > > > On 04/23/2018 09:24 PM, Christian Balzer wrote: >> >>> If anyone has some ideas/thoughts/pointers, I would be glad to hear them. >>> >> RAM, you'll need a lot of it, even more with Bluestore given the current >> caching. >> I'd say 1GB per TB
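
For context on the caching point: in Luminous the BlueStore cache per OSD is bounded by bluestore_cache_size (with HDD/SSD variants), so RAM-per-TB sizing has to account for it. A sketch for an all-SSD node, value illustrative (3 GiB is, as far as I know, the 12.2.x SSD default):

    [osd]
        # ~3 GiB of BlueStore cache per SSD OSD
        bluestore cache size ssd = 3221225472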

[ceph-users] RGW GC Processing Stuck

2018-04-24 Thread Sean Redmond
Hi, We are currently using Jewel 10.2.7 and recently we have been experiencing some issues with objects being deleted by the gc. After a bucket was unsuccessfully deleted using --purge-objects (the first error discussed below occurred), all of the RGWs are occasionally becoming unresponsive and
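
For anyone hitting the same thing, the GC queue can at least be inspected and drained by hand on Jewel:

    radosgw-admin gc list --include-all | head   # pending GC entries, including those not yet due
    radosgw-admin gc process                     # run a garbage-collection pass manually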

Re: [ceph-users] Questions regarding hardware design of an SSD only cluster

2018-04-24 Thread Florian Florensa
2018-04-24 3:24 GMT+02:00 Christian Balzer : > Hello, > Hi Christian, and thanks for your detailed answer. > On Mon, 23 Apr 2018 17:43:03 +0200 Florian Florensa wrote: > >> Hello everyone, >> >> I am in the process of designing a Ceph cluster, that will contain >> only SSD OSDs,

Re: [ceph-users] Ceph 12.2.4 MGR spams syslog with "mon failed to return metadata for mds"

2018-04-24 Thread John Spray
On Fri, Apr 20, 2018 at 11:29 AM, Charles Alva wrote: > Marc, > > Thanks. > > The mgr log spam occurs even without dashboard module enabled. I never > checked the ceph mgr log before because the ceph cluster is always healthy. > Based on the ceph mgr logs in syslog, the
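
The logged message refers to the mgr asking the mons for daemon metadata; the same query can be run by hand to see whether the mon actually has it (the daemon name below is a placeholder):

    ceph mds metadata          # metadata for every known MDS daemon
    ceph mds metadata mds01    # or a single daemon by name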

Re: [ceph-users] cephfs luminous 12.2.4 - multi-active MDSes with manual pinning

2018-04-24 Thread Linh Vu
Hi Dan, Thanks! Ah so the "nicely exporting" thing is just a distraction, that's good to know. I did bump mds log max segments and max expiring to 240 after reading the previous discussion. It seemed to help when there was just 1 active MDS. It doesn't really do much at the moment, although
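
For reference, these are the two options discussed, in ceph.conf form with the value Linh mentions (on Luminous they can also be changed at runtime via ceph tell mds.* injectargs):

    [mds]
        mds log max segments = 240
        mds log max expiring = 240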

[ceph-users] Fixing Remapped PG's

2018-04-24 Thread Dilip Renkila
Hi all, We have a ceph kraken cluster. Last week, we lost an OSD server. Then we added one more OSD server with the same configuration. Then we let the cluster recover, but I don't think that happened. Most of the PGs are still stuck in a remapped and degraded state. When I restart all OSD daemons, it
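
A few read-only checks that usually narrow down why PGs stay remapped after swapping a node (these should all work on Kraken):

    ceph osd tree              # is the replacement host in the expected CRUSH location?
    ceph osd df tree           # weights and utilisation of the new OSDs
    ceph pg dump_stuck unclean # which PGs are stuck remapped/degraded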

Re: [ceph-users] cephfs luminous 12.2.4 - multi-active MDSes with manual pinning

2018-04-24 Thread Dan van der Ster
That "nicely exporting" thing is a logging issue that was apparently fixed in https://github.com/ceph/ceph/pull/19220. I'm not sure if that will be backported to luminous. Otherwise the slow requests could be due to either slow trimming (see previous discussions about mds log max expiring and mds

[ceph-users] cephfs luminous 12.2.4 - multi-active MDSes with manual pinning

2018-04-24 Thread Linh Vu
Hi all, I have a cluster running cephfs on Luminous 12.2.4, using 2 active MDSes + 1 standby. I have 3 shares: /projects, /home and /scratch, and I've decided to try manual pinning as described here: http://docs.ceph.com/docs/master/cephfs/multimds/ /projects is pinned to mds.0 (rank 0)
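
For anyone following along, the pinning itself is done with the ceph.dir.pin extended attribute from the multimds doc linked above. A sketch assuming the filesystem is mounted at /mnt/cephfs and that /home and /scratch go to rank 1 (only /projects -> rank 0 is stated in the thread):

    setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/projects   # pin to rank 0
    setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/home       # pin to rank 1 (illustrative)
    setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/scratch    # pin to rank 1 (illustrative)
    getfattr -n ceph.dir.pin /mnt/cephfs/projects        # verify the pin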

Re: [ceph-users] London Ceph day yesterday

2018-04-24 Thread Wido den Hollander
On 04/23/2018 12:09 PM, John Spray wrote: > On Fri, Apr 20, 2018 at 9:32 AM, Sean Purdy wrote: >> Just a quick note to say thanks for organising the London Ceph/OpenStack >> day. I got a lot out of it, and it was nice to see the community out in >> force. > > +1,