Re: [ceph-users] 12.2.4 Both Ceph MDS nodes crashed. Please help.

2018-05-04 Thread Yan, Zheng
On Wed, May 2, 2018 at 7:19 AM, Sean Sullivan wrote: > Forgot to reply to all: > > Sure thing! > > I couldn't install the ceph-mds-dbg packages without upgrading. I just > finished upgrading the cluster to 12.2.5. The issue still persists in 12.2.5 > > From here I'm not
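
Not from the thread itself, but a hedged sketch of how the debug symbols and extra MDS logging are typically put in place before reproducing a crash like this (the package name assumes a Debian/Ubuntu install; RPM-based systems ship a debuginfo package instead):

# Install the MDS debug symbols so the backtrace in the log is readable.
$ apt-get install ceph-mds-dbg
# Raise MDS and messenger logging ahead of the next crash.
$ ceph tell mds.\* injectargs '--debug_mds 20 --debug_ms 1'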

Re: [ceph-users] Place on separate hosts?

2018-05-04 Thread Gregory Farnum
And also make sure the OSD <-> host mapping is correct with "ceph osd tree". :) On Fri, May 4, 2018 at 1:44 AM Matthew Vernon wrote: > Hi, > > On 04/05/18 08:25, Tracy Reed wrote: > > On Fri, May 04, 2018 at 12:18:15AM PDT, Tracy Reed spake thusly: > >> >
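
As a concrete illustration of that check (a sketch; osd.12 is just an example id):

# Each OSD should appear under the host that actually holds it; a flat
# hierarchy or OSDs under the wrong host means host-level replica
# placement cannot work.
$ ceph osd tree
# Ask CRUSH where it believes a single OSD lives.
$ ceph osd find 12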

[ceph-users] issues on CT + EC pool

2018-05-04 Thread Luis Periquito
Hi, I have a big-ish cluster that, amongst other things, has a radosgw configured to have an EC data pool (k=12, m=4). The cluster is currently running Jewel (10.2.7). That pool spans 244 HDDs and has 2048 PGs. From the df detail: .rgw.buckets.ec 26 -N/A
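
Some rough arithmetic for that layout, plus a hedged sketch of how such a profile and pool are created (profile name is illustrative; option names vary slightly between releases):

# k=12, m=4 => 16 shards per PG, raw overhead (k+m)/k = 16/12 ~ 1.33x.
# 2048 PGs * 16 shards / 244 OSDs ~ 134 PG shards per OSD.
$ ceph osd erasure-code-profile set ec-12-4 k=12 m=4
$ ceph osd pool create .rgw.buckets.ec 2048 2048 erasure ec-12-4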

Re: [ceph-users] mgr dashboard differs from ceph status

2018-05-04 Thread Gregory Farnum
On Fri, May 4, 2018 at 1:59 AM John Spray wrote: > On Fri, May 4, 2018 at 7:21 AM, Tracy Reed wrote: > > My ceph status says: > > > > cluster: > > id: b2b00aae-f00d-41b4-a29b-58859aa41375 > > health: HEALTH_OK > > > > services: > >

Re: [ceph-users] OSD doesn't start after reboot

2018-05-04 Thread Akshita Parekh
Yes, correct, but the main issue is that the OSD configuration gets lost after every reboot. On Fri, May 4, 2018 at 6:11 PM, Alfredo Deza wrote: > On Fri, May 4, 2018 at 1:22 AM, Akshita Parekh > wrote: > > Steps followed during installing ceph- > > 1)
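
Not stated in the thread, but the first thing worth ruling out in that situation is whether the OSD units are enabled to come back at boot; a minimal sketch assuming a systemd-based host (the OSD id is illustrative):

# List the per-OSD units and check whether they are enabled.
$ systemctl list-units 'ceph-osd@*'
$ systemctl is-enabled ceph-osd@3
# Enable the unit and the umbrella target so the OSD is started after reboot.
$ systemctl enable ceph-osd@3
$ systemctl enable ceph.target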

Re: [ceph-users] 12.2.4 Both Ceph MDS nodes crashed. Please help.

2018-05-04 Thread Sean Sullivan
Most of this is over my head but the last line of the logs on both mds servers show something similar to: 0> 2018-05-01 15:37:46.871932 7fd10163b700 -1 *** Caught signal (Segmentation fault) ** in thread 7fd10163b700 thread_name:mds_rank_progr When I search for this in ceph user and devel

Re: [ceph-users] Luminous radosgw S3/Keystone integration issues

2018-05-04 Thread Matt Benjamin
Hi Dan, We agreed in upstream RGW to make this change. Do you intend to submit this as a PR? regards Matt On Fri, May 4, 2018 at 10:57 AM, Dan van der Ster wrote: > Hi Valery, > > Did you eventually find a workaround for this? I *think* we'd also > prefer rgw to fallback

Re: [ceph-users] Luminous radosgw S3/Keystone integration issues

2018-05-04 Thread Dan van der Ster
Hi Valery, Did you eventually find a workaround for this? I *think* we'd also prefer rgw to fallback to external plugins, rather than checking them before local. But I never understood the reasoning behind the change from jewel to luminous. I saw that there is work towards a cache for ldap [1]

Re: [ceph-users] OSD doesn't start after reboot

2018-05-04 Thread Alfredo Deza
On Fri, May 4, 2018 at 1:22 AM, Akshita Parekh wrote: > Steps followed during installing ceph- > 1) Installing rpms > > Then the steps given in - > http://docs.ceph.com/docs/master/start/quick-ceph-deploy/ , apart from step > 2 and 3 > > Then ceph-deploy osd prepare
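
For reference, with the prepare/activate style of ceph-deploy the poster describes, prepare only lays down the device, while activate is the step that starts the daemon and installs the hooks that bring it back after a reboot (newer ceph-deploy releases fold both into a single "osd create"). A hedged sketch, with host and device names purely illustrative:

# Partition/format the device and lay down the OSD data.
$ ceph-deploy osd prepare node1:/dev/sdb
# Start the OSD and register it so it is activated again on boot.
$ ceph-deploy osd activate node1:/dev/sdb1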

Re: [ceph-users] mgr dashboard differs from ceph status

2018-05-04 Thread Sean Purdy
I get this too, since I last rebooted a server (one of three). ceph -s says: cluster: id: a8c34694-a172-4418-a7dd-dd8a642eb545 health: HEALTH_OK services: mon: 3 daemons, quorum box1,box2,box3 mgr: box3(active), standbys: box1, box2 osd: N osds: N up, N in rgw: 3
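
Not from the thread, but a low-impact way to rule out a stale active mgr is to fail over to a standby and compare the dashboard against "ceph -s" again (box3 is the active mgr in the output above):

$ ceph -s
# Hand the active role to a standby; the dashboard follows the new active mgr.
$ ceph mgr fail box3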

Re: [ceph-users] mgr dashboard differs from ceph status

2018-05-04 Thread John Spray
On Fri, May 4, 2018 at 7:21 AM, Tracy Reed wrote: > My ceph status says: > > cluster: > id: b2b00aae-f00d-41b4-a29b-58859aa41375 > health: HEALTH_OK > > services: > mon: 3 daemons, quorum ceph01,ceph03,ceph07 > mgr: ceph01(active), standbys:

Re: [ceph-users] Place on separate hosts?

2018-05-04 Thread Matthew Vernon
Hi, On 04/05/18 08:25, Tracy Reed wrote: > On Fri, May 04, 2018 at 12:18:15AM PDT, Tracy Reed spake thusly: >> https://jcftang.github.io/2012/09/06/going-from-replicating-across-osds-to-replicating-across-hosts-in-a-ceph-cluster/ > >> How can I tell which way mine is configured? I could post the
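
One direct way to answer the "which way is mine configured" question (a sketch, not quoted from the thread):

# Quick check without decompiling: look at the "type" in each rule's chooseleaf step.
$ ceph osd crush rule dump
# Or pull and decompile the full map for inspection.
$ ceph osd getcrushmap -o crush.bin
$ crushtool -d crush.bin -o crush.txt
$ grep chooseleaf crush.txt
# "step chooseleaf firstn 0 type host" -> copies go on separate hosts
# "step chooseleaf firstn 0 type osd"  -> copies may share a host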

Re: [ceph-users] Place on separate hosts?

2018-05-04 Thread Nicolas Huillard
Le vendredi 04 mai 2018 à 00:25 -0700, Tracy Reed a écrit : > On Fri, May 04, 2018 at 12:18:15AM PDT, Tracy Reed spake thusly: > > https://jcftang.github.io/2012/09/06/going-from-replicating-across- > > osds-to-replicating-across-hosts-in-a-ceph-cluster/ > > > > How can I tell which way mine is

Re: [ceph-users] ceph mgr module not working

2018-05-04 Thread John Spray
On Fri, May 4, 2018 at 7:26 AM, Tracy Reed wrote: > Hello all, > > I can seemingly enable the balancer ok: > > $ ceph mgr module enable balancer > > but if I try to check its status: > > $ ceph balancer status > Error EINVAL: unrecognized command This generally indicates
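
A hedged sketch of the usual checks when a mgr-module command comes back as unrecognized (the cluster needs an active Luminous mgr with the module loaded, and the local ceph CLI has to be new enough to forward the command):

# Is the module actually listed as enabled on the active mgr?
$ ceph mgr module ls
# Are the client and the cluster daemons on matching releases?
$ ceph versions
$ ceph -v
# Retry once the above looks sane.
$ ceph balancer status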

Re: [ceph-users] Place on separate hosts?

2018-05-04 Thread Tracy Reed
On Fri, May 04, 2018 at 12:18:15AM PDT, Tracy Reed spake thusly: > https://jcftang.github.io/2012/09/06/going-from-replicating-across-osds-to-replicating-across-hosts-in-a-ceph-cluster/ > How can I tell which way mine is configured? I could post the whole > crushmap if necessary but it's a bit

Re: [ceph-users] Place on separate hosts?

2018-05-04 Thread Tracy Reed
On Fri, May 04, 2018 at 12:08:35AM PDT, Tracy Reed spake thusly: > I've been using ceph for nearly a year and one of the things I ran into > quite a while back was that it seems like ceph is placing copies of > objects on different OSDs but sometimes those OSDs can be on the same > host by

[ceph-users] Place on separate hosts?

2018-05-04 Thread Tracy Reed
I've been using ceph for nearly a year and one of the things I ran into quite a while back was that it seems like ceph is placing copies of objects on different OSDs but sometimes those OSDs can be on the same host by default. Is that correct? I discovered this by taking down one host and having
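
For reference, the behaviour in question comes down to the failure domain chosen in the pool's CRUSH rule; a minimal hedged sketch of creating a host-level rule and pointing a pool at it (rule and pool names are illustrative):

# Create a replicated rule whose chooseleaf type is "host".
$ ceph osd crush rule create-simple replicated_hosts default host
# Point the pool at it ("crush_ruleset <id>" on pre-Luminous releases).
$ ceph osd pool set mypool crush_rule replicated_hosts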

[ceph-users] ceph mgr module not working

2018-05-04 Thread Tracy Reed
Hello all, I can seemingly enable the balancer ok: $ ceph mgr module enable balancer but if I try to check its status: $ ceph balancer status Error EINVAL: unrecognized command or turn it on: $ ceph balancer on Error EINVAL: unrecognized command $ which ceph /bin/ceph $ rpm -qf /bin/ceph

[ceph-users] mgr dashboard differs from ceph status

2018-05-04 Thread Tracy Reed
My ceph status says: cluster: id: b2b00aae-f00d-41b4-a29b-58859aa41375 health: HEALTH_OK services: mon: 3 daemons, quorum ceph01,ceph03,ceph07 mgr: ceph01(active), standbys: ceph-ceph07, ceph03 osd: 78 osds: 78 up, 78 in data: pools: 4 pools, 3240 pgs