Re: [ceph-users] MDS does not always failover to hot standby on reboot

2018-09-03 Thread William Lawton
Which configuration option determines the MDS timeout period?

William Lawton

From: Gregory Farnum
Sent: Thursday, August 30, 2018 5:46 PM
To: William Lawton
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] MDS does not always failover to hot standby on reboot

Yes, this is a consequence
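For reference (the truncated reply above does not name it): in Luminous/Mimic-era Ceph, the MDS failover window is governed by the MDS beacon settings. A minimal ceph.conf sketch, with what I believe are the upstream defaults (verify against your release):

```ini
[global]
# Interval, in seconds, at which each MDS sends a beacon to the monitors
# (upstream default: 4).
mds_beacon_interval = 4
# Seconds the monitors will wait without a beacon before marking an MDS
# laggy and promoting a standby (upstream default: 15). Values shown are
# illustrative defaults, not recommendations.
mds_beacon_grace = 15
```

Raising mds_beacon_grace makes failover slower but more tolerant of transient stalls; lowering it speeds failover at the risk of spurious takeovers.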

Re: [ceph-users] MDS does not always failover to hot standby on reboot

2018-08-30 Thread William Lawton
hey’ve been through their own election and a full mds timeout window.

On Thu, Aug 30, 2018 at 11:46 AM William Lawton <william.law...@irdeto.com> wrote:
Thanks for the response Greg. We did originally have co-located mds and mon but realised this wasn't a good idea early on and s

Re: [ceph-users] MDS does not always failover to hot standby on reboot

2018-08-30 Thread William Lawton
ee that will *prevent* it from connecting to its own monitor if there are failures or reconnects after first startup. -Greg

On Thu, Aug 30, 2018 at 8:38 AM William Lawton <william.law...@irdeto.com> wrote:
Hi. We have a 5 node Ceph cluster (refer to ceph -s output at bottom of

[ceph-users] MDS does not always failover to hot standby on reboot

2018-08-30 Thread William Lawton
, 1 up:standby-replay
    osd: 4 osds: 4 up, 4 in

  data:
    pools:   2 pools, 200 pgs
    objects: 554 objects, 980 MiB
    usage:   7.9 GiB used, 1.9 TiB / 2.0 TiB avail
    pgs:     200 active+clean

  io:
    client:  1.5 MiB/s rd, 810 KiB/s wr, 286 op/s rd, 218 op/s wr

Hope someone can he
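The "1 up:standby-replay" entry in the status above is the hot-standby MDS. As an illustration (assumed setup, not quoted from the thread; the section name "mds.standby" is a hypothetical daemon name), a pre-Mimic ceph.conf enables it per daemon like this:

```ini
[mds.standby]
# Have this daemon continuously follow the active MDS's journal so that
# failover can skip most of the journal-replay phase.
mds_standby_replay = true
```

On Mimic and later the equivalent is a filesystem-level flag, e.g. `ceph fs set <fs_name> allow_standby_replay true`.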

[ceph-users] prevent unnecessary MON leader re-election

2018-08-29 Thread William Lawton
up:standby-replay
    osd: 4 osds: 4 up, 4 in

  data:
    pools:   2 pools, 200 pgs
    objects: 554 objects, 980 MiB
    usage:   7.9 GiB used, 1.9 TiB / 2.0 TiB avail
    pgs:     200 active+clean

  io:
    client:  1.5 MiB/s rd, 810 KiB/s wr, 286 op/s rd, 218 op/s wr

William Lawton
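For context on the subject line (not taken from the truncated snippet): the monitor options that commonly bound leader elections and lease renewal in this era of Ceph are sketched below. The values are the assumed upstream defaults; verify them against your release before tuning.

```ini
[mon]
# Length, in seconds, of the lease the leader grants on cluster maps
# (upstream default: 5). Missed lease renewals can trigger an election.
mon_lease = 5
# Upper bound, in seconds, on an election round before it is restarted
# (upstream default: 5).
mon_election_timeout = 5
```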

Re: [ceph-users] Intermittent client reconnect delay following node fail

2018-08-23 Thread William Lawton
re-election if the current MON leader is lost? Thanks

William Lawton

-----Original Message-----
From: William Lawton
Sent: Wednesday, August 01, 2018 2:05 PM
To: 'John Spray'
Cc: ceph-users@lists.ceph.com; Mark Standley
Subject: RE: [ceph-users] Intermittent client reconnect delay

Re: [ceph-users] Intermittent client reconnect delay following node fail

2018-08-01 Thread William Lawton
logs like the following:

Aug 1 10:39:06 dub-ditv-sim-goldenimage kernel: libceph: mon0 10.18.49.35:6789 session lost, hunting for new mon

We're currently exploring whether keeping the mds and mon daemons on separate servers has less impact on the client when either one is lost. Will
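The "hunting for new mon" message above is the kernel client failing over between monitors. As a sketch (the second and third addresses and the mount path are hypothetical, not from the thread), listing every monitor in the mount source gives the client alternatives to hunt to:

```ini
# /etc/fstab entry for a kernel CephFS mount; all monitor endpoints are
# listed so the client can reconnect to a surviving mon after a failure.
10.18.49.35:6789,10.18.49.36:6789,10.18.49.37:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,_netdev,noatime  0  0
```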

Re: [ceph-users] Intermittent client reconnect delay following node fail

2018-08-01 Thread William Lawton
mount point for a period of time and, if so, how long we should consider an abnormal period.

William Lawton

-----Original Message-----
From: John Spray
Sent: Tuesday, July 31, 2018 11:17 AM
To: William Lawton
Cc: ceph-users@lists.ceph.com; Mark Standley
Subject: Re: [ceph-users] Intermitt

[ceph-users] Intermittent client reconnect delay following node fail

2018-07-30 Thread William Lawton
.0 TiB avail
    pgs:     200 active+clean

Thanks
William Lawton
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com