[Group.of.nepali.translators] [Bug 1819437] Re: transient mon<->osd connectivity HEALTH_WARN events don't self clear in 13.2.4

2020-11-13 Thread Dan Hill
The 12.2.13 SRU for bionic and queens is available in -updates (bug
1861793).
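
For anyone verifying the SRU on a bionic node, the package pocket and the
running daemon versions can be checked with standard tooling (a sketch,
output elided; the exact version suffix is whatever the SRU publishes):

  $ apt policy ceph-mon ceph-osd   # should list a 12.2.13-* build from bionic-updates
  $ sudo ceph versions             # versions reported by the running daemons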

** Changed in: ceph (Ubuntu Bionic)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of the
Group of Nepali translators, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1819437

Title:
  transient mon<->osd connectivity HEALTH_WARN events don't self clear
  in 13.2.4

Status in ceph package in Ubuntu:
  Fix Released
Status in ceph source package in Xenial:
  Invalid
Status in ceph source package in Bionic:
  Fix Released
Status in ceph source package in Eoan:
  Fix Released
Status in ceph source package in Focal:
  Fix Released

Bug description:
  In a recently juju-deployed 13.2.4 Ceph cluster (part of an OpenStack
  Rocky deployment) we experienced a HEALTH_WARN event that appeared to
  be associated with a short, planned network outage but did not clear
  without human intervention:

  health: HEALTH_WARN
          6 slow ops, oldest one blocked for 112899 sec, daemons [mon.shinx,mon.sliggoo] have slow ops.
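
  The excerpt above appears to be the health section of ceph status; the
  same warning can also be listed on its own with the standard health
  command (output elided here):

  ubuntu@juju-df624b-4-lxd-14:~$ sudo ceph health detail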

  We can correlate this back to a known network event, but all OSDs are
  up and the cluster otherwise looks healthy:

  ubuntu@juju-df624b-4-lxd-14:~$ sudo ceph osd tree
  ID  CLASS WEIGHT  TYPE NAME         STATUS REWEIGHT PRI-AFF
   -1       7.64076 root default
  -13       0.90970     host happiny
    8   hdd 0.90970         osd.8          up      1.0     1.0
   -5       0.90970     host jynx
    9   hdd 0.90970         osd.9          up      1.0     1.0
   -3       1.63739     host piplup
    0   hdd 0.81870         osd.0          up      1.0     1.0
    3   hdd 0.81870         osd.3          up      1.0     1.0
   -9       1.63739     host raichu
    5   hdd 0.81870         osd.5          up      1.0     1.0
    6   hdd 0.81870         osd.6          up      1.0     1.0
  -11       0.90919     host shinx
    7   hdd 0.90919         osd.7          up      1.0     1.0
   -7       1.63739     host sliggoo
    1   hdd 0.81870         osd.1          up      1.0     1.0
    4   hdd 0.81870         osd.4          up      1.0     1.0
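
  Monitor quorum can be sanity-checked the same way (standard mon
  commands, output elided here):

  ubuntu@juju-df624b-4-lxd-14:~$ sudo ceph mon stat
  ubuntu@juju-df624b-4-lxd-14:~$ sudo ceph quorum_status --format json-pretty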

  
  ubuntu@shinx:~$ sudo ceph daemon mon.shinx ops
  {
      "ops": [
          {
              "description": "osd_failure(failed timeout osd.0 10.48.2.158:6804/211414 for 31sec e911 v911)",
              "initiated_at": "2019-03-07 00:40:43.282823",
              "age": 113953.696205,
              "duration": 113953.696225,
              "type_data": {
                  "events": [
                      {
                          "time": "2019-03-07 00:40:43.282823",
                          "event": "initiated"
                      },
                      {
                          "time": "2019-03-07 00:40:43.282823",
                          "event": "header_read"
                      },
                      {
                          "time": "0.00",
                          "event": "throttled"
                      },
                      {
                          "time": "0.00",
                          "event": "all_read"
                      },
                      {
                          "time": "0.00",
                          "event": "dispatched"
                      },
                      {
                          "time": "2019-03-07 00:40:43.283360",
                          "event": "mon:_ms_dispatch"
                      },
                      {
                          "time": "2019-03-07 00:40:43.283360",
                          "event": "mon:dispatch_op"
                      },
                      {
                          "time": "2019-03-07 00:40:43.283360",
                          "event": "psvc:dispatch"
                      },
                      {
                          "time": "2019-03-07 00:40:43.283370",
                          "event": "osdmap:preprocess_query"
                      },
                      {
                          "time": "2019-03-07 00:40:43.283371",
                          "event": "osdmap:preprocess_failure"
                      },
                      {
                          "time": "2019-03-07 00:40:43.283386",
                          "event": "osdmap:prepare_update"
                      },
                      {
                          "time": "2019-03-07 00:40:43.283386",
                          "event": "osdmap:prepare_failure"
                      }
                  ],
                  "info": {
                      "seq": 48576937,
                      "src_is_mon": false,
                      "source": "osd.8 10.48.2.206:6800/1226277",
                      "forwarded_to_leader": false
                  }
              }
          },
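
  The human intervention referenced above typically amounts to restarting
  the monitors that report the slow ops so that they drop the stale
  osd_failure entries. A minimal sketch, assuming systemd-managed mons
  named after their hosts as in this deployment:

  ubuntu@shinx:~$ sudo systemctl restart ceph-mon@shinx       # clears the ops held by mon.shinx
  ubuntu@sliggoo:~$ sudo systemctl restart ceph-mon@sliggoo   # clears the ops held by mon.sliggoo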
  

[Group.of.nepali.translators] [Bug 1819437] Re: transient mon<->osd connectivity HEALTH_WARN events don't self clear in 13.2.4

2020-04-15 Thread Dan Hill
** Changed in: ceph (Ubuntu Eoan)
   Status: In Progress => Fix Released

Status in ceph package in Ubuntu:
  Fix Released
Status in ceph source package in Xenial:
  Invalid
Status in ceph source package in Bionic:
  In Progress
Status in ceph source package in Eoan:
  Fix Released
Status in ceph source package in Focal:
  Fix Released

[Group.of.nepali.translators] [Bug 1819437] Re: transient mon<->osd connectivity HEALTH_WARN events don't self clear in 13.2.4

2020-02-13 Thread Eric Desrochers
** Changed in: ceph (Ubuntu Focal)
   Status: New => Fix Released

** Changed in: ceph (Ubuntu Xenial)
   Status: New => Invalid

Status in ceph package in Ubuntu:
  Fix Released
Status in ceph source package in Xenial:
  Invalid
Status in ceph source package in Bionic:
  New
Status in ceph source package in Eoan:
  New
Status in ceph source package in Focal:
  Fix Released
