@dan, could you fill in the [Regression Potential] section?
I already completed the rest.

I want to make sure I don't miss anything in the regression potential,
and I'd appreciate it if you could take 5 minutes to do it.

- Eric

** Description changed:

  [Impact]
+ 
+ https://docs.ceph.com/docs/master/rados/operations/placement-groups/
+ 
+ VIEWING PG SCALING RECOMMENDATIONS
+ You can view each pool, its relative utilization, and any suggested changes to the PG count with this command:
+ 
+ ceph osd pool autoscale-status
+ https://docs.ceph.com/docs/mimic/mgr/balancer/
+ 
+ STATUS
+ The current status of the balancer can be checked at any time with:
+ 
+ ceph balancer status
  
  [Test Case]
  
+ * Install the latest sosreport found in -updates
+ * Run sosreport -o ceph or sos report -o ceph
+ * Look at the content inside /path_to_sosreport/sos_commands/ceph/
+ * Make sure the output of the 2 new commands is present there.
+ 
  [Regression Potential]
+ 
  
  [Other Info]
  
  [Original Description]
  It would be nice to collect:
  
  ceph osd pool autoscale-status
  ceph balancer status
  
  Upstream report: https://github.com/sosreport/sos/issues/2211
  Upstream commit: https://github.com/sosreport/sos/commit/52f4661e2b594134b98e2967b02cc860d7963fef

** Description changed:

  [Impact]
+ 
+ It would be nice to collect:
+ 
+ ceph osd pool autoscale-status
+ ceph balancer status
  
  https://docs.ceph.com/docs/master/rados/operations/placement-groups/
  
  VIEWING PG SCALING RECOMMENDATIONS
  You can view each pool, its relative utilization, and any suggested changes to the PG count with this command:
  
  ceph osd pool autoscale-status
  https://docs.ceph.com/docs/mimic/mgr/balancer/
  
  STATUS
  The current status of the balancer can be checked at any time with:
  
  ceph balancer status
  
  [Test Case]
  
  * Install the latest sosreport found in -updates
  * Run sosreport -o ceph or sos report -o ceph
  * Look at the content inside /path_to_sosreport/sos_commands/ceph/
  * Make sure the output of the 2 new commands is present there.
  
  [Regression Potential]
  
- 
  [Other Info]
  
  [Original Description]
  It would be nice to collect:
  
  ceph osd pool autoscale-status
  ceph balancer status
  
  Upstream report: https://github.com/sosreport/sos/issues/2211
  Upstream commit: https://github.com/sosreport/sos/commit/52f4661e2b594134b98e2967b02cc860d7963fef

** Description changed:

  [Impact]
  
  It would be nice to collect:
  
  ceph osd pool autoscale-status
  ceph balancer status
  
  https://docs.ceph.com/docs/master/rados/operations/placement-groups/
  
  VIEWING PG SCALING RECOMMENDATIONS
  You can view each pool, its relative utilization, and any suggested changes to the PG count with this command:
  
  ceph osd pool autoscale-status
  https://docs.ceph.com/docs/mimic/mgr/balancer/
  
  STATUS
  The current status of the balancer can be checked at any time with:
  
  ceph balancer status
  
  [Test Case]
  
  * Install the latest sosreport found in -updates
- * Run sosreport -o ceph or sos report -o ceph
+ * Run sosreport -o ceph (3.X or 4.X) or sos report -o ceph (4.X only)
  * Look at the content inside /path_to_sosreport/sos_commands/ceph/
  * Make sure the output of the 2 new commands is present there.
  
  [Regression Potential]
  
  [Other Info]
  
  [Original Description]
  It would be nice to collect:
  
  ceph osd pool autoscale-status
  ceph balancer status
  
  Upstream report: https://github.com/sosreport/sos/issues/2211
  Upstream commit: https://github.com/sosreport/sos/commit/52f4661e2b594134b98e2967b02cc860d7963fef
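
For reviewers, a minimal sketch of the kind of change the upstream
commit makes, using the sos 4.x plugin API (add_cmd_output). The class
body and the ceph_cmds list below are illustrative assumptions, not a
verbatim copy of the commit:

from sos.report.plugins import Plugin, RedHatPlugin, UbuntuPlugin

class Ceph(Plugin, RedHatPlugin, UbuntuPlugin):
    """Sketch of the ceph plugin (abbreviated)."""

    short_desc = 'CEPH distributed storage'
    plugin_name = 'ceph'
    profiles = ('storage',)

    def setup(self):
        # existing subcommands are abbreviated; the last two entries
        # are the ones this bug asks sos to start collecting
        ceph_cmds = [
            'status',
            'health detail',
            'osd pool autoscale-status',
            'balancer status',
        ]
        # each entry runs as "ceph <subcommand>" and its output is
        # written under sos_commands/ceph/ in the generated report
        self.add_cmd_output(['ceph %s' % c for c in ceph_cmds])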

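For the last step of the test case, a small, hypothetical helper that
checks an extracted report for the two new output files. The file names
assume sos's usual mangling of spaces to underscores; adjust them if
the actual names differ:

import os
import sys

# path to an extracted sosreport, passed on the command line
report_dir = sys.argv[1]

# assumed output file names (command text with spaces -> underscores)
expected = ['ceph_osd_pool_autoscale-status', 'ceph_balancer_status']

for name in expected:
    path = os.path.join(report_dir, 'sos_commands', 'ceph', name)
    print('%-35s %s' % (name, 'FOUND' if os.path.isfile(path) else 'MISSING'))
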
-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1893109

Title:
  [plugin][ceph] collect ceph balancer and pg-autoscale status

To manage notifications about this bug go to:
https://bugs.launchpad.net/sosreport/+bug/1893109/+subscriptions
