Public bug reported:

Version: 0.48.2-0ubuntu2~cloud0

On a Ceph cluster with 18 OSDs, new object pools are being created with
a pg_num of 8.  Upstream recommends on the order of 100 PGs per OSD:
http://article.gmane.org/gmane.comp.file-systems.ceph.devel/10242

I've worked around this by deleting and recreating the pools with a
higher pg_num before the cluster went into use, but since we aim for
fully automated deployment (using Juju and MAAS), this is suboptimal.
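For reference, a minimal sketch of the workaround, assuming 3-way replication and the ~100-PGs-per-OSD guideline cited above (the target is conventionally rounded up to the next power of two; the pool name "data" and replica count are illustrative, and pool deletion syntax varies between Ceph releases):

```shell
#!/bin/sh
# Compute a pg_num targeting ~100 PGs per OSD, divided by the
# replica count, rounded up to the next power of two.
osds=18
replicas=3                              # assumed pool size (replication factor)
target=$(( osds * 100 / replicas ))     # 18 * 100 / 3 = 600
pg_num=1
while [ "$pg_num" -lt "$target" ]; do
  pg_num=$(( pg_num * 2 ))
done
echo "pg_num=$pg_num"                   # 1024 for this cluster

# Then recreate the pool with the computed pg_num (destroys the pool's
# data, so only safe before the cluster is in use):
#   ceph osd pool delete data
#   ceph osd pool create data $pg_num
```

Automating this in the Juju charm (or making the default scale with the OSD count) would avoid the manual step.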

** Affects: cloud-archive
     Importance: Undecided
         Status: New

** Affects: ceph (Ubuntu)
     Importance: Undecided
         Status: New


** Tags: canonistack

** Also affects: cloud-archive
   Importance: Undecided
       Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to ceph in Ubuntu.
https://bugs.launchpad.net/bugs/1098314

Title:
  pg_num inappropriately low on new pools

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1098314/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
