Public bug reported:

Version: 0.48.2-0ubuntu2~cloud0

Our Ceph deployments typically involve multiple OSDs per host with no
disk redundancy. However, the default crush rules appear to distribute
by OSD, not by host, which I believe will not prevent replicas from
landing on the same host.

I've been working around this by updating the crush rules as follows and
installing the resulting crushmap in the cluster, but since we aim for
fully automated deployment (using Juju and MAAS), this is suboptimal.

--- crushmap.txt        2013-01-10 20:33:21.265809301 +0000
+++ crushmap.new        2013-01-10 20:32:49.496745778 +0000
@@ -104,7 +104,7 @@
        min_size 1
        max_size 10
        step take default
-       step choose firstn 0 type osd
+       step chooseleaf firstn 0 type host
        step emit
 }
 rule metadata {
@@ -113,7 +113,7 @@
        min_size 1
        max_size 10
        step take default
-       step choose firstn 0 type osd
+       step chooseleaf firstn 0 type host
        step emit
 }
 rule rbd {
@@ -122,7 +122,7 @@
        min_size 1
        max_size 10
        step take default
-       step choose firstn 0 type osd
+       step chooseleaf firstn 0 type host
        step emit
 }

** Affects: cloud-archive
     Importance: Undecided
         Status: New

** Affects: ceph (Ubuntu)
     Importance: Undecided
         Status: New


** Tags: canonistack

** Also affects: ceph (Ubuntu)
   Importance: Undecided
       Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to ceph in Ubuntu.
https://bugs.launchpad.net/bugs/1098320

Title:
  ceph: default crush rule does not suit multi-OSD deployments

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1098320/+subscriptions
