Hi James,
at GARR we recently tested Cinder multi-backend support with the following
idea in mind:
support 3 different backends:

   - a default one, for general-purpose disks like virtual machine boot
   disks: replicated pool with replica factor equal to 3
   - a reduced redundancy one: replicated pool with replica factor 2, which
   should slightly improve latency
   - a large capacity one: erasure-coded (possibly with a small frontend
   replicated pool)
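
On the Ceph side, the three pools above could be created roughly like this
(a sketch only — pool names, PG counts and the EC profile k=4/m=2 are
hypothetical and should be adapted to your cluster):

```
# replicated pool, size 3 (the general-purpose default)
ceph osd pool create cinder-default 128 128 replicated
ceph osd pool set cinder-default size 3

# reduced-redundancy replicated pool, size 2
ceph osd pool create cinder-reduced 128 128 replicated
ceph osd pool set cinder-reduced size 2

# erasure-coded pool; RBD on EC pools needs overwrites enabled (Luminous+)
ceph osd erasure-code-profile set ec-profile k=4 m=2
ceph osd pool create cinder-ec 128 128 erasure ec-profile
ceph osd pool set cinder-ec allow_ec_overwrites true
```

Note that RBD images cannot keep their metadata on an EC pool, hence the
small replicated "frontend" pool mentioned above (via `rbd_pool` for data
plus a replicated metadata pool, or the `--data-pool` mechanism).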

Premise: we have a Juju-deployed OpenStack (spanning 3 geographical data centers).

We configured Cinder such that it allows selection between multiple “Volume
Types”, where each Volume Type points to a distinct Ceph pool within the
same Ceph cluster.
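
For reference, a minimal sketch of what the resulting Cinder configuration
might look like (backend and pool names are hypothetical; with the Juju
cinder-ceph charm the equivalent stanzas are generated per deployed charm
application rather than written by hand):

```
# cinder.conf (fragment) -- one stanza per Ceph pool
[DEFAULT]
enabled_backends = ceph-default,ceph-reduced,ceph-ec

[ceph-default]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph-default
rbd_pool = cinder-default
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder

[ceph-reduced]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph-reduced
rbd_pool = cinder-reduced
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
```

Each Volume Type is then mapped to a backend via its extra spec, e.g.
`openstack volume type create default` followed by
`openstack volume type set --property volume_backend_name=ceph-default default`.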

This is the simplest configuration, as it involves Cinder configuration
alone. Volumes that are created can later be attached to running
instances, but all instances will have their boot disk on the default pool.

We faced some issues, as reported in detail here:

It would be interesting to find a way to select the pool for the boot disk
of a VM as well.
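
One possible workaround (we have not validated it on our deployment) is to
boot from volume: first create a bootable volume of the desired Volume
Type, then launch the instance from that volume, so the root disk lands on
the selected pool. Image, flavor and type names below are hypothetical:

```
openstack volume create --image bionic --type large-capacity --size 20 boot-vol
openstack server create --volume boot-vol --flavor m1.small my-instance
```

This sidesteps Nova's default behaviour of placing the boot disk on the
default pool, at the cost of managing the root disk as a Cinder volume.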

Any comment, idea, or "whatever" (also on the doc) is very much appreciated.

best Alex

Dr. Alex Barchiesi
Senior cloud architect
Art-Science relationships responsible

GARR CSD department

Rome GARR: +39 06 4962 2302
Lausanne EPFL: +41 (0) 774215266

linkedin: alex barchiesi
I started with nothing and I still have most of it.

On Sat, Apr 14, 2018 at 3:25 AM, James Beedy <jamesbe...@gmail.com> wrote:

> Looking for examples that describe how to consume multiple ceph backends
> using the cinder-ceph charm.
> Thanks
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at: https://lists.ubuntu.com/
> mailman/listinfo/juju