Thanks heaps Nathan. That's what we thought and wanted to implement, but I
wanted to double-check with the community first.
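For the archives, a minimal sketch of the steps Nathan describes below; the pool name "mypool" and rule name "single-node" are placeholders, and "default" is the usual CRUSH root name on a stock deployment:

```shell
# Create a replicated rule that uses "osd" rather than "host" as the
# failure domain, so replicas can land on different OSDs of one host:
ceph osd crush rule create-replicated single-node default osd

# Point the pool at that rule while only one host exists:
ceph osd pool set mypool crush_rule single-node

# Later, once there are enough hosts (3 for replicated size = 3),
# switch the pool back to the stock rule and Ceph will rebalance:
ceph osd pool set mypool crush_rule replicated_rule
```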


Cheers


On Thu, Nov 21, 2019 at 2:42 PM Nathan Fish <[email protected]> wrote:

> The default crush rule uses "host" as the failure domain, so in order
> to deploy on one host you will need to make a crush rule that
> specifies "osd". Then simply adding more hosts with osds will result
> in automatic rebalancing. Once you have enough hosts to satisfy the
> crush rule (3 hosts for replicated size = 3) you can change the pool(s)
> back to the default rule.
>
> On Thu, Nov 21, 2019 at 7:46 AM Alfredo De Luca
> <[email protected]> wrote:
> >
> > Hi all.
> > We are doing some tests on how to scale out nodes on Ceph Nautilus.
> > Basically we want to try to install Ceph on one node and scale up to 2+
> nodes. How to do so?
> >
> > Every node has 6 disks; maybe we can use the crushmap to achieve
> > this?
> >
> > Any thoughts/ideas/recommendations?
> >
> >
> > Cheers
> >
> >
> > --
> > Alfredo
> >
> > _______________________________________________
> > ceph-users mailing list
> > [email protected]
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>


-- 
*Alfredo*
