A custom CRUSH rule with two placement steps can enforce that: first choose both rooms, then independently choose ten hosts within each room, so every PG always gets exactly 10 of its 20 shards per room.
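A minimal sketch of such a rule, assuming the usual `default` root, `room` and `host` bucket types in your CRUSH hierarchy, and a free rule id (all placeholders to adapt):

```
rule ec_8_12_two_rooms {
    id 2                                  # assumed free rule id
    type erasure
    step set_chooseleaf_tries 5
    step set_choose_tries 100
    step take default                     # assumed root bucket
    step choose indep 2 type room         # select both rooms
    step chooseleaf indep 10 type host    # 10 distinct hosts per room -> 20 shards
    step emit
}
```

You can sanity-check the placement before deploying by decompiling the CRUSH map with `crushtool -d`, adding the rule, recompiling with `crushtool -c`, and running `crushtool --test -i <map> --rule 2 --num-rep 20 --show-mappings` to confirm every mapping lands 10 OSDs in each room.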
> On Mar 24, 2023, at 11:04, Danny Webb <[email protected]> wrote:
>
> The question I have regarding this setup is: how can you guarantee that the 12 m chunks will be located evenly across the two rooms? What would happen if, by chance, all 12 chunks ended up in room B? Usually you use failure domains to ensure the distribution of chunks across domains, but you can't do that here, as you are using host as the failure domain but also need room to be included in it somehow.
>
> ________________________________
> From: Fabien Sirjean <[email protected]>
> Sent: 24 March 2023 12:00
> To: ceph-users <[email protected]>
> Subject: [ceph-users] EC profiles where m>k (EC 8+12)
>
> Hi Ceph users!
>
> An interesting EC setup has been proposed to me that I hadn't thought about before.
>
> The scenario: we have two server rooms and want to store ~4 PiB with the ability to lose one server room without losing data or RW availability.
>
> For context, performance is not needed (mostly cold storage, used as a big filesystem).
>
> The idea is to use EC 8+12 over 24 servers (12 in each server room), so if we lose one room we still have half of the EC parts (10/20) and can lose 2 more servers before reaching the point where we lose data.
>
> I find this pretty elegant in a two-site context, as the efficiency is 40% (better than the 33% of three-way replication) and the redundancy is good.
>
> What do you think of this setup? Have you ever used EC profiles with m > k?
>
> Thanks for sharing your thoughts!
>
> Cheers,
>
> Fabien
> _______________________________________________
> ceph-users mailing list -- [email protected]
> To unsubscribe send an email to [email protected]
>
> Danny Webb
> Principal OpenStack Engineer
> [email protected]
