> We're in a situation where we've got nodes that we can use, so it's mostly 
> about the drive cost and configuration.

Ack.

> The nodes are in chassis of 4 nodes each. Would there be any problem with 
> putting a 4+2 or 3+3 config into 8 nodes?

With `host` as the CRUSH failure domain, that works.
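For a 4+2 pool that would look something like this (profile and pool names
here are placeholders):

    ceph osd erasure-code-profile set ec42 \
        k=4 m=2 \
        crush-failure-domain=host
    ceph osd pool create ecpool erasure ec42

With 8 hosts and only 6 shards per PG, CRUSH has a little slack for picking
placements, which helps.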

> 
> And any problem with having 7 monitors running on Proxmox nodes that are part 
> of the Ceph cluster but aren't running any OSDs?

Don't deploy 7.  5 is plenty. There's no need to couple mons with OSDs, so 
running them on OSD-less nodes is fine. It's a good idea, though, to spread 
the mons across failure domains, so that losing one host or chassis doesn't 
lead to a loss of quorum.
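If memory serves, on Proxmox the spread is just a matter of which five nodes
you run the mon-create step on:

    # run on each of the five chosen nodes; pick nodes in different
    # chassis so no single chassis holds a quorum majority
    pveceph mon create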

> 
> (Obviously we wouldn't be able to survive the loss of a chassis in this 
> config.)

Indeed.  I suggest, if Proxmox lets you, encoding the chassis into your CRUSH 
topology, even if you don't use it today. Maybe in the future you can add 
chassis, which would expand your CRUSH options.

root default
 chassis chassis1  
  host host1
  host host2
  host host3
  host host4
 chassis chassis2
  host host5
...
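If you end up doing that by hand rather than through Proxmox's tooling, the
bucket shuffling would look roughly like this (bucket names are placeholders):

    ceph osd crush add-bucket chassis1 chassis
    ceph osd crush move chassis1 root=default
    ceph osd crush move host1 chassis=chassis1
    ceph osd crush move host2 chassis=chassis1
    # ... and likewise for the remaining hosts and chassis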

With `host` as the CRUSH failure domain, you do run the risk of a PG landing 
on 3 or even 4 nodes within a single chassis, which would not be good.  
A custom CRUSH rule might let you constrain placement so that PGs always 
span both chassis; writing those is beyond my ken, but someone else on the 
list could likely help with that rarefied task.
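Purely as an illustration of the shape such a rule might take (untested, 
names made up, and exactly the sort of thing worth having the list 
sanity-check), for a 6-wide EC pool across two chassis:

    rule ec-span-chassis {
        id 10
        type erasure
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        # pick both chassis, then 3 distinct hosts in each:
        # 2 x 3 = 6 OSDs, at most 3 shards per chassis
        step choose indep 2 type chassis
        step chooseleaf indep 3 type host
        step emit
    }

spliced in via the usual getcrushmap / crushtool decompile, edit, recompile, 
setcrushmap dance. That still doesn't make a chassis loss survivable with 
4+2, but it keeps shards from piling up in one chassis.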

Alternatively, there are the shiny new MSR CRUSH rules:
https://docs.ceph.com/en/latest/dev/crush-msr/
https://www.youtube.com/watch?v=JpkLPkizUt4
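If I'm reading those right, you can get the chassis-spanning behavior 
declaratively from the EC profile, along these lines (option names as I 
understand them from the Squid-era docs; verify against your release 
before relying on this):

    ceph osd erasure-code-profile set ec42msr \
        k=4 m=2 \
        crush-failure-domain=chassis \
        crush-num-failure-domains=2 \
        crush-osds-per-failure-domain=3

which should generate an MSR rule that picks 2 chassis and 3 OSDs in each. 
Whether it also spreads those 3 across distinct hosts within a chassis is 
something to confirm before trusting it with data.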

> 
> Andrew