Ah, really good question :)

I believe it is stored locally on the monitor host. Saving the cluster map into 
RADOS would be a chicken-and-egg problem: the monitors would need the cluster 
map to locate the objects that hold the cluster map.

This is supported by the following two sections in the docs: 
 1. https://docs.ceph.com/docs/master/rados/configuration/mon-config-ref/#background
 2. https://docs.ceph.com/docs/master/rados/configuration/mon-config-ref/#data
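
For what it's worth, you can also ask the monitors directly for their 
(authoritative) copy of the maps. Something like this should work from any node 
with admin access (a quick sketch; the /tmp paths are just examples):

  ceph mon dump                              # monitor map
  ceph osd dump | head                       # start of the OSD map
  ceph osd getcrushmap -o /tmp/crushmap.bin  # binary CRUSH map
  crushtool -d /tmp/crushmap.bin -o /tmp/crushmap.txt   # decompile to readable text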

I can confirm that on my test cephadm installation (Octopus release) the 
`/var/lib/ceph/<cluster_id>/mon.<hostname>` directory has different timestamps on 
different monitors (it was flushed to disk at different times on each).
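
If you want to poke at the on-disk store itself, something like the following 
should show it (paths assume a cephadm deployment; <cluster_id> and <hostname> 
are placeholders, and store.db is the monitor's local key/value store):

  ls -l /var/lib/ceph/<cluster_id>/mon.<hostname>/
  du -sh /var/lib/ceph/<cluster_id>/mon.<hostname>/store.db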

G.

On Tue, Aug 4, 2020, at 1:34 PM, Edward kalk wrote:
> Thank you, Gregor, for the reply. I have read that page. It does say what a 
> CRUSH map is and how it's used by monitors and OSDs, but it does not say how or 
> where the map is stored in the system. Is it replicated on all OSDs, via a 
> distributed hidden pool? Is it stored on the local Linux disk of the host 
> operating system on which the monitor daemons run? 
> -Ed
> 
>> On Aug 3, 2020, at 4:43 PM, Gregor Krmelj <[email protected]> wrote:
>> 
>> The CRUSH map is part of the so-called "cluster map", of which the Ceph 
>> Monitors maintain a master copy. This is precisely why you have multiple 
>> monitors: for high availability in case a monitor goes down.
>> 
>> This is all explained quite well in the architecture documentation: 
>> https://docs.ceph.com/docs/master/architecture/#cluster-map.
>> 
>> Regards,
>> G.
>> 
>> On Mon, Aug 3, 2020, at 2:01 PM, Edward kalk wrote:
>>> The metadata that tells Ceph where all data is located is, to my understanding, 
>>> the CRUSH map. Where is it stored? Is it redundantly distributed so as to 
>>> protect against node failure? What safeguards this critical cluster metadata?
>>> 
>>> -Ed
>>> 
>> 
>> 

_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
