A key piece of the puzzle is that the client always has an up-to-date osdmap (which 
includes the crush map).  If its map is out of date, it has to fetch a new one 
before it can read from or write to the cluster.  That way the client never 
acts on stale information: if you add or remove storage, the client always has 
the current map and knows where the current copies of the data are.

This can slow down your cluster if the osdmap is being updated frequently, which 
can be caused, for example, by deleting a lot of snapshots.

________________________________

[cid:[email protected]]<https://storagecraft.com>       David 
Turner | Cloud Operations Engineer | StorageCraft Technology 
Corporation<https://storagecraft.com>
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2760 | Mobile: 385.224.2943

________________________________

If you are not the intended recipient of this message or received it 
erroneously, please notify the sender and delete it, together with any 
attachments, and be advised that any dissemination or copying of this message 
is prohibited.

________________________________
From: ceph-users [[email protected]] on behalf of girish 
kenkere [[email protected]]
Sent: Thursday, February 16, 2017 12:43 PM
To: [email protected]
Subject: [ceph-users] Question regarding CRUSH algorithm

Hi, I have a question regarding the CRUSH algorithm - please let me know how this 
works. The CRUSH paper describes how, given an object, we select OSDs via two 
mappings - first object to PG, and then PG to OSD.

This PG-to-OSD mapping is something I don't understand. It uses the PG number, 
the cluster map, and the placement rules. How is it guaranteed to return the 
correct OSDs for future reads after the cluster map/placement rules have changed 
due to nodes coming and going?
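The two-step mapping in the question can be sketched in simplified form. This is an illustration only: the hash-ranking stand-in below is not the real CRUSH function (which walks a weighted device hierarchy according to the placement rules), and `PG_NUM`, `obj_to_pg`, and `pg_to_osds` are invented names. The point it shows is that step 1 is stable while step 2 is a pure, deterministic function of the PG and the current cluster map, so the answer changes correctly whenever the map does.

```python
# Simplified sketch of CRUSH-style placement, not Ceph's actual code.
# Step 1 (stable): object name -> PG, via a hash modulo the PG count.
# Step 2 (map-dependent): PG -> ordered OSD list, via a deterministic
# pseudo-random function of (pg, cluster map). Re-running step 2 with
# a new map yields the new placement, which is why clients must hold
# a current map before reading or writing.

import hashlib

PG_NUM = 8  # number of placement groups in the (toy) pool

def obj_to_pg(name: str) -> int:
    """Hash the object name into a placement group."""
    h = int(hashlib.md5(name.encode()).hexdigest(), 16)
    return h % PG_NUM

def pg_to_osds(pg: int, osds: list, replicas: int = 2) -> list:
    """Deterministically rank the OSDs in the map for this PG.

    A plain-hash stand-in for CRUSH's weighted hierarchy walk.
    """
    ranked = sorted(
        osds,
        key=lambda o: hashlib.md5(f"{pg}-{o}".encode()).hexdigest(),
    )
    return ranked[:replicas]

osdmap_v1 = [0, 1, 2, 3]
pg = obj_to_pg("myobject")
print(pg, pg_to_osds(pg, osdmap_v1))

# After an OSD leaves, every party re-computes from the new map and
# agrees on the new placement -- no central lookup table is needed.
osdmap_v2 = [0, 1, 3]
print(pg, pg_to_osds(pg, osdmap_v2))
```

So "correctness" for future reads is not stored anywhere; it is guaranteed by everyone (clients and OSDs) computing the same function over the same, current map, which is exactly why the client must refresh its map before any I/O.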

Thanks
Girish
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
