I think your best approach would be to create a smaller RBD pool, migrate the 
10% of RBDs that will remain RBDs into it, and then use the old pool for just 
CephFS.
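Roughly something like the following (a sketch only; the pool name, PG count, 
and image names are placeholders, and on a cluster of this era rbd cp or 
export/import is the usual way to move an image between pools):

    # create a smaller pool sized for the ~10% of images staying on RBD
    ceph osd pool create rbd-small 128

    # copy an image into the new pool, then remove the original once verified
    rbd cp old-pool/image1 rbd-small/image1
    rbd rm old-pool/image1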

 

From: ceph-users [mailto:[email protected]] On Behalf Of David 
Turner
Sent: 07 January 2017 23:55
To: [email protected]; [email protected]
Subject: Re: [ceph-users] cephfs AND rbds

 

Yes, the reasoning is the number of PGs.  I currently have all of my data 
stored in various RBDs in a pool and am planning to move most of it out of the 
RBDs into CephFS.  The pool would have the exact same use case that it does 
now, just with 90% of its data in CephFS rather than RBDs.  My OSDs aren't at 
the point of having too many PGs on them; I just wanted to mitigate the memory 
needs of the OSD processes.
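For reference, something like the following shows per-OSD utilisation, 
including how many PGs each OSD currently holds (the PGS column):

    # per-OSD usage and PG counts
    ceph osd df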


From: Nick Fisk [[email protected]]
Sent: Saturday, January 07, 2017 3:21 PM
To: David Turner; [email protected]
Subject: RE: cephfs AND rbds

Technically I think there is no reason why you couldn't do this, but I think it 
is inadvisable. There was a similar thread a while back where somebody had done 
this and it caused problems when he was trying to do maintenance/recovery 
further down the line.

 

I'm assuming you want to do this because you have already created a pool with 
the maximum number of PGs per OSD, and extra pools would take you further over 
this limit? If that's the case, I would just bump up the limit; it's not worth 
the risk.
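For example, on a Jewel-era cluster the warning threshold is 
mon_pg_warn_max_per_osd; the value 400 below is purely illustrative:

    # in ceph.conf, under [global]
    mon_pg_warn_max_per_osd = 400

    # or injected at runtime on the mons
    ceph tell mon.* injectargs '--mon_pg_warn_max_per_osd 400'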

 

From: ceph-users [mailto:[email protected]] On Behalf Of David 
Turner
Sent: 07 January 2017 00:54
To: [email protected]
Subject: [ceph-users] cephfs AND rbds

 

Can CephFS and RBDs use the same pool to store data?  I know you would need a 
separate metadata pool for CephFS, but could they share the same data pool?
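For context, a sketch of how the pools fit together when creating a filesystem 
(the names and PG count are placeholders; the data pool could be a 
pre-existing one):

    # CephFS needs its own metadata pool, but the data pool is named separately
    ceph osd pool create cephfs_metadata 64
    ceph fs new myfs cephfs_metadata existing_data_pool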
