Hello,
On Mon, 6 Mar 2017 16:06:51 +0700 Vy Nguyen Tan wrote:
> Hi Jiajia Zhong,
>
> I'm using mixed SSD and HDD on the same node and I set it up following
> https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/,
> and I don't get any problems running SSD and HDD on the same node. Now I want
> to increase Ceph throughput by increasing the network ...
We are using a mixed setup too: Intel PCIe 400G SSD * 8 for the metadata pool and the
cache tiering pool of our cephfs.

Plus: 'osd crush update on start = false', as Vladimir replied.
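In case it helps, here is a minimal sketch of where that option lives, assuming the
two-root (root ssd / root hdd) layout from the blog post above; the host names and
OSD numbers are made up for illustration:

    [osd]
    # keep each OSD where we placed it in the CRUSH map instead of letting it
    # re-register under its default host bucket every time the daemon starts
    osd crush update on start = false

With that set, each OSD has to be placed by hand once, e.g.:

    ceph osd crush set osd.0 1.0 root=ssd host=node1-ssd
    ceph osd crush set osd.1 3.64 root=hdd host=node1-hdd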
2017-03-03 20:33 GMT+08:00 Дробышевский, Владимир :
> Hi, Matteo!
>
> Yes, I'm using a mixed cluster in production, but it's pretty small at the
> moment. I made a small step-by-step manual for myself when I did this for
> the first time and have now put it up as a gist:
> https://gist.github.com/vheathen/cf2203aeb53e33e3f80c8c64a02263bc#file-manual-txt.
> Probably it will be useful.
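The gist itself isn't reproduced here, but for context, the usual CLI-only sequence for
that kind of SSD/HDD split looks roughly like the following on a pre-Luminous cluster
(bucket, host and pool names are made up; from Luminous onwards the pool setting is
crush_rule instead of crush_ruleset):

    # separate roots for the two device types
    ceph osd crush add-bucket ssd root
    ceph osd crush add-bucket hdd root

    # a host bucket per node under each root, then place the OSDs
    ceph osd crush add-bucket node1-ssd host
    ceph osd crush move node1-ssd root=ssd
    ceph osd crush set osd.0 1.0 root=ssd host=node1-ssd

    # one replicated rule per root, then point the pools at them
    ceph osd crush rule create-simple ssd_rule ssd host
    ceph osd crush rule create-simple hdd_rule hdd host
    ceph osd crush rule dump            # note the rule ids
    ceph osd pool set cephfs_metadata crush_ruleset <ssd rule id>
    ceph osd pool set cephfs_data crush_ruleset <hdd rule id>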
Hi all,

Does anyone run a production cluster with a modified CRUSH map that creates two
pools, one belonging to HDDs and one to SSDs?

What’s the best method? Modifying the CRUSH map via the ceph CLI or via a text editor?
Will the modifications to the CRUSH map be persistent across reboots and
maintenance?
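For reference, both routes end up in the same place: the CRUSH map is part of the
cluster map kept by the monitors, so either kind of change survives reboots. The one
thing that can silently undo a hand-built layout is OSDs re-registering under their
default host bucket on start, which is what the 'osd crush update on start = false'
option mentioned above prevents. A rough sketch of the text editor route (file names
are arbitrary):

    # export and decompile the current CRUSH map
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

    # edit crushmap.txt (add roots, host buckets, rules), then recompile and inject it
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new

The CLI route ("ceph osd crush ..." commands, as in the sketch above) makes the same
edits one step at a time without a decompile/recompile cycle.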