 >
 >> > -   Use 2 HDDs for the OS using RAID 1 (I've left 3.5TB unallocated
 >> > in case I can use it later for storage)
 >>
 >> For the OS? Why not get an enterprise SSD as the OS disk (I think some
 >> recommend it when colocating monitors, which can generate a lot of
 >> disk I/O)
 >
 >Yes, OS. I have no option to get an SSD.
 
One 240GB Samsung SSD SM863 on eBay is US$180. How much are your 2x
HDDs?

 >
 >>
 >> > -   Install CentOS 7.7
 >>
 >> Good choice
 >>
 >> > -   Use 2 vLANs, one for ceph internal usage and another for
 >> > external access. Since they have 4 network adapters, I'll try to
 >> > bond them in pairs to speed up the network (1Gb).
 >>
 >> Bad, get 10Gbit, yes really
 >
 >Again, that's not an option. We'll have to use the hardware we got.

Maybe you can try to convince Ceph development to optimize for bonding
on 1Gbit. Beware of this:
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg35474.html
Make sure you test your requirements, because Ceph adds quite some
overhead.
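
If you want a rough idea of what a single TCP stream really gets across
the bonded link (before attributing slowness to Ceph), a small sketch
like the Python below can help; the port, chunk size and duration are
arbitrary placeholders, and a proper tool like iperf is of course more
thorough. Run it with no argument on the receiving node and with the
receiver's IP on the sending node.

    #!/usr/bin/env python3
    # Rough single-stream TCP throughput check between two nodes.
    # Port/chunk/duration below are arbitrary test values, not anything
    # Ceph-specific.
    import socket
    import sys
    import time

    PORT = 5201            # arbitrary test port
    CHUNK = 1024 * 1024    # 1 MiB per send/recv
    SECONDS = 10           # how long the sender transmits

    def server():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", PORT))
        srv.listen(1)
        conn, addr = srv.accept()
        total = 0
        start = time.time()
        while True:
            data = conn.recv(CHUNK)
            if not data:
                break
            total += len(data)
        elapsed = max(time.time() - start, 1e-6)
        conn.close()
        srv.close()
        print("received %.0f MB in %.1fs = %.0f Mbit/s from %s"
              % (total / 1e6, elapsed, total * 8 / elapsed / 1e6, addr[0]))

    def sender(host):
        payload = b"\0" * CHUNK
        conn = socket.create_connection((host, PORT))
        end = time.time() + SECONDS
        while time.time() < end:
            conn.sendall(payload)
        conn.close()

    if __name__ == "__main__":
        server() if len(sys.argv) == 1 else sender(sys.argv[1])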

 >
 >>
 >> > -   I'll try to use ceph-ansible for installation. I failed to use
 >> > it in the lab, but it seems more recommended.
 >>
 >> Where did you get it from that Ansible is recommended? Ansible is a
 >> tool to help you automate deployments, but I have the impression it
 >> is mostly used as an 'I do not know how to install something, so
 >> let's use Ansible' tool.
 >
 >From reading various sites/guides for the lab.
 >
 >>
 >> > -   Install Ceph Nautilus
 >>
 >> >
 >>
 >> > -   Each server will host OSD, MON, MGR and MDS.
 >>
 >> > -   One VM for ceph-admin: This will be used to run ceph-ansible
 >> > and maybe to host some ceph services later
 >>
 >> Don't waste a vm on this?
 >
 >You think it is a waste to have a VM for this? Won't I need another
 >machine to host other ceph services?

I am not using a VM for ceph-admin. It depends on what you are going to
do and, e.g., how much memory you have / are using. The thing to beware
of is that you could get kernel deadlocks when running such tasks on OSD
nodes; using a VM prevents this. However, this all depends on the
availability of memory. I didn't encounter it, and others are also
running like this successfully, AFAIK.
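
To give a rough idea of the memory arithmetic I mean, here is a
back-of-the-envelope sketch; the per-daemon numbers are my assumptions
(roughly the Nautilus defaults, e.g. osd_memory_target of about 4 GiB),
not measurements from your hardware, so adjust them to what you actually
configure.

    # Back-of-the-envelope RAM budget for colocating Ceph daemons on one
    # node. All per-daemon figures are assumptions / rough defaults.
    GIB = 1024 ** 3

    osd_count         = 2           # OSDs on this node (assumption)
    osd_memory_target = 4 * GIB     # rough BlueStore default
    mon_usage         = 2 * GIB     # rough MON working set
    mgr_usage         = 1 * GIB     # rough MGR working set
    mds_cache         = 1 * GIB     # rough mds_cache_memory_limit
    os_and_headroom   = 4 * GIB     # OS, page cache, recovery spikes

    total = (osd_count * osd_memory_target + mon_usage + mgr_usage
             + mds_cache + os_and_headroom)
    print("plan for at least %.0f GiB of RAM on this node" % (total / GIB))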


