Thanks all for the suggestions.

Our storage nodes have plenty of RAM and their only purpose is to host the
OSD daemons, so we will not create a swap partition on provisioning.
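For anyone who does keep swap on an OSD node, Robert's advice below about tuning swappiness can be sketched roughly like this (the value 1 and the sysctl.d file name are assumptions for the example, not a Ceph recommendation):

```shell
# Check whether any swap is active (no output means none)
swapon --show

# If swap must stay, make the kernel less eager to swap out
# long-running daemons such as ceph-osd
# (value 1 is an assumption; tune for your workload)
sysctl vm.swappiness=1

# Persist the setting across reboots
# (file name 99-ceph.conf is hypothetical)
echo 'vm.swappiness = 1' >> /etc/sysctl.d/99-ceph.conf
```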

For the OS disk we will then use software RAID 1 to handle eventual
disk failures. For provisioning the hosts we use Kickstart and then Ansible
to install and prepare them so they are ready for ceph-deploy.
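As an illustration, the Kickstart partitioning section for a mirrored OS disk with no swap might look something like this (disk names sda/sdb and the sizes are assumptions for the sketch, not our actual configuration):

```
# Hypothetical Kickstart excerpt: mirror the OS disk, no swap partition
part raid.01 --size=1024 --ondisk=sda --asprimary
part raid.02 --size=1024 --ondisk=sdb --asprimary
part raid.11 --size=1 --grow --ondisk=sda
part raid.12 --size=1 --grow --ondisk=sdb
raid /boot --level=RAID1 --device=md0 raid.01 raid.02
raid /     --level=RAID1 --device=md1 raid.11 raid.12
```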

Thanks to everyone; your opinions and suggestions helped a lot.

Best,
Martin

On Sat, Sep 19, 2015 at 6:14 AM, Robert LeBlanc <[email protected]>
wrote:

>
> If you decide to use swap, be warned that significant parts of the OSD
> code can be swapped out even without memory pressure. This has caused
> OSD processes to take 5 minutes to shut down in my experience. I would
> recommend tuning swappiness in this case. My strongest recommendation
> is not to have swap if it is a pure OSD node.
> - ----------------
> Robert LeBlanc
> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
>
>
> On Fri, Sep 18, 2015 at 8:15 PM, 张冬卯  wrote:
> > Yes, a RAID 1 system disk is necessary, in my opinion.
> >
> > And a swap partition is still needed, even if the amount of memory is large.
> >
> > Martin Palma wrote on September 18, 2015 at 11:07 PM:
> >
> > Hi,
> >
> > Is it a good idea to use a software raid for the system disk (Operating
> > System) on a Ceph storage node? I mean only for the OS not for the OSD
> > disks.
> >
> > And what about a swap partition? Is that needed?
> >
> > Best,
> > Martin
> >
> > _______________________________________________
> > ceph-users mailing list
> > [email protected]
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
>
>
