Dear Gaurav,

Please respect everyone's time and the timezone differences. Flooding the
mailing list won't help.

Please see my responses below.



On 08/18/2016 01:39 AM, Gaurav Goyal wrote:
> Dear Ceph Users,
>
> Awaiting some suggestions, please!
>
>
>
> On Wed, Aug 17, 2016 at 11:15 AM, Gaurav Goyal
> <[email protected]> wrote:
>
>     Hello Mart,
>
>     Thanks a lot for the detailed information!
>     Please find my responses inline, and please help me learn more about this.
>
>
>     Ceph works best with more hardware. It is not really designed for
>     small scale setups. Of course small setups can work for a PoC or
>     testing, but I would not advise this for production.
>
>     [Gaurav] : We need this setup for PoC or testing. 
>
>     If you want to proceed however, have a good look at the manuals or
>     this mailing list archive, and invest some time in understanding the
>     logic and workings of Ceph before ordering hardware.
>
>     At least you want: 
>     - 3 monitors, preferably on dedicated servers
>     [Gaurav] : With my current setup, can I install a MON on Host 1
>     (Controller + Compute1), Host 2, and Host 3?
>
>     - Per disk you will be running a ceph-osd instance, so a host
>     with 2 disks will run 2 OSD instances. More OSD processes means
>     better performance, but also more memory and CPU usage.
>
>     [Gaurav] : Understood. That means having 4 x 1TB would be better
>     than 2 x 2TB.
>
Yes, more disks will give you more IO.
>
>
>     - By default Ceph uses a replication factor of 3 (it is possible
>     to set this to 2, but that is not advised)
>     - You cannot fill disks up to 100%, and data will not distribute
>     evenly over all disks; expect disks to be filled up to a maximum
>     of 60-70% on average. You want to add more disks once you reach
>     this limit.
>
>     All in all, a setup of 3 hosts with 2x2TB disks each will
>     result in a net usable capacity of (3 x 2 x 2TB x 0.6) / 3 = 2.4 TB.
>
>     [Gaurav] : As this is going to be a test lab environment, can we
>     change the configuration to have more capacity rather than
>     redundancy? How can we achieve it?
>

Ceph has excellent documentation, and this is easy to find: search for
"the number of replicas". You want to set both "size" and "min_size" to
1 in this case.
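
For example, a minimal sketch (the pool name "rbd" is just an example).
With size 1 the same 60-70% fill guidance gives roughly
3 x 2 x 2TB x 0.6 = ~7.2 TB usable, but note that a single failed disk
then means data loss:

    # defaults for pools created afterwards, in ceph.conf under [global]
    osd pool default size = 1
    osd pool default min size = 1

    # or change an existing pool at runtime
    ceph osd pool set rbd size 1
    ceph osd pool set rbd min_size 1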

>     If speed is required, consider SSDs (for data & journals, or
>     journals only).
>
>     In your email you mention "compute1/2/3". Please note that if you
>     use the rbd kernel driver, it can interfere with the OSD process,
>     and it is not advised to run OSDs and the kernel driver on the
>     same hardware. If you still want to do that, split it up using
>     VMs (we have a small testing cluster where we mix compute and
>     storage; there we have the OSDs running in VMs).
>
>     [Gaurav] : Within my environment, how can we split the rbd kernel
>     driver and the OSD processes? Should it be the rbd kernel driver
>     on the controller and the OSD processes on the compute hosts?
>
>     Since my Host 1 is Controller + Compute1, can you please share the
>     steps to split it up using VMs, as you suggested?
>

We are running kernel rbd on dom0 and the OSDs in domU, as well as a
monitor in domU.
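
Very roughly, the split looks like this (a sketch only; the image name
and mount point are examples). The ceph-osd and ceph-mon daemons run
inside the VMs (domU), and the hypervisor (dom0) only uses the kernel
rbd client, so the kernel client and the OSDs never share a kernel:

    # on the hypervisor (dom0), only the kernel rbd client is used
    rbd create rbd/test-image --size 10240   # 10 GB image (size is in MB)
    rbd map rbd/test-image                   # appears as e.g. /dev/rbd0
    mkfs.xfs /dev/rbd0
    mount /dev/rbd0 /mnt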

Regards,

Mart



>
>     Regards
>     Gaurav Goyal 
>
>
>     On Wed, Aug 17, 2016 at 9:28 AM, Mart van Santen
>     <[email protected]> wrote:
>
>
>         Dear Gaurav,
>
>         Ceph works best with more hardware. It is not really designed
>         for small scale setups. Of course small setups can work for a
>         PoC or testing, but I would not advise this for production.
>
>         If you want to proceed however, have a good look at the
>         manuals or this mailing list archive, and invest some time in
>         understanding the logic and workings of Ceph before ordering
>         hardware.
>
>         At least you want:
>         - 3 monitors, preferably on dedicated servers
>         - Per disk you will be running a ceph-osd instance, so a host
>         with 2 disks will run 2 OSD instances. More OSD processes means
>         better performance, but also more memory and CPU usage.
>         - By default Ceph uses a replication factor of 3 (it is
>         possible to set this to 2, but that is not advised)
>         - You cannot fill disks up to 100%, and data will not
>         distribute evenly over all disks; expect disks to be filled up
>         to a maximum of 60-70% on average. You want to add more disks
>         once you reach this limit.
>
>         All in all, a setup of 3 hosts with 2x2TB disks each will
>         result in a net usable capacity of (3 x 2 x 2TB x 0.6) / 3 = 2.4 TB.
>
>
>         If speed is required, consider SSDs (for data & journals, or
>         journals only).
>
>         In your email you mention "compute1/2/3". Please note that if
>         you use the rbd kernel driver, it can interfere with the OSD
>         process, and it is not advised to run OSDs and the kernel
>         driver on the same hardware. If you still want to do that,
>         split it up using VMs (we have a small testing cluster where
>         we mix compute and storage; there we have the OSDs running in VMs).
>
>         Hope this helps,
>
>         regards,
>
>         mart
>
>
>
>
>         On 08/17/2016 02:21 PM, Gaurav Goyal wrote:
>>
>>         Dear Ceph Users,
>>
>>         I need your help to redesign my ceph storage network.
>>
>>         As suggested in earlier discussions, I must not use SAN
>>         storage, so we have decided to remove it.
>>
>>         Now we are ordering Local HDDs.
>>
>>         My network would be:
>>
>>         Host 1 --> Controller + Compute1
>>         Host 2 --> Compute2
>>         Host 3 --> Compute3
>>
>>         Is this the right setup for a Ceph network? For Host 1 and
>>         Host 2, we are using one 500GB disk for the OS on each host.
>>
>>         Should we use the same size of storage disks (500GB x 8) for the
>>         Ceph environment, or can I order 2TB disks for the Ceph cluster?
>>
>>         Making it:
>>
>>         2TB x 2 on Host 1
>>         2TB x 2 on Host 2
>>         2TB x 2 on Host 3
>>
>>         12TB in total. Should a replication factor of 2 make it 6 TB?
>>
>>
>>
>
>         -- 
>         Mart van Santen
>         Greenhost
>         E: [email protected]
>         T: +31 20 4890444
>         W: https://greenhost.nl
>
>         A PGP signature can be attached to this e-mail,
>         you need PGP software to verify it. 
>         My public key is available in keyserver(s)
>         see: http://tinyurl.com/openpgp-manual
>
>         PGP Fingerprint: CA85 EB11 2B70 042D AF66  B29A 6437 01A1 10A3 D3A5
>
>


_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
