On 07/07/2014 04:09 PM, Lazuardi Nasution wrote:
> Is there any calculation of the required memory of MDS nodes related to
> the OSD nodes' total capacity?

MDS memory usage depends on the number of inodes it caches. By default this is 100k, but I'm not sure how many (kilo)bytes it uses for a single inode.
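
For what it's worth, the cache limit is controlled by the "mds cache size"
option; a minimal ceph.conf sketch (the value shown is simply the default,
not a tuning recommendation):

    [mds]
    # maximum number of inodes the MDS keeps in its cache (default: 100000)
    mds cache size = 100000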

But be aware, CephFS is still under heavy development, so a lot can change!

Wido


    Date: Sun, 06 Jul 2014 15:00:14 +0200
    From: Wido den Hollander <[email protected]>
    To: [email protected]
    Subject: Re: [ceph-users] Combining MDS Nodes
    Message-ID: <[email protected]>
    Content-Type: text/plain; charset=ISO-8859-1; format=flowed

    On 07/06/2014 02:42 PM, Lazuardi Nasution wrote:
     > Hi,
     >
     > Is it possible to combine MDS and MON or OSD inside the same node?
     > Which one is better, MON with MDS or OSD with MDS?
     >

    Yes, that's not a problem. Be aware, though, that the MDS can be
    memory-hungry depending on your active data set.

    There is no golden rule for mixing daemons, however.

    Keep in mind, however, that CephFS is not yet stable!
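
    To illustrate the colocation, a minimal ceph.conf sketch that points
    several daemons at the same host (the host name "node1" and the daemon
    IDs here are made up):

        [mon.a]
        host = node1

        [osd.0]
        host = node1

        [mds.a]
        host = node1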

     > How do I configure OSD and MDS to allow two kinds of public network
     > connections, standard (1 GbE) and jumbo (10 GbE)? I want to take
     > advantage of jumbo frames for some supported clients.

    That's not possible. Ceph supports only a single public network, so
    this setup is not feasible.
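
    For reference, a sketch of the single public network definition in
    ceph.conf (the subnets below are made up). OSD replication traffic can
    be split onto a separate cluster network, but client-facing traffic
    cannot:

        [global]
        # the one network all clients and daemons use for public traffic
        public network = 192.168.0.0/24
        # optional back-side network for OSD replication and recovery
        cluster network = 10.0.0.0/24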

     >
     > Best regards,

    --
    Wido den Hollander
    42on B.V.
    Ceph trainer and consultant

    Phone: +31 (0)20 700 9902
    Skype: contact42on






--
Wido den Hollander
42on B.V.
Ceph trainer and consultant

Phone: +31 (0)20 700 9902
Skype: contact42on
