On 06/28/2015 05:43 PM, Zhongyue Luo wrote:
About the Ceph binary: this framework pulls a Docker container to run
Ceph, so in that case you would custom-build the binaries and put them in
a Docker image. Our observation is that CephFS is very unstable, so we
are developing a file system called RGWFS, which is an HCFS built on top
of RGW.
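
RGWFS isn't public yet, but to give a rough sketch of the idea (an HCFS
layered on RGW's S3-compatible API): Hadoop's stock S3A connector can
already be pointed at an RGW endpoint in much the same way. The endpoint,
bucket, and credentials below are placeholders, and hadoop-aws must be on
the classpath:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RgwOverS3aSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder RGW endpoint and credentials -- substitute your own.
        conf.set("fs.s3a.endpoint", "http://rgw.example.com:7480");
        conf.set("fs.s3a.access.key", "ACCESS_KEY");
        conf.set("fs.s3a.secret.key", "SECRET_KEY");

        // Any HCFS is reached the same way: a URI scheme mapped to a
        // FileSystem implementation. RGWFS would register its own scheme;
        // "some-bucket" is a placeholder bucket name.
        FileSystem fs = FileSystem.get(URI.create("s3a://some-bucket/"), conf);
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}

A real RGWFS would subclass org.apache.hadoop.fs.FileSystem and register
its own URI scheme, but from the application side the wiring looks the
same.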


Interesting strategy. We currently run Ceph 0.94.2 on btrfs on each
system, and then install Mesos, Spark, and the other frameworks. The only
problem I'm seeing right now is too many writes initiated by both Ceph
and btrfs (presumably the FileStore journal's double-write compounding
with btrfs copy-on-write). I believe that as both mature this will be
resolved via more advanced configuration options and tools; I have not
had the time to drill down into this penalty (the duplicative writes)
yet. It's a small cluster (3-5 slaves).

Our goal is a single compute engine for a singular 'Big Science' problem,
where the cluster scales and memory is managed at the lowest level
possible (a bare-metal strategy). Our problems are intensely memory-bound.


Do keep us informed on your progress. Gentoo affords fine-grained control
over software compilation, and tuning the various kernels we are testing
is also an integral part of robustly tuning CephFS and btrfs.

James




I'll address your comments and spell out more details of our plan in our
README.md.

Thanks!

On Mon, Jun 29, 2015 at 2:31 AM, CCAAT <[email protected]> wrote:

    Hello Zhongyue Luo,


    Well this is very interesting.

    Are you replacing, or do you intend to replace, HDFS with CephFS?
    That is, is CephFS the distributed file system upon which
    Mesos and the frameworks run?

    Please clarify exactly what your plans are, and which architectures
    and platforms you intend to support.

    Regardless, this is great news...



    Also, on Gentoo I have these USE-flag options for Ceph:

    {babeltrace cryptopp debug fuse gtk +libaio libatomic lttng +nss
    radosgw static-libs tcmalloc xfs zfs}

    Which options do you currently use/support, and what are your
    long-range plans for Ceph(FS)?

    Ceph supports RDMA. Do you have plans in your Ceph projects to
    support RDMA?


    James






    On 06/28/2015 10:31 AM, Zhongyue Luo wrote:

        Hi list,

        My colleagues and I developed a framework called ceph-mesos. As
        you've probably guessed from its name, the framework is for scaling
        Ceph clusters on Mesos.

        Check out the code on Github
        <https://github.com/Intel-bigdata/ceph-mesos>

        We've just announced this at the 2nd Beijing MUG meetup. Here is the
        link to the presentation
        <https://docs.google.com/presentation/d/1AzcOD9Aug6BrWevdpHXgyXlcHUzVq2_MMRNvutLelJY/edit?usp=sharing>.

        There is also a demo video
        <http://v.youku.com/v_show/id_XMTI3MjMxNTU5Ng==.html>. The audio is
        in Chinese, but you won't have a problem following along even on
        mute.

        Thanks.


        --
        *Intel SSG/STO/BDT*
        880 Zixing Road, Zizhu Science Park, Minhang District, 200241,
        Shanghai,
        China
        +862161166500





--
*Intel SSG/STO/BDT*
880 Zixing Road, Zizhu Science Park, Minhang District, 200241, Shanghai,
China
+862161166500
