On Thu, Jan 8, 2015 at 5:46 AM, Zeeshan Ali Shah <zas...@pdc.kth.se> wrote:
> I just finished configuring Ceph up to 100 TB with OpenStack ... Since we
> are also using Lustre on our HPC machines, I'm wondering what the
> bottleneck is for Ceph going to petabyte scale like Lustre.
>
> Any ideas? Or has someone tried it?

If you're talking about people building a petabyte Ceph system, there
are *many* who run clusters of that size. If you're talking about the
Ceph filesystem as a replacement for Lustre at that scale, the concern
is less about the raw amount of data and more about the resiliency of
the current code base at that size... but if you want to try it out and
tell us what problems you run into, we will love you forever. ;)
(The scalable file system use case is what actually spawned the Ceph
project, so in theory there shouldn't be any serious scaling
bottlenecks. In practice it will depend on how much metadata
throughput you need, because multi-MDS support is improving but is
still less stable than running a single active MDS.)
-Greg
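
As a rough illustration of the "stick to a single active MDS until multi-MDS matures" point above, here is a minimal sketch, assuming the `ceph` CLI and an admin keyring are available on the node, a filesystem named "cephfs" (hypothetical), and a release new enough to have the `ceph fs set ... max_mds` and `ceph fs dump` subcommands; the JSON layout of the dump can vary between releases.

#!/usr/bin/env python
# Minimal sketch: cap CephFS at one active MDS and report MDS state.
# Assumes the `ceph` CLI is installed, the admin keyring is readable,
# and `ceph fs set <name> max_mds` exists on this release.
import json
import subprocess

FS_NAME = "cephfs"  # hypothetical filesystem name


def ceph(*args):
    """Run a ceph CLI command and return its stdout as text."""
    return subprocess.check_output(("ceph",) + args).decode()


def main():
    # Show the current MDS map summary (e.g. "1/1/1 up {0=a=up:active}").
    print(ceph("mds", "stat").strip())

    # Cap the filesystem at one active MDS; standbys remain for failover.
    ceph("fs", "set", FS_NAME, "max_mds", "1")

    # Confirm via the filesystem dump (JSON field names may differ by release).
    dump = json.loads(ceph("fs", "dump", "--format=json"))
    for fs in dump.get("filesystems", []):
        mdsmap = fs["mdsmap"]
        print(mdsmap["fs_name"], "max_mds =", mdsmap["max_mds"])


if __name__ == "__main__":
    main()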
