On Mon, Jan 12, 2015 at 3:55 AM, Zeeshan Ali Shah <zas...@pdc.kth.se> wrote:
> Thanks Greg. No, I am more interested in a large-scale RADOS system, not the filesystem.
>
> However, how do you handle geographically distributed datacentres,
> especially when the network fluctuates? From what I have read, it seems
> Ceph needs a big network pipe.

Ceph isn't really suited for WAN-style distribution. Some users have
high-enough and consistent-enough bandwidth (with low enough latency)
to do it, but otherwise you probably want to use Ceph within the data
centers and layer something else on top of it.
-Greg

>
> /Zee
>
> On Fri, Jan 9, 2015 at 7:15 PM, Gregory Farnum <g...@gregs42.com> wrote:
>>
>> On Thu, Jan 8, 2015 at 5:46 AM, Zeeshan Ali Shah <zas...@pdc.kth.se>
>> wrote:
>> > I just finished configuring Ceph up to 100 TB with OpenStack ... Since
>> > we are also using Lustre on our HPC machines, I am wondering what the
>> > bottleneck is for Ceph at petabyte scale like Lustre.
>> >
>> > Any ideas? Has someone tried it?
>>
>> If you're talking about people building a petabyte Ceph system, there
>> are *many* who run clusters of that size. If you're talking about the
>> Ceph filesystem as a replacement for Lustre at that scale, the concern
>> is less about the raw amount of data and more about the resiliency of
>> the current code base at that size... but if you want to try it out and
>> tell us what problems you run into, we will love you forever. ;)
>> (The scalable file system use case is what actually spawned the Ceph
>> project, so in theory there shouldn't be any serious scaling
>> bottlenecks. In practice it will depend on what kind of metadata
>> throughput you need because the multi-MDS stuff is improving but still
>> less stable.)
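>> If you do experiment with multiple active metadata servers, the relevant
>> knob is the maximum number of active MDS daemons; a rough sketch, assuming
>> a recent release and a filesystem named "cephfs" (exact commands differ
>> between versions):
>>
>>     ceph mds stat                    # show MDS daemons and their states
>>     ceph fs set cephfs max_mds 2     # allow two active MDS daemons
>>
>> Until that code settles down, a single active MDS (max_mds 1) is the safe
>> configuration.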
>> -Greg
>
>
>
>
> --
>
> Regards
>
> Zeeshan Ali Shah
> System Administrator - PDC HPC
> PhD researcher (IT security)
> Kungliga Tekniska Hogskolan
> +46 8 790 9115
> http://www.pdc.kth.se/members/zashah
