So is there any other alternative for an over-the-WAN deployment? I have a
use case connecting two Swedish universities (a few hundred km apart).
The target is that a user from university A can write to the cluster at
university B and read the data from other users.
/Zee
On Tue, Jan 13, 2015 at 7:41 AM, Gregory Farnum greg@... wrote:
Ceph isn't really suited for WAN-style distribution. Some users have
high-enough and consistent-enough bandwidth (with low enough latency)
to do it, but otherwise you probably want to use Ceph within the data
centers and layer something else on top of it.
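As a rough illustration of the latency point above (my own back-of-the-envelope sketch, not from the thread; the 400 km distance and 1 ms local commit time are assumptions): because RADOS acknowledges a write only after all replicas are durable, a synchronous replica across a WAN link puts a hard floor under every client write.

```python
# Illustrative math only: lower bound on synchronous write latency
# when one replica sits hundreds of km away.

FIBER_KM_PER_MS = 200  # light travels ~200 km/ms in fiber (~2/3 c)

def min_write_latency_ms(distance_km, local_latency_ms=1.0):
    """Floor on write latency: local commit plus WAN round trip."""
    rtt_ms = 2 * distance_km / FIBER_KM_PER_MS
    return local_latency_ms + rtt_ms

def serialized_iops_ceiling(latency_ms):
    """Max queue-depth-1 write IOPS at the given latency."""
    return 1000.0 / latency_ms

lat = min_write_latency_ms(400)  # two sites ~400 km apart
print(f"{lat:.1f} ms floor -> {serialized_iops_ceiling(lat):.0f} IOPS ceiling")
# prints "5.0 ms floor -> 200 IOPS ceiling"
```

That ceiling applies per client stream; deep queues hide some of it, but latency-sensitive workloads (metadata, small sync writes) feel the full round trip.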
Thanks James, I will look into it
Zeeshan
On Tue, Jan 13, 2015 at 2:00 PM, James wirel...@tampabay.rr.com wrote:
Gregory Farnum greg@... writes:
Ceph isn't really suited for WAN-style distribution. Some users have
high-enough and consistent-enough bandwidth (with low enough latency)
So is there any other alternative for an over-the-WAN deployment?
I have a use case connecting two Swedish universities (a few hundred km apart).
The target is that a user from university A can write to the cluster at
university B and read the data from other users.
You could have a look at OpenStack Swift: it
On Mon, Jan 12, 2015 at 3:55 AM, Zeeshan Ali Shah zas...@pdc.kth.se wrote:
Thanks Greg. No, I am more into a large-scale RADOS system, not the filesystem.
However, for geographically distributed data centres, especially when the
network fluctuates, how do we handle that? From what I read, it seems Ceph
needs a big network pipe.
/Zee
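The "big pipe" worry can be sized roughly. This is my own illustrative arithmetic, not a figure from the thread (the link speed and 70% utilization are assumptions): re-replicating a 100 TB pool across a WAN link takes over a day even on a dedicated 10 Gb/s line, and that recovery traffic competes with client I/O.

```python
# Illustrative sizing: hours to move a dataset across a WAN link,
# ignoring protocol overhead and contention.

def backfill_hours(data_tb, link_gbps, utilization=0.7):
    """Hours to transfer data_tb over a link_gbps link at a given utilization."""
    bits = data_tb * 8e12                      # TB (decimal) -> bits
    seconds = bits / (link_gbps * 1e9 * utilization)
    return seconds / 3600

print(f"{backfill_hours(100, 10):.1f} h")  # ~31.7 h for 100 TB at 10 Gb/s
```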
On Fri, Jan 9, 2015 at 7:15 PM, Gregory Farnum g...@gregs42.com wrote:
However, for geographically distributed data centres, especially when the
network fluctuates, how do we handle that? From what I read, it seems Ceph
needs a big network pipe.
Ceph isn't really suited for WAN-style distribution. Some users have
high-enough and consistent-enough bandwidth (with low enough latency)
to do it, but otherwise you probably want to use Ceph within the data
centers and layer something else on top of it.
On Thu, Jan 8, 2015 at 5:46 AM, Zeeshan Ali Shah zas...@pdc.kth.se wrote:
I just finished configuring Ceph up to 100 TB with OpenStack ... Since we
are also using Lustre on our HPC machines, I am just wondering what the
bottleneck is for Ceph going to petabyte scale like Lustre.
Any idea? Or has someone tried it?
--
Regards
Zeeshan Ali Shah
System Administrator - PDC HPC