https://bugs.launchpad.net/glance/+bug/1415679
--
Regards
Zeeshan Ali Shah
System Administrator - PDC HPC
PhD researcher (IT security)
Kungliga Tekniska Hogskolan
+46 8 790 9115
http://www.pdc.kth.se/members/zashah
Bandwidth might not be much of an issue, but latency certainly will be.
Although bandwidth during a rebalance of data might also be problematic...
Cheers,
Robert van Leeuwen
On Thu, Jan 8, 2015 at 5:46 AM, Zeeshan Ali Shah zas...@pdc.kth.se wrote:
I just finished configuring Ceph up to 100 TB with OpenStack... Since we
are also using Lustre on our HPC machines, I am wondering what the
bottleneck is for Ceph when going to petabyte scale like Lustre.
Any idea? Or has someone tried it?
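
On Robert's point about rebalance traffic: recovery and backfill can be throttled so that a rebalance does not starve client I/O. A rough sketch of conservative ceph.conf settings (the values below are only a starting point and need tuning per cluster):

[osd]
# fewer concurrent backfill operations per OSD
osd max backfills = 1
# fewer concurrent recovery requests per OSD
osd recovery max active = 1
# favour client ops over recovery ops
osd recovery op priority = 1
osd client op priority = 63
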
...and the rest of the disks are on a separate host, so should I continue like:
[osd.15]
host = server2
...
Any hint? How do OSDs find each other if they are on the same host, is it via ports?
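
As far as I understand, yes, you just keep adding one [osd.N] section per daemon; OSDs on the same host are told apart by their ID and data directory, and each daemon grabs its own ports automatically from the 6800-7300 range, so nothing port-related has to be set by hand. A minimal sketch (the IDs and the default data path are only examples):

[osd.15]
host = server2
[osd.16]
host = server2
[osd.17]
host = server2
# each daemon uses its own data directory, by default
# /var/lib/ceph/osd/ceph-$id, and binds free ports in
# the 6800-7300 range at start-up
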
/binary-i386/Packages 403 Forbidden
The source list is:
deb http://ceph.com/debian-giant/ trusty main
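
If the 403 only hits the .../binary-i386/Packages index, one possible cause is that apt is also requesting the i386 package list, which the giant repository may not publish. A workaround to try (assuming an amd64 host) is to restrict the source line to one architecture:

deb [arch=amd64] http://ceph.com/debian-giant/ trusty main
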
Is it possible to have a shared RBD, i.e. to build a shared, NFS-like
system, but on Ceph?
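
An RBD image is a plain block device, so mounting the same image read-write from several clients with a normal filesystem (ext4/xfs) would corrupt it. Sharing needs either CephFS, a cluster filesystem (OCFS2/GFS2) on top of the RBD, or one gateway host that maps the image and re-exports it over NFS. A rough sketch of the NFS-gateway variant (image name, size, pool and export path are just placeholders):

# on the gateway host
rbd create shared-img --pool rbd --size 102400
rbd map rbd/shared-img
mkfs.xfs /dev/rbd/rbd/shared-img
mkdir -p /export/shared
mount /dev/rbd/rbd/shared-img /export/shared
# then in /etc/exports on the same host:
/export/shared *(rw,sync,no_root_squash)
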
Thanks Gallati, so that means we do not need to have a shared RBD for live
migration?
On Tue, Dec 23, 2014 at 11:47 AM, René Gallati c...@gallati.net wrote:
Hello,
On 23.12.2014 09:12, Zeeshan Ali Shah wrote:
Has anyone tried running instances over Ceph, i.e. using Ceph as the backend
for VMs?
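
When the instance disks themselves live in RBD, live migration does not need any shared filesystem on the compute nodes, because every hypervisor reaches the same image through librbd. A sketch of the relevant nova.conf bits (Juno-era option names; the pool, user and secret UUID are placeholders that have to match your own setup):

[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <uuid of the libvirt secret>
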
Hi,
We have multiple disks (12) in a single host. Is it possible to run
multiple OSDs on a single host and attach each OSD to a single disk?
I assume the OSD daemon listens on a particular port, which would have to be
changed in the above case.
Any suggestions?
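
One OSD per disk is the usual layout, and the ports do not have to be changed by hand, since each ceph-osd daemon picks its own ports when it starts. A sketch with ceph-deploy, assuming the host is called server1 and the data disks are sdb through sdm (all names are placeholders):

# one OSD per raw disk, journal co-located on the same disk
ceph-deploy osd create server1:sdb server1:sdc server1:sdd
# ...and so on for the remaining disks up to server1:sdm
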