On 05/30/2013 07:37 AM, Martin Mailand wrote:
Hi Josh,
I am trying to use ceph with openstack (grizzly), I have a multi host setup.
I followed the instruction http://ceph.com/docs/master/rbd/rbd-openstack/.
Glance is working without a problem.
With cinder I can create and delete volumes without a problem.
with
cinder_endpoint_template=http://192.168.192.2:8776/v1/$(tenant_id)s
in your nova.conf.
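As a sketch of what that setting does (an assumption, simplified from nova's real endpoint-template handling): the $(tenant_id)s placeholder in cinder_endpoint_template is filled in with the request's tenant id before nova calls the cinder API.

```python
# Sketch only (not nova's actual code): expanding the
# cinder_endpoint_template placeholder.
def expand_endpoint(template, tenant_id):
    # nova.conf uses $(tenant_id)s because % is special to oslo.config;
    # rewrite it to ordinary %-interpolation and substitute (simplified).
    return template.replace("$(tenant_id)s", "%(tenant_id)s") % {
        "tenant_id": tenant_id,
    }

url = expand_endpoint("http://192.168.192.2:8776/v1/$(tenant_id)s", "b2f1d3")
print(url)  # http://192.168.192.2:8776/v1/b2f1d3
```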
Josh
-martin
On 30.05.2013 22:22, Martin Mailand wrote:
Hi Josh,
On 30.05.2013 21:17, Josh Durgin wrote:
It's trying to talk to the cinder api, and failing to connect at all.
Perhaps there's a firewall preventing
-e1565054a2d3 10240M 2
root@controller:~/vm_images#
-martin
On 30.05.2013 22:56, Josh Durgin wrote:
On 05/30/2013 01:50 PM, Martin Mailand wrote:
Hi Josh,
I found the problem, nova-compute tries to connect to the publicurl
(xxx.xxx.240.10) of the keystone endpoints; this ip
On 05/30/2013 02:50 PM, Martin Mailand wrote:
Hi Josh,
now everything is working, many thanks for your help, great work.
Great! I added those settings to
http://ceph.com/docs/master/rbd/rbd-openstack/ so it's easier to figure
out in the future.
-martin
On 30.05.2013 23:24, Josh Durgin wrote:
On 04/11/2013 11:08 PM, Chen Lei (陈雷) wrote:
Hi, Guys
Recently I'm testing the new version of OpenStack Grizzly. I'm using Ceph
RBD as the backend storage both for Glance and Cinder.
In the old Folsom release, when the parameter
‘show_image_direct_url=True’ was set in the config file glance-api.conf, the
Ceph has been officially production ready for block (rbd) and object
storage (radosgw) for a while. It's just the file system that isn't
ready yet:
http://ceph.com/docs/master/faq/#is-ceph-production-quality
Josh
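For reference, the show_image_direct_url setting mentioned earlier in this thread lives in glance-api.conf; a minimal, hedged fragment (the pool name and store values are illustrative, not taken from the original mails):

```ini
# glance-api.conf (illustrative fragment)
# Expose each image's direct storage URL so an RBD-backed cinder can
# create copy-on-write clones instead of full copies.
show_image_direct_url = True
default_store = rbd
rbd_store_pool = images
```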
On 01/31/2013 01:23 PM, Razique Mahroua wrote:
Speaking of which guys,
anything
On 07/23/2012 08:24 PM, Jonathan Proulx wrote:
Hi All,
I've been looking at Ceph as a storage back end. I'm running a
research cluster, and while people need to use it and want it 24x7, I
don't need as many nines as a commercial customer-facing service does,
so I think I'm OK with the current
On 07/24/2012 01:04 PM, Mark Moseley wrote:
This is more of a sanity check than anything else:
Does the RBDDriver in Diablo support live migration?
Live migration has always been possible with RBD. Management layers
like libvirt or OpenStack may have bugs that make it fail. This sounds
like
On 07/24/2012 05:10 PM, Mark Moseley wrote:
It should work, and if that workaround works, you could instead add
def check_for_export(self, context, volume_id):
    pass
I'll try that out. That's a heck of a lot cleaner, plus I just picked
that if not volume['iscsi_target'] because it was
/nova/+question/201366
- Travis
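The no-op check_for_export workaround quoted above can be sketched as a tiny driver stub (a hedged illustration, not nova's actual class hierarchy; FakeRBDDriver is a made-up name):

```python
# Hedged illustration, not nova's real code: a volume driver whose
# check_for_export is a no-op, since RBD volumes have no host-local
# iSCSI export to verify before live migration.
class FakeRBDDriver:
    """Stand-in for a nova-volume driver; the name is hypothetical."""

    def check_for_export(self, context, volume_id):
        # RBD volumes are attached over the network by the hypervisor,
        # so there is nothing exported on the volume host to check.
        pass

driver = FakeRBDDriver()
print(driver.check_for_export(None, "vol-0001"))  # prints: None
```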
On Fri, May 25, 2012 at 8:32 PM, Josh Durgin <josh.dur...@inktank.com> wrote:
On 05/25/2012 01:31 AM, Sébastien Han wrote:
Hi everyone,
I setup a ceph cluster and I use the RBD driver for nova-volume.
I can create volumes and snapshots but currently I can't attach
Hi Florian,
There's an Ubuntu bug already:
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/981130
librgw was not complete, and wasn't actually used by radosgw, so it was
dropped. The Ubuntu package just needs to be updated to remove the
dependency and rgw.py, like upstream did.
Josh
On
On 05/25/2012 01:31 AM, Sébastien Han wrote:
Hi everyone,
I setup a ceph cluster and I use the RBD driver for nova-volume.
I can create volumes and snapshots but currently I can't attach them to
an instance.
Apparently the volume is detected as busy but it's not, no matter which
name I choose.
At the last couple volumes meetings, there was some discussion of the
desired behaviors of nova's boot from volume functionality. There were
a few use cases discussed, but there are probably plenty we didn't
think of, so it would be useful to get input from a larger audience.
What use cases for
On 05/02/2011 01:46 PM, Chuck Thier wrote:
This leads to another interesting question. While our reference
implementation may not directly expose snapshot functionality, I imagine
other storage implementations could want to. I'm interested to hear what use
cases others would be interested in