Great, Jon. Thanks for your reply. I am looking forward to your report.
Cheers,
Boxiang
On Tue, 23 Oct 2018 10:01:42 -0400, Jon Bernard wrote:
* melanie witt wrote:
> On Mon, 22 Oct 2018 11:45:55 +0800 (GMT+08:00), Boxiang Zhu wrote:
> > I created a new vm and a new volume with type 'ceph' [so that the volume
> > will be created on one of the two hosts; I assume the volume was created
> > on host dev@rbd-1#ceph this time]. The next step is to attach the volume
> > to the vm.
Jay and Melanie, it's my fault for letting you misunderstand the problem; I
should have described it more clearly. My problem is not migrating volumes
between two ceph clusters.
I have two clusters: one is the openstack cluster (an all-in-one env, hostname
is dev) and the other is the ceph cluster. Omit the
Boxiang,
I have not heard any discussion of extending this functionality for Ceph to
work between different Ceph clusters. I wasn't aware, however, that the
existing spec was limited to one Ceph cluster, so that is good to know.
I would recommend reaching out to Jon Bernard or Eric Harney
On Fri, 19 Oct 2018 23:21:01 +0800 (GMT+08:00), Boxiang Zhu wrote:
Hi melanie, thanks for your reply.
The version of my cinder and nova is Rocky. The scope of the cinder spec[1]
covers only migration of available volumes between two pools of the same ceph
cluster.
If the volume is in-use status[2], it will call the generic migration function.
So that as you
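To make the split described above concrete, here is a minimal sketch of the dispatch as I understand it from this thread. All names (`choose_migration_path`, the cluster arguments) are illustrative assumptions, not cinder's real API: the rbd driver handles only available volumes within one cluster, and everything else falls back to the generic host-copy path.

```python
# Hypothetical sketch of the migration dispatch discussed in the thread.
# Function and parameter names are assumptions, not cinder's actual code.

def choose_migration_path(volume_status, src_cluster, dst_cluster):
    # Per the spec scope described above, the rbd driver can migrate
    # only *available* volumes, and only between pools of one cluster.
    if volume_status == "available" and src_cluster == dst_cluster:
        return "driver-assisted (rbd)"
    # In-use volumes (and cross-cluster moves) fall back to the
    # generic migration function.
    return "generic migration"

print(choose_migration_path("available", "ceph-A", "ceph-A"))  # driver-assisted (rbd)
print(choose_migration_path("in-use", "ceph-A", "ceph-A"))     # generic migration
print(choose_migration_path("available", "ceph-A", "ceph-B"))  # generic migration
```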
On Fri, 19 Oct 2018 11:33:52 +0800 (GMT+08:00), Boxiang Zhu wrote:
Hi folks,
When I use the LVM backend to create a volume and then attach it to a vm, I
can migrate the volume (in-use) from one host to another; nova's libvirt
driver will call 'rebase' to finish it. But with the ceph backend, it raises
the exception 'Swap only supports host devices'. So now it