Re: [openstack-dev] [cinder] [nova] Consistency groups?

2014-11-17 Thread Philipp Marek
Hello Xing,

> Do you have a libvirt volume driver on the Nova side for DRBD?
No, we don't. We'd just use the existing DRBD 9 kernel module to
provide the local block devices.


> Regarding getting consistency group information to the Nova nodes, can
> you help me understand the steps you need to go through?
>
> 1. Create a consistency group
> 2. Create a volume and add volume to the group
>    Repeat the above step until all volumes are created and added to the
>    group
> 3. Attach volume in the group
> 4. Create a snapshot of the consistency group
The question I'm asking right now isn't about snapshots.

> Do you setup the volume on the Nova side at step 3?  We currently don't
> have a group level API that setup all volumes in a group.  Is it
> possible for you to detect whether a volume is in a group or not when
> attaching one volume and setup all volumes in the same group?
Well, our Cinder driver passes some information to the Nova nodes; within 
that information block we can pass the consistency group (which will be the 
DRBD resource name) as well, to detect that case.

> Otherwise, it sounds like we need to add a group level API for this
> purpose.
Perhaps just adding a "volume is in consistency group X" data item would be 
enough, too?
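To make that concrete: below is a minimal sketch of how such a data item could travel inside the connection info a Cinder driver hands to Nova. The "consistency_group" key, the helper name, and the field layout are hypothetical illustrations for this discussion, not an existing Cinder convention.

```python
# Hypothetical shape of the connection info a DRBD Cinder driver could
# return; only the driver_volume_type/data split mirrors the usual
# Cinder convention, the 'consistency_group' key is an assumption.
def build_connection_info(volume_id, resource_name, group_id=None):
    data = {
        "volume_id": volume_id,
        "resource": resource_name,       # DRBD resource name
    }
    if group_id is not None:
        # lets the Nova side detect that this volume belongs to a group
        data["consistency_group"] = group_id
    return {"driver_volume_type": "drbd", "data": data}

info = build_connection_info("vol-1", "res-a", group_id="cg-42")
print(info["data"]["consistency_group"])  # → cg-42
```

On the Nova side, the presence or absence of that key would be enough to decide whether group-wide setup is needed.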


Sorry about being so vague; I'm just not familiar enough with all the 
interdependencies from Cinder to Nova.


Regards,

Phil


-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Consistency groups?

2014-11-14 Thread yang, xing
Hi Phil,

Do you have a libvirt volume driver on the Nova side for DRBD?  Regarding 
getting consistency group information to the Nova nodes, can you help me 
understand the steps you need to go through?

1. Create a consistency group
2. Create a volume and add volume to the group
   Repeat the above step until all volumes are created and added to the group
3. Attach volume in the group
4. Create a snapshot of the consistency group

Do you set up the volume on the Nova side at step 3?  We currently don't have 
a group-level API that sets up all volumes in a group.  Is it possible for you 
to detect whether a volume is in a group when attaching one volume, and then 
set up all volumes in the same group?  Otherwise, it sounds like we need to 
add a group-level API for this purpose.
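The attach-time detection described above could be sketched like this on the Nova side. The GroupAttacher class and the group index are stand-ins for what Nova would fetch from Cinder; the only assumption is that a volume's group membership is known when it is attached.

```python
# Sketch: when attaching one volume, detect its group and set up every
# member volume exactly once; later attach calls for volumes that are
# already set up become no-ops.
class GroupAttacher:
    def __init__(self, volumes_by_group):
        self._by_group = volumes_by_group   # group_id -> [volume_id, ...]
        self._ready = set()                 # volumes already set up

    def attach(self, volume_id, group_id=None):
        if volume_id in self._ready:
            return []                       # group was set up earlier
        if group_id:
            members = self._by_group.get(group_id, [volume_id])
        else:
            members = [volume_id]           # ungrouped volume
        newly = [v for v in members if v not in self._ready]
        self._ready.update(newly)
        return newly                        # volumes set up by this call

a = GroupAttacher({"cg-1": ["vol-a", "vol-b", "vol-c"]})
print(a.attach("vol-a", "cg-1"))   # → ['vol-a', 'vol-b', 'vol-c']
print(a.attach("vol-b", "cg-1"))   # → []
```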


Thanks,

Xing



-----Original Message-----
From: Philipp Marek [mailto:philipp.ma...@linbit.com] 
Sent: Friday, November 14, 2014 2:58 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [cinder] [nova] Consistency groups?

Hi,

I'm working on the DRBD Cinder driver, and am looking at the Nova side, too. Is 
there any idea how Cinder's consistency groups should be used on the Nova nodes?

DRBD has easy support for consistency groups (a DRBD resource is a collection 
of DRBD volumes that share a single, serialized connection) and so can 
guarantee write consistency across multiple volumes. 

[ Which does make sense anyway; eg. with multiple iSCSI
  connections one could break down because of STP or
  other packet loss, and then the storage backend and/or
  snapshots/backups/etc. wouldn't be consistent anymore.]


What I'm missing now is a way to get the consistency group information to the 
Nova nodes. I can easily put such a piece of data into the transmitted 
transport information (along with the storage nodes' IP addresses etc.) and use 
it on the Nova side; but that also means that on the Nova side there'll be 
several calls to establish the connection, and several for tear down - and (to 
exactly adhere to the API contract) I'd have to make sure that each individual 
volume is set up (and closed) in exactly that order again.

That means quite a few unnecessary external calls, and so on.


Is there some idea, proposal, etc., that says that
   *within a consistency group*
all volumes *have* to be set up and shut down 
   *as a single logical operation*?
[ well, there is one now ;]


Because in that case all volume transport information can (optionally) be 
transmitted in a single data block, with several iSCSI/DRBD/whatever volumes 
being set up in a single operation; and later calls (for the other volumes in 
the same group) can be simply ignored as long as they have the same transport 
information block in them.
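A minimal sketch of that "ignore later calls carrying the same transport block" behaviour follows; the class and method names are hypothetical, and the actual volume setup is elided to a comment.

```python
# Sketch: the first connect call for a group does all the work;
# subsequent calls with an identical transport block are ignored.
import json

class GroupConnector:
    def __init__(self):
        self._seen = {}                     # group_id -> serialized block

    def connect(self, group_id, transport_block):
        key = json.dumps(transport_block, sort_keys=True)
        if self._seen.get(group_id) == key:
            return "ignored"                # same block, already set up
        self._seen[group_id] = key
        # real code would bring up all DRBD/iSCSI volumes here,
        # in a single operation
        return "connected"

c = GroupConnector()
block = {"hosts": ["10.0.0.1"], "volumes": ["vol-a", "vol-b"]}
print(c.connect("cg-1", block))   # → connected
print(c.connect("cg-1", block))   # → ignored
```

A changed transport block (e.g. after a group was modified) would fall through the cache check and trigger a fresh setup.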


Thank you for all pointers to existing proposals, ideas, opinions, etc.


Phil

--
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.



[openstack-dev] [cinder] [nova] Consistency groups?

2014-11-13 Thread Philipp Marek
Hi,

I'm working on the DRBD Cinder driver, and am looking at the Nova side, 
too. Is there any idea how Cinder's consistency groups should be used on 
the Nova nodes?

DRBD has easy support for consistency groups (a DRBD resource is 
a collection of DRBD volumes that share a single, serialized connection) 
and so can guarantee write consistency across multiple volumes. 

[ Which does make sense anyway; eg. with multiple iSCSI
  connections one could break down because of STP or
  other packet loss, and then the storage backend and/or
  snapshots/backups/etc. wouldn't be consistent anymore.]


What I'm missing now is a way to get the consistency group information 
to the Nova nodes. I can easily put such a piece of data into the 
transmitted transport information (along with the storage nodes' IP 
addresses etc.) and use it on the Nova side; but that also means that
on the Nova side there'll be several calls to establish the connection,
and several for tear down - and (to exactly adhere to the API contract)
I'd have to make sure that each individual volume is set up (and closed)
in exactly that order again.

That means quite a few unnecessary external calls, and so on.


Is there some idea, proposal, etc., that says that
   *within a consistency group*
all volumes *have* to be set up and shut down 
   *as a single logical operation*?
[ well, there is one now ;]


Because in that case all volume transport information can (optionally) be 
transmitted in a single data block, with several iSCSI/DRBD/whatever
volumes being set up in a single operation; and later calls (for the other 
volumes in the same group) can be simply ignored as long as they have the
same transport information block in them.


Thank you for all pointers to existing proposals, ideas, opinions, etc.


Phil

-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
