Re: [openstack-dev] [Cinder] Questions about proposed volume replication patch

2014-02-23 Thread Avishay Traeger
Hi Bruce,

Bruce Montague bruce_monta...@symantec.com wrote on 02/23/2014 03:35:10 AM:
 Hi, regarding the proposed Cinder volume replication patch,  
 https://review.openstack.org/#/c/64026  :
 
 The replication driver methods are create_replica(), swap_replica(),
 delete_replica(),
 replication_status_check(), enable_replica(), and disable_replica().
 
 What are the expected semantics of the enable and disable methods? In
 enable_vol_replication() it looks like the intent is that replicas are
 created by create, then started by enable (and vice versa for
 disable/delete).

One of the challenges in the replication design was creating a driver API 
that would work for all backends.  One way of doing so was to allow the 
driver to execute on both sides of the replication.  So when creating a 
replicated volume we have:
1. primary backend: create_volume
2. secondary backend: create_replica
3. primary backend: enable_replica

When deleting a replicated volume we have the opposite:
1. primary backend: disable_replica
2. secondary backend: delete_replica
3. primary backend: delete_volume
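
To make that concrete, here is a rough sketch of those two sequences from 
the caller's side. The function names and signatures are illustrative 
only, not the actual manager code in the patch; 'primary' and 'secondary' 
stand for the driver instances of the two backends:

    # Illustrative sketch of the call ordering described above.
    def create_replicated_volume(primary, secondary, volume):
        primary.create_volume(volume)               # 1. primary backend
        replica = secondary.create_replica(volume)  # 2. secondary backend
        primary.enable_replica(volume, replica)     # 3. primary backend
        return replica

    def delete_replicated_volume(primary, secondary, volume, replica):
        primary.disable_replica(volume, replica)    # 1. primary backend
        secondary.delete_replica(replica)           # 2. secondary backend
        primary.delete_volume(volume)               # 3. primary backend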

The goal here is to be flexible and allow all drivers to implement 
replication.  If you look at the patch for IBM Storwize/SVC replication (
https://review.openstack.org/#/c/70792/) you'll see two main replication 
modes supported in replication.py.  The first (starting on line 58) simply 
requires making a copy of the volume in the proper pool, and so only 
create_replica and delete_replica are implemented there.  The second 
method (starting on line 118) implements all of the functions: 
create_replica creates a second volume, and enable_replica creates a 
replication relationship between the two volumes.
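
To sketch that difference (hypothetical skeletons, not the actual code 
from that review), the two modes boil down to something like:

    # Mode 1: the copy in the secondary pool is itself the replica, so
    # there is nothing extra for enable/disable to do.
    class CopyBasedReplication(object):
        def create_replica(self, volume):
            # Create the copy of the volume in the proper pool;
            # replication is effectively active once the copy exists.
            pass

        def delete_replica(self, volume):
            # Remove that copy.
            pass

    # Mode 2: the second volume and the replication relationship are
    # handled in separate steps.
    class RelationshipBasedReplication(object):
        def create_replica(self, volume):
            # Create the second volume only.
            pass

        def enable_replica(self, volume):
            # Create the replication relationship between the two volumes.
            pass

        def disable_replica(self, volume):
            # Tear the relationship down.
            pass

        def delete_replica(self, volume):
            # Remove the second volume.
            pass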
 
 Are the plugin's enable/disable methods intended for just a one-time start
 and stop of the replication, or are they expected to be able to cleanly
 pause and resume the replication process? Is disable expected to flush
 volume contents all the way through to the replica?

As of now you can assume that create_replica and enable_replica are called 
together, and disable_replica and delete_replica are also called together, 
in those orders.  So if we call disable_replica you can assume we are 
getting rid of the replica.
 
 Another question is what is the expected usage of
 primary_replication_unit_id and secondary_replication_unit_id in the
 replication_relationships table. Are these optional? Are they the type of
 fields that could go in the driver_data field for the relationship?

Those two fields are filled in automatically - see replication_update_db()
in scheduler/driver.py. They simply hold whatever the driver returns in
'replication_unit_id', which will likely be needed by drivers to know who
the other side is. In addition, you can put whatever you like in
driver_data to implement replication for your backend.
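
For example, a driver's create_replica() might return something along 
these lines. This is purely illustrative; the exact signature and return 
format are assumptions on my part, not the contract defined by the patch:

    # Hypothetical sketch of a driver return value.
    def create_replica(volume):
        # ... set up the secondary copy on this backend ...
        return {
            # Picked up by replication_update_db() and stored as the
            # relationship's primary/secondary_replication_unit_id.
            'replication_unit_id': 'backend-b',
            # Opaque to Cinder; whatever the driver needs later to find
            # the relationship again (contents are backend-specific).
            'driver_data': '{"relationship_id": "rel-1234"}',
        }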

Thanks,
Avishay




[openstack-dev] [Cinder] Questions about proposed volume replication patch

2014-02-22 Thread Bruce Montague
Hi, regarding the proposed Cinder volume replication patch,  
https://review.openstack.org/#/c/64026  :

The replication driver methods are create_replica(), swap_replica(), 
delete_replica(),
replication_status_check(), enable_replica(), and disable_replica().

What are the expected semantics of the enable and disable methods? In
enable_vol_replication() it looks like the intent is that replicas are
created by create, then started by enable (and vice versa for
disable/delete).

Are the plugin's enable/disable methods intended for just a one-time start
and stop of the replication, or are they expected to be able to cleanly
pause and resume the replication process? Is disable expected to flush
volume contents all the way through to the replica?


Another question is what is the expected usage of primary_replication_unit_id
and secondary_replication_unit_id in the replication_relationships table.
Are these optional? Are they the type of fields that could go in the
driver_data field for the relationship?


Thanks,

 -bruce

