Re: [openstack-dev] [cinder] documenting volume replication

2015-02-15 Thread Ronen Kat
Hi Ruijing,

Thanks for the comments.
Re (1) - the driver can implement replication by any means it sees fit. The
replication capability can be exported and made available to the
scheduler/driver via the capabilities report or driver extra-spec prefixes.
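
As a hedged illustration of that reporting path, here is a minimal sketch;
the capability and extra-spec key names are assumptions for illustration,
not a defined Cinder contract:

    # Illustrative sketch only -- key names are assumptions, not the
    # documented Cinder replication contract.
    class ExampleReplicatedDriver(object):
        """Hypothetical driver advertising replication via capabilities."""

        def get_volume_stats(self, refresh=False):
            # Cinder drivers report capabilities through this method; the
            # scheduler's CapabilitiesFilter matches them against
            # volume-type extra specs.
            return {
                'volume_backend_name': 'example_backend',
                'storage_protocol': 'FC',
                'total_capacity_gb': 1000,
                'free_capacity_gb': 800,
                # Hypothetical capability key advertising replication.
                'replication_support': True,
            }

    # A matching volume type might carry extra specs such as (illustrative):
    #   capabilities:replication_support='<is> True'
    #   replication:mode='async'      # vendor/driver-scoped prefix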
Re (3) - I am not sure how this relates to storage-side replication; do you
refer to host-side replication?

Ronen



From: Guo, Ruijing ruijing@intel.com
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: 15/02/2015 03:41 AM
Subject: Re: [openstack-dev] [cinder] documenting volume replication




Hi, Ronen,

I don't know how to edit
https://etherpad.openstack.org/p/cinder-replication-redoc,
so I am adding some comments in this email.

1.   We may add asynchronous and synchronous replication types.
2.   We may add consistency group (CG) support for replication.
3.   We may add initialize-connection support for replication.
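
As one hedged sketch of how suggestion (1) might be surfaced to users through
volume types (the extra-spec key below is illustrative, not an agreed-upon
name):

    # Sketch only: the 'replication:mode' key is illustrative.
    from cinderclient import client

    cinder = client.Client('2', 'admin', 'password', 'demo',
                           'http://controller:5000/v2.0')  # example credentials

    # One volume type per replication mode the backend supports.
    sync_type = cinder.volume_types.create('replicated-sync')
    sync_type.set_keys({'replication:mode': 'sync'})

    async_type = cinder.volume_types.create('replicated-async')
    async_type.set_keys({'replication:mode': 'async'})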

Thanks,
-Ruijing

From: Ronen Kat [mailto:ronen...@il.ibm.com]

Sent: Tuesday, February 3, 2015 9:41 PM
To: OpenStack Development Mailing List (openstack-dev@lists.openstack.org)
Subject: [openstack-dev] [cinder] documenting volume replication

As some of you are aware, the spec for replication is not up to date.
The current developer documentation, http://docs.openstack.org/developer/cinder/api/cinder.volume.driver.html,
covers replication, but some folks indicated that it needs additional details.


In order to get the spec and documentation up to date I created an Etherpad
to be a base for the update.

The Etherpad page is at https://etherpad.openstack.org/p/cinder-replication-redoc


I would appreciate it if interested parties would take a look at the Etherpad
and add comments, details, questions and feedback.


Ronen,




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] volume replication

2015-02-15 Thread Ronen Kat
Hi Ruijing,

Are you discussing the network/fabric between Storage A and Storage B?
If so, the assumption in Cinder is that this is done in advance by the
storage administrator.
The design discussions for replication concluded that the driver is fully
responsible for replication, and it is up to the driver to implement and
manage replication on its own.
Hence, all vendor-specific setup actions, such as creating volume pools or
setting up the network on the storage side, are considered prerequisites
and are outside the scope of the Cinder flows.
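
As a hedged sketch of that division of responsibility (the method names and
vendor calls below are hypothetical, not the committed Cinder driver
interface):

    # Hypothetical driver sketch: Cinder only drives the array's own
    # replication API; pools and inter-array links are assumed to exist.
    class ExampleArrayDriver(object):
        def __init__(self, array_client):
            self._array = array_client  # vendor management API client

        def create_replica(self, volume):
            # The storage admin has already paired Storage A and B
            # (pools created, FC/iSCSI links zoned and configured).
            self._array.start_replication(volume['name'], target='storage-b')

        def promote_replica(self, volume):
            # Failover: make the copy on Storage B the primary.
            self._array.promote(volume['name'], target='storage-b')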

If someone feels that is not the case, or should not be the case, feel free
to chime in.

Or does this relate to setting up the data path for accessing both Storage A
and Storage B?
Should this be set up in advance, when we attach the primary volume to the
VM, or when promoting the replica to be primary?

-- Ronen



From: Guo, Ruijing ruijing@intel.com
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: 16/02/2015 02:29 AM
Subject: Re: [openstack-dev] [cinder] documenting volume replication




Hi, Ronen

3) I mean storage-based replication. Normally, volume replication uses FC
or iSCSI, and we need to set up the FC or iSCSI connection before we do
volume replication.

Case 1)

Host --FC-- Storage A ---iSCSI--- Storage B --FC-- Host

Case 2)

Host --FC-- Storage A ---FC--- Storage B --FC-- Host

As shown in the diagrams above, we need to set up the connection (iSCSI or
FC) between Storage A and Storage B.

For FC, we need to zone Storage A and Storage B in the FC switch.
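
A purely illustrative toy sketch of that prerequisite zoning step (the
switch object, helper name and WWPNs are made up; real deployments would use
the switch vendor's tooling):

    # Toy illustration only: function, switch object and WWPNs are fictitious.
    def zone_array_ports(switch, wwpn_a, wwpn_b):
        """Put one port of Storage A and one port of Storage B in a zone."""
        zone_name = 'repl_%s_%s' % (wwpn_a[-2:], wwpn_b[-2:])
        switch.create_zone(zone_name, members=[wwpn_a, wwpn_b])
        switch.activate()

    # zone_array_ports(switch, '50:01:43:80:12:34:56:78',
    #                  '50:01:43:80:87:65:43:21')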

Thanks,
-Ruijing





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] volume replication

2015-02-15 Thread Ronen Kat
Good question, I have:

https://etherpad.openstack.org/p/cinder-replication-redoc
https://etherpad.openstack.org/p/cinder-replication-cg
https://etherpad.openstack.org/p/volume-replication-fix-planning

Jay seems to be the champion for moving replication forward, so I will let
Jay point the way.

-- Ronen



From: Zhipeng Huang zhipengh...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: 16/02/2015 09:14 AM
Subject: Re: [openstack-dev] [cinder] volume replication




Hi Ronen,

Xingyang mentioned there's another etherpad on replication and CG; which
etherpad should we mainly follow?


[openstack-dev] [cinder] documenting volume replication

2015-02-03 Thread Ronen Kat
As some of you are aware, the spec for replication is not up to date.
The current developer documentation,
http://docs.openstack.org/developer/cinder/api/cinder.volume.driver.html,
covers replication, but some folks indicated that it needs additional details.

In order to get the spec and documentation up to date I created an Etherpad
to be a base for the update.
The Etherpad page is at https://etherpad.openstack.org/p/cinder-replication-redoc

I would appreciate it if interested parties would take a look at the
Etherpad and add comments, details, questions and feedback.

Ronen,


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Volume replication - driver support walk through

2014-07-24 Thread Ronen Kat
Hello,

The initial code for managing volume replication in Cinder is now 
available as work-in-progress - see 
https://review.openstack.org/#/c/106718
I expect to remove the work-in-progress early next week.

I would like to hold a walk-through of the replication feature for Cinder 
driver owners who are interested in implementing replication - I plan to hold 
it on Wednesday July 30 17:00 UTC, just after the Cinder meeting.
I will make available a phone call-in number and access details, as I 
don't think Google Hangouts can support enough video connections (ten to 
the best of my knowledge).
Alternative suggestions are welcome.

For those who cannot attend the 17:00 UTC walk-through (due to time zone 
issues), I can hold another one on July 31, 08:00 UTC - please let me know 
if there is interest in this time slot.

Regards,

Ronen,
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][replication-api] extra_specs too constant

2014-07-11 Thread Ronen Kat
Philipp,

Thanks for the feedback. Below is my view, and I would like to hear what 
others think.

I would typically expect the replication_partners list to be created/computed 
by the driver from the underlying replication mechanism.
I assume DRBD knows with whom it is currently enabled for replication - I 
don't think this should be kept in the Cinder DB (in the extra specs of 
the volume-type).

In the extra specs we may find replica_volume_backend_name, but I expect 
it to be a short list.

As for the case of multiple appropriate replication targets, the current 
plan is to choose the first eligible one, but we can change it to a random 
entry from the list if you think that is appropriate.

Regarding the actual replication_rpo_range and network bandwidth, I think 
the current suggestion is a reasonable first step.
Multiple considerations will of course impact the actual RPO, but I think 
this is outside the scope of this first revision - I would like to see this 
mechanism enhanced in the next revision.

Ronen,



From:   Philipp Marek philipp.ma...@linbit.com
To: openstack-dev@lists.openstack.org, 
Cc: Ronen Kat/Haifa/IBM@IBMIL
Date:   11/07/2014 04:10 PM
Subject:[openstack-dev][cinder][replication-api] extra_specs too 
constant



I think that extra_specs in the database are too static, too hard to 
change.


In the case of e.g. DRBD, where many nodes may provide some storage space, 
the replication_partners list is likely to change often, even if only newly 
added nodes need to be added[1].

This means that
  a) the admin has to add each node manually;
  b) volume_type_extra_specs:value is a VARCHAR(255), which can only hold 
  a few host names (with FQDNs, even fewer).

What if the list of hosts were matched by each one saying "I'm product XYZ, 
version compat N-M" (e.g. via get_volume_stats), and all nodes that report 
the same product with an overlapping version range were considered eligible 
for replication?
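
A minimal sketch of that matching idea, assuming (purely for illustration)
that each backend reports a 'product' name and a numeric 'compat_range'
tuple in its stats:

    # Sketch of the proposed matching; the stats keys are illustrative.
    def eligible_partners(local_stats, all_stats):
        """Return backends reporting the same product with an overlapping
        compatibility range, instead of a static replication_partners list."""
        lo, hi = local_stats['compat_range']          # e.g. (8, 9)
        partners = []
        for host, stats in all_stats.items():
            if stats.get('product') != local_stats['product']:
                continue
            other_lo, other_hi = stats['compat_range']
            if other_lo <= hi and lo <= other_hi:     # ranges overlap
                partners.append(host)
        return partners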


Furthermore, replication_rpo_range might depend on other circumstances 
too... if the network connection to the second site is heavily loaded, the 
RPO will vary, too - from a few seconds to a few hours.

So, should we announce a range of (0,7200)?


Ad 1: because Openstack sees by itself which nodes are available.


-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Block migrations and Cinder volumes

2014-06-19 Thread Ronen Kat
The use-case for block migration in Libvirt/QEMU is to allow migration 
between two different back-ends.
This is basically a host-based volume migration; ESXi has similar 
functionality (Storage vMotion), but it is probably not enabled with OpenStack.
By the way, if the Cinder volume driver can migrate the volume by itself, 
Libvirt/QEMU is not called upon, but if it can't (different vendors' boxes 
don't talk to each other), then Cinder asks Nova to help move the data...

If you are missing this host-based process you basically have data 
lock-in on a specific back-end - the use case could be storage 
evacuation, or just moving the data to a different box.
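
As a hedged sketch of what triggering that host-assisted move looks like
from the API side (client construction and the host string are placeholders;
exact keyword arguments may differ between client releases):

    # Minimal sketch of driving a volume migration ("storage evacuation").
    from cinderclient import client

    cinder = client.Client('2', 'admin', 'password', 'demo',
                           'http://controller:5000/v2.0')  # example credentials

    vol = cinder.volumes.get('VOLUME_ID')  # placeholder volume ID
    # If the driver cannot move the data itself, Cinder falls back to a
    # generic copy (with Nova's help for attached volumes).
    cinder.volumes.migrate_volume(vol, 'node2@other_backend',
                                  force_host_copy=False)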

Ronen,



From:   Daniel P. Berrange berra...@redhat.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, 
Date:   19/06/2014 11:42 AM
Subject:Re: [openstack-dev] [nova][libvirt] Block migrations and 
Cinder volumes



On Wed, Jun 18, 2014 at 11:09:33PM -0700, Rafi Khardalian wrote:
 I am concerned about how block migration functions when Cinder volumes 
are
 attached to an instance being migrated.  We noticed some unexpected
 behavior recently, whereby attached generic NFS-based volumes would 
become
 entirely unsparse over the course of a migration.  After spending some 
time
 reviewing the code paths in Nova, I'm more concerned that this was 
actually
 a minor symptom of a much more significant issue.
 
 For those unfamiliar, NFS-based volumes are simply RAW files residing on 
an
 NFS mount.  From Libvirt's perspective, these volumes look no different
 than root or ephemeral disks.  We are currently not filtering out 
volumes
 whatsoever when making the request into Libvirt to perform the 
migration.
  Libvirt simply receives an additional flag (VIR_MIGRATE_NON_SHARED_INC)
 when a block migration is requested, which applied to the entire 
migration
 process, not differentiated on a per-disk basis.  Numerous guards exist 
 within Nova to prevent a block-based migration from being allowed if the 
 instance disks exist on the destination; yet volumes remain attached and 
 within the defined XML during a block migration.
 
 Unless Libvirt has a lot more logic around this than I am lead to 
believe,
 this seems like a recipe for corruption.  It seems as though this would
 also impact any type of volume attached to an instance (iSCSI, RBD, 
etc.),
 NFS just happens to be what we were testing.  If I am wrong and someone 
can
 correct my understanding, I would really appreciate it.  Otherwise, I'm
 surprised we haven't had more reports of issues when block migrations 
are
 used in conjunction with any attached volumes.

Libvirt/QEMU has no special logic. When told to block-migrate, it will do
so for *all* disks attached to the VM in read-write-exclusive mode. It 
will
only skip those marked read-only or read-write-shared mode. Even that
distinction is somewhat dubious and so not reliably what you would want.

It seems like we should just disallow block migrate when any cinder 
volumes
are attached to the VM, since there is never any valid use case for doing
block migrate from a cinder volume to itself.
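
A hedged sketch of the kind of guard being suggested (names are illustrative,
not the actual Nova code path or exception type):

    # Illustrative pre-check: refuse block migration when Cinder volumes
    # are attached, to avoid "migrating" a shared volume onto itself.
    def check_block_migration_allowed(block_device_mappings, block_migration):
        if not block_migration:
            return
        has_cinder_volumes = any(bdm.get('connection_info')
                                 for bdm in block_device_mappings)
        if has_cinder_volumes:
            # Real Nova would raise its own migration exception here.
            raise ValueError('block migration with attached Cinder volumes '
                             'is not supported')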

Regards,
Daniel
-- 
|: http://berrange.com        -o-  http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org         -o-  http://virt-manager.org :|
|: http://autobuild.org       -o-  http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org  -o-  http://live.gnome.org/gtk-vnc :|



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Volume replication design session

2014-05-16 Thread Ronen Kat
Hello,

For those who attended the design session on volume replication, thank 
you; for those who didn't, the Etherpad with the discussion notes is 
available for your reference at 
https://etherpad.openstack.org/p/juno-cinder-volume-replication

During the session there were people who indicated that they would like to 
see more features for volume replication, so it would support additional 
scenarios.
If you are among them, and you are willing to document what is missing, 
what scenarios and use-cases are not being properly addressed, and even 
suggest how we could address them, please document that on the Etherpad.
I created a section at the end to capture all these suggestions - while we 
may not be able to address all comments, that would be a head start for 
moving beyond this first step.

Regards,
__
Ronen I. Kat, PhD
Storage Research
IBM Research - Haifa
Phone: +972.3.7689493
Email: ronen...@il.ibm.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Disaster Recovery for OpenStack - plan for Juno design summit - discussion reminder

2014-03-31 Thread Ronen Kat
For those who are interested, we will discuss the disaster recovery 
use-cases and how to proceed toward the Juno summit, on April 2 at 17:00 
UTC (1pm ET) - invitation below.
The agenda and previous discussion history are in the Etherpad link below.


Call-in: 
https://www.teleconference.att.com/servlet/glbAccess?process=1&accessCode=6406941&accessNumber=1809417783#C2
 

Passcode: 6406941 

Etherpad: 
https://etherpad.openstack.org/p/juno-disaster-recovery-call-for-stakeholders 

Wiki: https://wiki.openstack.org/wiki/DisasterRecovery


Regards,
__
Ronen I. Kat, PhD
Storage Research
IBM Research - Haifa
Phone: +972.3.7689493
Email: ronen...@il.ibm.com


invite-201404021700UTC.ics
Description: Binary data
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Disaster Recovery for OpenStack - call for stakeholder - discussion reminder

2014-03-19 Thread Ronen Kat
For those who are interested, we will discuss the disaster recovery 
use-cases and how to proceed toward the Juno summit, on March 19 at 17:00 
UTC (invitation below).



Call-in: 
https://www.teleconference.att.com/servlet/glbAccess?process=1&accessCode=6406941&accessNumber=1809417783#C2
 

Passcode: 6406941

Etherpad: 
https://etherpad.openstack.org/p/juno-disaster-recovery-call-for-stakeholders
Wiki: https://wiki.openstack.org/wiki/DisasterRecovery

Regards,
__
Ronen I. Kat, PhD
Storage Research
IBM Research - Haifa
Phone: +972.3.7689493
Email: ronen...@il.ibm.com




From:   Luohao (brian) brian.luo...@huawei.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, 
Date:   14/03/2014 03:59 AM
Subject:Re: [openstack-dev] Disaster Recovery for OpenStack - call 
for stakeholder



1.  fsfreeze with vss has been added to qemu upstream, see 
http://lists.gnu.org/archive/html/qemu-devel/2013-02/msg01963.html for 
usage.
2.  libvirt allows a client to send any commands to qemu-ga, see 
http://wiki.libvirt.org/page/Qemu_guest_agent
3.  Linux fsfreeze is not equivalent to Windows fsfreeze+VSS. Linux 
fsfreeze offers fs consistency only, while Windows VSS allows agents like 
SQL Server to register their plugins to flush their cache to disk when a 
snapshot occurs.
4.  my understanding is xenserver does not support fsfreeze+vss now, 
because xenserver normally does not use block backend in qemu.
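
For point 2, a minimal sketch of triggering a guest filesystem freeze through
libvirt and the qemu guest agent (requires qemu-guest-agent running in the
guest; the fsFreeze/fsThaw calls exist in newer libvirt-python releases):

    # Sketch: quiesce a guest via qemu-ga before taking a snapshot/backup.
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('example-vm')   # placeholder domain name

    dom.fsFreeze()                          # freeze all mounted filesystems
    try:
        pass                                # take the snapshot / start backup here
    finally:
        dom.fsThaw()
    conn.close()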

-Original Message-
From: Bruce Montague [mailto:bruce_monta...@symantec.com] 
Sent: Thursday, March 13, 2014 10:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Disaster Recovery for OpenStack - call for 
stakeholder

Hi, about OpenStack and VSS. Does anyone have experience with the qemu 
project's implementation of VSS support? They appear to have a 
within-guest agent, qemu-ga, that perhaps can work as a VSS requestor. 
Does it also work with KVM? Does qemu-ga work with libvirt (can VSS 
quiesce be triggered via libvirt)? I think there was an effort for qemu-ga 
to use fsfreeze as an equivalent to VSS on Linux systems, was that done? 
If so, could an OpenStack API provide a generic quiesce request that would 
then get passed to libvirt? (Also, the XenServer VSS support seems 
different than qemu/KVM's, is this true? Can it also be accessed through 
libvirt?

Thanks,

-bruce

-Original Message-
From: Alessandro Pilotti [mailto:apilo...@cloudbasesolutions.com]
Sent: Thursday, March 13, 2014 6:49 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Disaster Recovery for OpenStack - call for 
stakeholder

Those use cases are very important for enterprise scenarios and requirements, 
but there's an important missing piece in the current OpenStack APIs: 
support for application-consistent backups via Volume Shadow Copy (or 
other solutions) at the instance level, including differential / 
incremental backups.

VSS can be seamlessly added to the Nova Hyper-V driver (it's included with 
the free Hyper-V Server), with e.g. vSphere and XenServer supporting it as 
well (quiescing), and with the option for third party vendors to add drivers 
for their solutions.

A generic Nova backup / restore API supporting those features is quite 
straightforward to design. The main question at this stage is if the 
OpenStack community wants to support those use cases or not. Cinder 
backup/restore support [1] and volume replication [2] are surely a great 
starting point in this direction.

Alessandro

[1] https://review.openstack.org/#/c/69351/
[2] https://review.openstack.org/#/c/64026/


 On 12/mar/2014, at 20:45, Bruce Montague bruce_monta...@symantec.com 
wrote:


 Hi, regarding the call to create a list of disaster recovery (DR) use 
cases ( 
http://lists.openstack.org/pipermail/openstack-dev/2014-March/028859.html 
), the following list sketches some speculative OpenStack DR use cases. 
These use cases do not reflect any specific product behavior and span a 
wide spectrum. This list is not a proposal, it is intended primarily to 
solicit additional discussion. The first basic use case, (1), is described 
in a bit more detail than the others; many of the others are elaborations 
on this basic theme.



 * (1) [Single VM]

 A single Windows VM with 4 volumes and VSS (Microsoft's Volume 
Shadowcopy Services) installed runs a key application and integral 
database. VSS can quiesce the app, database, filesystem, and I/O on demand 
and can be invoked external to the guest.

   a. The VM's volumes, including the boot volume, are replicated to a 
remote DR site (another OpenStack deployment).

   b. Some form of replicated VM or VM metadata exists at the remote 
site. This VM/description includes the replicated volumes. Some systems 
might use cold migration or some form of wide-area live VM migration to 
establish this remote site VM/description.

   c. When specified by an 

[openstack-dev] Disaster Recovery for OpenStack - community interest for Juno and beyond - meeting notes and next steps

2014-03-05 Thread Ronen Kat
Thank you to the participants who joined the kick-off meeting for work 
in the community toward Disaster Recovery for OpenStack.
We captured the meeting notes on the Etherpad - see 
https://etherpad.openstack.org/p/juno-disaster-recovery-call-for-stakeholders 


Per the consensus in the meeting we will schedule meetings toward the next 
summit.
Next meeting: March 19 12pm - 1pm ET (phone call-in)
Call in numbers are available at 
https://www.teleconference.att.com/servlet/glbAccess?process=1&accessCode=6406941&accessNumber=1809417783#C2
 

Passcode: 6406941

Everyone is invited!

Ronen,

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Disaster Recovery for OpenStack - call for stakeholder

2014-03-04 Thread Ronen Kat
Hello,

At the Hong Kong summit there was a lot of interest around OpenStack 
support for Disaster Recovery, including a design summit session, an 
un-conference session and a break-out session.
In addition we set up a wiki for OpenStack disaster recovery - see 
https://wiki.openstack.org/wiki/DisasterRecovery 
The first step was enabling volume replication in Cinder, which started 
in the Icehouse development cycle and will continue into Juno.

Toward the Juno summit and development cycle we would like to send out a 
call for disaster recovery stakeholders, looking to:
* Create a list of use-cases and scenarios for disaster recovery with 
OpenStack
* Find interested parties who wish to contribute features and code to 
advance disaster recovery in OpenStack
* Plan the needed discussions for the Juno summit

To coordinate such efforts, I would like to invite you to a conference 
call on Wednesday March 5 at 12pm ET, to work together on coordinating 
actions for the Juno summit (an invitation is attached).
We will record minutes of the call at 
https://etherpad.openstack.org/p/juno-disaster-recovery-call-for-stakeholders 
(a link is also available from the disaster recovery wiki page).
If you are unable to join but are interested, please register yourself and 
share your thoughts.



Call in numbers are available at 
https://www.teleconference.att.com/servlet/glbAccess?process=1&accessCode=6406941&accessNumber=1809417783#C2
 

Passcode: 6406941

Regards,
__
Ronen I. Kat, PhD
Storage Research
IBM Research - Haifa
Phone: +972.3.7689493
Email: ronen...@il.ibm.com


invite.ics
Description: Binary data
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Towards OpenStack Disaster Recovery

2013-10-21 Thread Ronen Kat
From:   Caitlin Bestler caitlin.best...@nexenta.com
To: openstack-dev@lists.openstack.org,
Date:   21/10/2013 06:55 PM
Subject:Re: [openstack-dev] Towards OpenStack Disaster Recovery


 Hi all,
 We (IBM and Red Hat) have begun discussions on enabling Disaster
Recovery
 (DR) in OpenStack.

 We have created a wiki page with our initial thoughts:
 https://wiki.openstack.org/wiki/DisasterRecovery
 We encourage others to contribute to this wiki.

What wasn't clear to me on first read is what the intended scope is.
Exactly what is being failed over? An entire multi-tenant data-center?
Specific tenants? Or specific enumerated sets of VMs for one tenant?

The exact set could range from a single VM (with its associated resources:
images, volumes, etc.) to a set of entities associated with a user.
The data-center itself (including its metadata and configuration) is
considered the equivalent of the hardware - in case of disaster, you
recover what is running, not the infrastructure.

Thanks for pointing out that the scope should be emphasized at the top...


Regards,
__
Ronen I. Kat, PhD
Storage Research
IBM Research - Haifa
Phone: +972.3.7689493
Email: ronen...@il.ibm.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Cinder Backup documentation - to which doc it should go?

2013-09-03 Thread Ronen Kat


I noticed the complaints about code submissions without appropriate
documentation, so I am ready to do my part for Cinder backup.
I have just one little question.
Not being up to date on the current set of OpenStack manuals, and as I
noticed that the block storage admin guide lost a lot of content, to which
document(s) should I add the Cinder backup documentation?

The documentation includes:
1. Backup configuration
2. General description of Cinder backup (commands, features, etc)
3. Description of the available backup drivers

Should all three go to the same place? Or different documents?

Thanks,

Regards,
__
Ronen I. Kat
Storage Research
IBM Research - Haifa
Phone: +972.3.7689493
Email: ronen...@il.ibm.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for Raksha, a Data Protection As a Service project

2013-08-31 Thread Ronen Kat
Hi Murali,

Thanks for answering. I think the issues you raised indeed make sense, and
they are important ones.

We need to provide backup both for:
1. Volumes
2. VM instances (VM image, VM metadata, and attached volumes)

While Cinder-backup handles (1), and is a very mature service, it does
not provide (2), and it does not make sense for Cinder-backup to handle (2)
either.
Backup of VMs (as a package) is beyond the scope of Cinder, which implies
that indeed something beyond Cinder should take this task.
I think this can be done by having Nova orchestrate or assist the backup,
either of volumes or VMs.

I think that from a backup perspective, there is also a need for
consistency groups - the set of entities (volumes) that are considered a
single logical unit and should be backed up together.
This logical consistency group could be larger than a VM, but a VM is a
good starting point.

In any case, we should adopt the off-load approach:
1. Handle application consistency issues using Nova, as it manages the VMs.
Add functionality to Nova to support live and consistent backup, including
orchestrating volume backup using Cinder.
2. Have Cinder do the volume backup; Cinder can then delegate the
task to the storage/hypervisor or anyone else who provides a backup driver.
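
A hedged sketch of step (2) from the client side, backing up every volume of
a VM through the existing cinder-backup service (credentials and the grouping
logic are placeholders; quiescing and ordering would be Nova's job per step (1)):

    # Minimal sketch: per-volume backups for one VM via cinder-backup.
    from cinderclient import client

    cinder = client.Client('2', 'admin', 'password', 'demo',
                           'http://controller:5000/v2.0')  # example credentials

    def backup_vm_volumes(volume_ids, vm_name):
        """Back up every volume attached to a VM as one logical group."""
        backups = []
        for vol_id in volume_ids:
            backups.append(cinder.backups.create(
                vol_id, name='%s-%s' % (vm_name, vol_id)))
        return backups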

While a new project is a neat package that addresses the issues, is it
worth the work?
OpenStack projects are complex, and successful projects require a lot of
work and long-term maintenance, which is the real pain for open source
projects, as the teams tend to be very dynamic.

My two cents is to adopt the nova-networking and nova-volume approach:
try to extend the current work within Nova and Cinder, and if we find out
it does not make sense anymore, explain the issues and split the work into a
new project.
This way, if a backup project is indeed needed, you already have the
community to support the effort, and you already have a mature solution.

Regards,
__
Ronen I. Kat
Storage Research
IBM Research - Haifa
Phone: +972.3.7689493
Email: ronen...@il.ibm.com




From:   Caitlin Bestler caitlin.best...@nexenta.com
To: Murali Balcha murali.bal...@triliodata.com,
Cc: OpenStack Development Mailing List
openstack-dev@lists.openstack.org
Date:   31/08/2013 07:25 AM
Subject:Re: [openstack-dev] Proposal for Raksha, a Data Protection As a
Service project



On 8/30/2013 12:49 PM, Murali Balcha wrote:
 Hi Caitlin,
 Did you get a chance to look at the wiki? It describes the raksha
functionality in detail.
  It includes more than volume backup. It includes vm images, all
  volumes and network configurations associated with vms and it
  supports incremental backups too. Volume backup is essential for
  implementing backup solution but not necessarily sufficient.

Cinder already allows backing volumes up to Swift, and in fact allows
incremental backups.

Any code you write will not back up a vendor's volume more efficiently
than the vendor's code itself can.

The vendor's knowledge of how the data is stored is probably sufficient,
but in this case a vendor has a far more powerful advantage. The vendor
can transfer the volume directly to the Swift server. Your service,
since it running on a compute node rather than the vendor's box, will
first have to fetch the content and *then* send it to Swift.

That's twice as much network traffic. This is not trivial when volumes
are big, which they tend to be.

If this service is implemented, customers who are using vendor backends
such as NexentaStor, NetApp or Ceph will see their performance drop.
That will clearly be unacceptable. New features are not allowed to
trash existing performance, especially when they are not actually
providing any new service to customers who already have volume backends
with these features.

You would need to have a proposal to work with the existing Cinder
backend Volume Drivers that in no way removed any option vendors have
currently to optimize performance.

Doing that in a new project, rather than within Cinder, can only make
life harder on the vendors and discourage participation in OpenStack.

I believe all of the features you are looking at can be accommodated by
taskflows using the existing Volume Driver feature (as evolving) in
Cinder.  A new project is not justified, and it would risk creating a
major performance regression for some customers.






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for Raksha, a Data Protection As a Service project

2013-08-29 Thread Ronen Kat
Hi Murali,

I think providing enhanced data protection in OpenStack is a great idea,
and I have been thinking about backup in OpenStack for a while now.
I am just not sure a new project is the only way to do it.

(as disclosure, I contributed code to enable IBM TSM as a Cinder backup
driver)

I wonder what the added value of a project approach is versus enhancements
to the current Nova and Cinder implementations of backup. Let me elaborate.

Nova has a nova backup feature that performs a backup of a VM to Glance;
the backup is managed by tenants in the same way that you propose.
While today it provides only point-in-time full backup, it seems reasonable
that it can be extended to support incremental and consistent backup as well,
as the actual work is done either by the storage or hypervisor in any case.

Cinder has a cinder backup command that performs a volume backup to Swift,
Ceph or TSM. The Ceph implementation also supports incremental backup (Ceph
to Ceph).
I envision that Cinder could be expanded to support incremental backup (for
persistent storage) by adding drivers/plug-ins that will leverage the
incremental backup features of either the storage or hypervisors.
Independently, in Havana the ability to do consistent volume snapshots was
added for GlusterFS. I assume that this consistency support could be
generalized to support other volume drivers and be utilized as part of the
backup code.
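
For reference, a hedged sketch of the two existing mechanisms mentioned above,
via the standard clients (credentials and IDs are placeholders; exact client
signatures may vary by release):

    # Sketch of the existing "nova backup" and "cinder backup" mechanisms.
    from novaclient import client as nova_client
    from cinderclient import client as cinder_client

    nova = nova_client.Client('2', 'admin', 'password', 'demo',
                              'http://controller:5000/v2.0')
    cinder = cinder_client.Client('2', 'admin', 'password', 'demo',
                                  'http://controller:5000/v2.0')

    # Point-in-time image of the instance stored in Glance,
    # keeping up to 7 rotated weekly copies.
    nova.servers.backup('INSTANCE_ID', 'weekly-backup', 'weekly', 7)

    # Per-volume backup to the configured backup driver
    # (Swift, Ceph or TSM at the time of this thread).
    cinder.backups.create('VOLUME_ID', name='vol-backup-1')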

Looking at the key features in Raksha, it seems that the main features
(2, 3, 4, 7) could be addressed by improving the current mechanisms in Nova
and Cinder. I didn't include 1 as a feature, as it is more a statement of
intent (or goal) than a feature.
Features 5 (dedup) and 6 (scheduler) are indeed new in your proposal.

Looking at the source de-duplication feature, and taking Swift as an
example, it seems reasonable that if Swift implements de-duplication,
then backing up to Swift will give us de-duplication for free.
In fact it would make sense to do the de-duplication at the Swift level
instead of just the backup layer, to gain more de-duplication opportunities.

Following the above, and assuming it all comes true (at times I am known to
be an optimist), we are left with backup job scheduling, and I wonder if
that is enough for a new project.

My question is: would it make more sense to add to the current mechanisms in
Nova and Cinder than to add the complexity of a new project?

Thanks,

Regards,
__
Ronen I. Kat
Storage Research
IBM Research - Haifa
Phone: +972.3.7689493
Email: ronen...@il.ibm.com

From:   Murali Balcha murali.bal...@triliodata.com
To: openstack-dev@lists.openstack.org
openstack-dev@lists.openstack.org,
openst...@list.openstack.org openst...@list.openstack.org,
Date:   29/08/2013 01:18 AM
Subject:[openstack-dev] Proposal for Raksha, a Data Protection As a
Service project



Hello Stackers,
We would like to introduce a new project Raksha, a Data Protection As a
Service (DPaaS) for OpenStack Cloud.
Raksha’s primary goal is to provide a comprehensive Data Protection for
OpenStack by leveraging Nova, Swift, Glance and Cinder. Raksha has
following key features:
  1.   Provide an enterprise grade data protection for OpenStack
  based clouds
  2.   Tenant administered backups and restores
  3.   Application consistent backups
  4.   Point In Time(PiT) full and incremental backups and restores
  5.   Dedupe at source for efficient backups
  6.   A job scheduler for periodic backups
  7.   Noninvasive backup solution that does not require service
  interruption during backup window

You will find the rationale behind the need for Raksha in OpenStack in its
Wiki. The wiki also has the preliminary design and the API description.
Some of the Raksha functionality may overlap with Nova and Cinder projects
and as a community lets work together to coordinate the features among
these projects. We would like to seek out early feedback so we can address
as many issues as we can in the first code drop. We are hoping to enlist
the OpenStack community help in making Raksha a part of OpenStack.
Raksha’s project resources:
Wiki: https://wiki.openstack.org/wiki/Raksha
Launchpad: https://launchpad.net/raksha
Github: https://github.com/DPaaS-Raksha/Raksha (We will upload a prototype
code in few days)
If you want to talk to us, send an email to
openstack-...@lists.launchpad.net with [raksha] in the subject or use
#openstack-raksha irc channel.

Best Regards,
Murali Balcha
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev