Re: [vdsm] [Engine-devel] RFC: Writeup on VDSM-libstoragemgmt integration

2012-06-26 Thread Deepak C Shetty

On 06/25/2012 11:13 PM, Itamar Heim wrote:

On 06/25/2012 10:14 AM, Deepak C Shetty wrote:

On 06/25/2012 07:47 AM, Shu Ming wrote:

On 2012-6-25 10:10, Andrew Cathrow wrote:


- Original Message -

From: Andy Grover agro...@redhat.com
To: Shu Ming shum...@linux.vnet.ibm.com
Cc: libstoragemgmt-de...@lists.sourceforge.net,
engine-de...@ovirt.org, VDSM Project Development
vdsm-devel@lists.fedorahosted.org
Sent: Sunday, June 24, 2012 10:05:45 PM
Subject: Re: [vdsm] [Engine-devel] RFC: Writeup on
VDSM-libstoragemgmt integration

On 06/24/2012 07:28 AM, Shu Ming wrote:

On 2012-6-23 20:40, Itamar Heim wrote:

On 06/23/2012 03:09 AM, Andy Grover wrote:

On 06/22/2012 04:46 PM, Itamar Heim wrote:

On 06/23/2012 02:31 AM, Andy Grover wrote:

On 06/18/2012 01:15 PM, Saggi Mizrahi wrote:

Also, there is no mention on credentials in any part of the
process.
How does VDSM or the host get access to actually modify the
storage
array? Who holds the creds for that and how? How does the user
set
this up?

It seems to me more natural to have the oVirt-engine use
libstoragemgmt
directly to allocate and export a volume on the storage array,
and
then
pass this info to the vdsm on the node creating the vm. This
answers
Saggi's question about creds -- vdsm never needs array
modification
creds, it only gets handed the params needed to connect and use
the
new
block device (ip, iqn, chap, lun).

Is this usage model made difficult or impossible by the current
software
architecture?

what about live snapshots?

I'm not a virt guy, so extreme handwaving:

vm X uses luns 1 & 2

engine -> vdsm pause vm X

that's pausing the VM. live snapshot isn't supposed to do so.

Though we don't expect to pause the VM while a live snapshot is in
progress, the VM should be blocked from accessing the specific luns
for a while. The blocking time should be very short to avoid storage
I/O timeouts in the VM.

OK my mistake, we don't pause the VM during live snapshot, we block
on
access to the luns while snapshotting. Does this keep live snapshots
working and mean ovirt-engine can use libsm to config the storage
array
instead of vdsm?

Because that was really my main question, should we be talking about
engine-libstoragemgmt integration rather than vdsm-libstoragemgmt
integration.

for snapshotting wouldn't we want VDSM to handle the coordination of
the various atomic functions?


I think VDSM-libstoragemgmt will let the storage array itself make
the snapshot and handle the coordination of the various atomic
functions. VDSM should block subsequent access to the specific luns
which are under snapshot.


I kind of agree. If the snapshot is being done at the array level, then the
array takes care of quiescing the I/O, taking the snapshot and allowing
the I/O again, so why does VDSM have to worry about anything here? It
should all happen transparently to VDSM, shouldn't it?


I may be missing something, but AFAIU you need to ask the guest to 
perform the quiesce, and I'm sure the storage array can't do that.


No, you are not, I missed it. After Tony & Shu Ming's reply, I realised 
that the guest has to quiesce the I/O before VDSM can ask the storage 
array to take the snapshot.





___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] [Engine-devel] RFC: Writeup on VDSM-libstoragemgmt integration

2012-06-25 Thread Deepak C Shetty

On 06/19/2012 01:45 AM, Saggi Mizrahi wrote:

First of all I'd like to suggest not using the LSM acronym as it can also mean 
live-storage-migration and maybe other things.


Sure, what do you suggest? libSM?


Secondly I would like to avoid talking about what needs to be changed in VDSM 
before we figure out what exactly we want to accomplish.



Also, there is no mention on credentials in any part of the process.
How does VDSM or the host get access to actually modify the storage array?
Who holds the creds for that and how?
How does the user set this up?


Per my original discussion on this with Ayal, this is what he had 
suggested...
In addition, I'm assuming we will either need a new 'storage array' 
entity in engine to keep credentials, or, in case of storage array as 
storage domain, just keep this info as part of the domain at engine level.


Either we can have the libstoragemgmt creds stored in the engine as part 
of engine-setup, or have the user input them as part of storage 
provisioning and click a 'remember credentials' button, so the engine 
saves them and passes them to VDSM as needed. Either way, the creds 
should come from the user/admin, no other way, correct?
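A minimal sketch of the credential flow being proposed here, under the assumption of hypothetical names throughout (none of these are real oVirt Engine or VDSM interfaces): creds originate from the user/admin, the engine optionally remembers them, and hands them to VDSM per request:

```python
# Illustrative stubs only -- not real oVirt Engine or VDSM interfaces.
engine_cred_store = {}   # array_id -> creds, kept on the engine side only


def engine_save_creds(array_id, user, password, remember=True):
    # Creds always originate from the user/admin (engine-setup or the UI).
    if remember:
        engine_cred_store[array_id] = {"user": user, "password": password}


def engine_call_vdsm(array_id, verb, **args):
    # The engine passes creds to VDSM per call; VDSM never persists them.
    creds = engine_cred_store[array_id]
    return {"verb": verb, "creds": creds, "args": args}


engine_save_creds("array1", "admin", "s3cret")
request = engine_call_vdsm("array1", "createLun", size_gb=10)
```

This mirrors the "array modification creds stay engine-side" position taken elsewhere in the thread; the `createLun` verb name is invented for illustration.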



In the array-as-domain case, how are the luns being mapped to initiators? 
What about setting discovery credentials?
In the array set-up case, how will the hosts be represented in regard to 
credentials?
How will the different schemes and capabilities in regard to 
authentication methods be expressed?


Not clear on what the concern here is. Can you please provide more 
clarity on the problem here?

Maybe providing some examples will help.


Rest of the comments inline

- Original Message -

From: Deepak C Shetty deepa...@linux.vnet.ibm.com
To: VDSM Project Development vdsm-devel@lists.fedorahosted.org
Cc: libstoragemgmt-de...@lists.sourceforge.net, engine-de...@ovirt.org
Sent: Wednesday, May 30, 2012 5:38:46 AM
Subject: [Engine-devel] RFC: Writeup on VDSM-libstoragemgmt integration

Hello All,

  I have a draft write-up on the VDSM-libstoragemgmt integration.
I wanted to run this thru' the mailing list(s) to help tune and
crystallize it, before putting it on the ovirt wiki.
I have run this once thru Ayal and Tony, so have some of their
comments
incorporated.

I still have a few doubts/questions, which I have posted below with
lines ending with '?'

Comments / Suggestions are welcome & appreciated.

thanx,
deepak

[Ccing engine-devel and libstoragemgmt lists as this stuff is
relevant
to them too]

--

1) Background:

VDSM provides a high-level API for node virtualization management. It
acts in response to requests sent by oVirt Engine, which uses VDSM to
do all node virtualization related tasks, including but not limited to
storage management.

libstoragemgmt aims to provide a vendor-agnostic API for managing
external storage arrays. It should help system administrators utilizing
open source solutions programmatically manage their storage hardware in
a vendor-neutral way. It also aims to facilitate management automation,
ease of use, and taking advantage of storage vendor supported features
which improve storage performance and space utilization.

Home Page: http://sourceforge.net/apps/trac/libstoragemgmt/

libstoragemgmt (LSM) today supports C and Python plugins for talking to
external storage arrays using SMI-S as well as native interfaces (eg:
the netapp plugin).
The plan is to grow the SMI-S interface as needed over time and to add
more vendor-specific plugins for exploiting features not possible via
SMI-S, or where there are better alternatives to SMI-S.
For eg: many of the copy-offload features require vendor-specific
commands, which justifies the need for a vendor-specific plugin.


2) Goals:

  2a) Ability to plug external storage arrays into the oVirt/VDSM
virtualization stack, in a vendor-neutral way.

  2b) Ability to list features/capabilities and other statistical
info of the array.

  2c) Ability to utilize the storage array offload capabilities from
oVirt/VDSM.


3) Details:

LSM will sit as a new repository engine in VDSM.
VDSM Repository Engine WIP @ http://gerrit.ovirt.org/#change,192

Current plan is to have LSM co-exist with VDSM on the virtualization
nodes.

*Note : 'storage' used below is generic. It can be a file/nfs-export
for
NAS targets and LUN/logical-drive for SAN targets.

VDSM can use LSM and do the following...
  - Provision storage
  - Consume storage

3.1) Provisioning Storage using LSM

Typically this will be done by a Storage administrator.

oVirt/VDSM should provide the storage admin the
  - ability to list the different storage arrays along with their
types (NAS/SAN), capabilities, free/used space.
  - ability to provision storage using any of the array capabilities
(eg: thin provisioned lun or new NFS export)
  - ability to manage the provisioned storage
Re: [vdsm] [Engine-devel] RFC: Writeup on VDSM-libstoragemgmt integration

2012-06-25 Thread Deepak C Shetty

On 06/25/2012 08:28 PM, Ryan Harper wrote:

* Andrew Cathrow acath...@redhat.com [2012-06-24 21:11]:


- Original Message -

From: Andy Grover agro...@redhat.com
To: Shu Ming shum...@linux.vnet.ibm.com
Cc: libstoragemgmt-de...@lists.sourceforge.net, engine-de...@ovirt.org, VDSM 
Project Development
vdsm-devel@lists.fedorahosted.org
Sent: Sunday, June 24, 2012 10:05:45 PM
Subject: Re: [vdsm] [Engine-devel] RFC: Writeup on VDSM-libstoragemgmt  
integration

On 06/24/2012 07:28 AM, Shu Ming wrote:

On 2012-6-23 20:40, Itamar Heim wrote:

On 06/23/2012 03:09 AM, Andy Grover wrote:

On 06/22/2012 04:46 PM, Itamar Heim wrote:

On 06/23/2012 02:31 AM, Andy Grover wrote:

On 06/18/2012 01:15 PM, Saggi Mizrahi wrote:

Also, there is no mention on credentials in any part of the
process.
How does VDSM or the host get access to actually modify the
storage
array? Who holds the creds for that and how? How does the user
set
this up?

It seems to me more natural to have the oVirt-engine use
libstoragemgmt
directly to allocate and export a volume on the storage array,
and
then
pass this info to the vdsm on the node creating the vm. This
answers
Saggi's question about creds -- vdsm never needs array
modification
creds, it only gets handed the params needed to connect and use
the
new
block device (ip, iqn, chap, lun).

Is this usage model made difficult or impossible by the current
software
architecture?

what about live snapshots?

I'm not a virt guy, so extreme handwaving:

vm X uses luns 1 & 2

engine -> vdsm pause vm X

that's pausing the VM. live snapshot isn't supposed to do so.

Though we don't expect to pause the VM while a live snapshot is in
progress, the VM should be blocked from accessing the specific luns
for a while. The blocking time should be very short to avoid storage
I/O timeouts in the VM.

OK my mistake, we don't pause the VM during live snapshot, we block
on
access to the luns while snapshotting. Does this keep live snapshots
working and mean ovirt-engine can use libsm to config the storage
array
instead of vdsm?

Because that was really my main question, should we be talking about
engine-libstoragemgmt integration rather than vdsm-libstoragemgmt
integration.

for snapshotting wouldn't we want VDSM to handle the coordination of
the various atomic functions?

Absolutely.  Requiring every management application (engine, etc) to
integrate with libstoragemanagement is a win here.  We want to simplify
working with KVM, storage, etc not require every mgmt application to
know deep details about how to create a live VM snapshot.



Sorry, but not clear to me. Are you saying engine-libstoragemgmt 
integration is a win here?
VDSM is the common factor here... so integrating libstoragemgmt with VDSM 
helps anybody talking with VDSM in future, AFAIU.




Re: [vdsm] [Engine-devel] RFC: Writeup on VDSM-libstoragemgmt integration

2012-06-25 Thread Ryan Harper
* Deepak C Shetty deepa...@linux.vnet.ibm.com [2012-06-25 10:14]:
 On 06/25/2012 08:28 PM, Ryan Harper wrote:
 * Andrew Cathrow acath...@redhat.com [2012-06-24 21:11]:
 
 - Original Message -
 From: Andy Grover agro...@redhat.com
 To: Shu Ming shum...@linux.vnet.ibm.com
 Cc: libstoragemgmt-de...@lists.sourceforge.net, engine-de...@ovirt.org, 
 VDSM Project Development
 vdsm-devel@lists.fedorahosted.org
 Sent: Sunday, June 24, 2012 10:05:45 PM
 Subject: Re: [vdsm] [Engine-devel] RFC: Writeup on VDSM-libstoragemgmt 
 integration
 
 On 06/24/2012 07:28 AM, Shu Ming wrote:
 On 2012-6-23 20:40, Itamar Heim wrote:
 On 06/23/2012 03:09 AM, Andy Grover wrote:
 On 06/22/2012 04:46 PM, Itamar Heim wrote:
 On 06/23/2012 02:31 AM, Andy Grover wrote:
 On 06/18/2012 01:15 PM, Saggi Mizrahi wrote:
 Also, there is no mention on credentials in any part of the
 process.
 How does VDSM or the host get access to actually modify the
 storage
 array? Who holds the creds for that and how? How does the user
 set
 this up?
 It seems to me more natural to have the oVirt-engine use
 libstoragemgmt
 directly to allocate and export a volume on the storage array,
 and
 then
 pass this info to the vdsm on the node creating the vm. This
 answers
 Saggi's question about creds -- vdsm never needs array
 modification
 creds, it only gets handed the params needed to connect and use
 the
 new
 block device (ip, iqn, chap, lun).
 
 Is this usage model made difficult or impossible by the current
 software
 architecture?
 what about live snapshots?
 I'm not a virt guy, so extreme handwaving:
 
 vm X uses luns 1 & 2
 
 engine -> vdsm pause vm X
 that's pausing the VM. live snapshot isn't supposed to do so.
 Though we don't expect to pause the VM while a live snapshot is in
 progress, the VM should be blocked from accessing the specific luns
 for a while. The blocking time should be very short to avoid storage
 I/O timeouts in the VM.
 OK my mistake, we don't pause the VM during live snapshot, we block
 on
 access to the luns while snapshotting. Does this keep live snapshots
 working and mean ovirt-engine can use libsm to config the storage
 array
 instead of vdsm?
 
 Because that was really my main question, should we be talking about
 engine-libstoragemgmt integration rather than vdsm-libstoragemgmt
 integration.
 for snapshotting wouldn't we want VDSM to handle the coordination of
 the various atomic functions?
 Absolutely.  Requiring every management application (engine, etc) to
 integrate with libstoragemanagement is a win here.  We want to simplify
 working with KVM, storage, etc not require every mgmt application to
 know deep details about how to create a live VM snapshot.
 
 
 Sorry, but not clear to me. Are you saying engine-libstoragemgmt
 integration is a win here ?

Sorry if I wasn't clear.  To answer your question: No. 

The mgmt app should *NOT* have to learn all of the ins and outs of the
end-point storage and the management of it.

 VDSM is the common factor here... so integrating libstoragemgmt with
 VDSM helps anybody talking with VDSM in future, AFAIU.

Yes.  100% agree.


-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
ry...@us.ibm.com



Re: [vdsm] [Engine-devel] RFC: Writeup on VDSM-libstoragemgmt integration

2012-06-25 Thread Andy Grover
On 06/25/2012 08:17 AM, Ryan Harper wrote:
 * Deepak C Shetty deepa...@linux.vnet.ibm.com [2012-06-25 10:14]:
 On 06/25/2012 08:28 PM, Ryan Harper wrote:
 The mgmt app should *NOT* have to learn all of the ins and outs of the
 end-point storage and the management of it.
 
  VDSM is the common factor here... so integrating libstoragemgmt with
  VDSM helps anybody talking with VDSM in future, AFAIU.
 
 Yes.  100% agree.

Thanks, this has helped me understand vdsm's role much better.

-- Andy


Re: [vdsm] [Engine-devel] RFC: Writeup on VDSM-libstoragemgmt integration

2012-06-25 Thread Itamar Heim

On 06/25/2012 10:14 AM, Deepak C Shetty wrote:

On 06/25/2012 07:47 AM, Shu Ming wrote:

On 2012-6-25 10:10, Andrew Cathrow wrote:


- Original Message -

From: Andy Grover agro...@redhat.com
To: Shu Ming shum...@linux.vnet.ibm.com
Cc: libstoragemgmt-de...@lists.sourceforge.net,
engine-de...@ovirt.org, VDSM Project Development
vdsm-devel@lists.fedorahosted.org
Sent: Sunday, June 24, 2012 10:05:45 PM
Subject: Re: [vdsm] [Engine-devel] RFC: Writeup on
VDSM-libstoragemgmt integration

On 06/24/2012 07:28 AM, Shu Ming wrote:

On 2012-6-23 20:40, Itamar Heim wrote:

On 06/23/2012 03:09 AM, Andy Grover wrote:

On 06/22/2012 04:46 PM, Itamar Heim wrote:

On 06/23/2012 02:31 AM, Andy Grover wrote:

On 06/18/2012 01:15 PM, Saggi Mizrahi wrote:

Also, there is no mention on credentials in any part of the
process.
How does VDSM or the host get access to actually modify the
storage
array? Who holds the creds for that and how? How does the user
set
this up?

It seems to me more natural to have the oVirt-engine use
libstoragemgmt
directly to allocate and export a volume on the storage array,
and
then
pass this info to the vdsm on the node creating the vm. This
answers
Saggi's question about creds -- vdsm never needs array
modification
creds, it only gets handed the params needed to connect and use
the
new
block device (ip, iqn, chap, lun).

Is this usage model made difficult or impossible by the current
software
architecture?

what about live snapshots?

I'm not a virt guy, so extreme handwaving:

vm X uses luns 1 & 2

engine -> vdsm pause vm X

that's pausing the VM. live snapshot isn't supposed to do so.

Though we don't expect to pause the VM while a live snapshot is in
progress, the VM should be blocked from accessing the specific luns
for a while. The blocking time should be very short to avoid storage
I/O timeouts in the VM.

OK my mistake, we don't pause the VM during live snapshot, we block
on
access to the luns while snapshotting. Does this keep live snapshots
working and mean ovirt-engine can use libsm to config the storage
array
instead of vdsm?

Because that was really my main question, should we be talking about
engine-libstoragemgmt integration rather than vdsm-libstoragemgmt
integration.

for snapshotting wouldn't we want VDSM to handle the coordination of
the various atomic functions?


I think VDSM-libstoragemgmt will let the storage array itself make
the snapshot and handle the coordination of the various atomic
functions. VDSM should block subsequent access to the specific luns
which are under snapshot.


I kind of agree. If the snapshot is being done at the array level, then the
array takes care of quiescing the I/O, taking the snapshot and allowing
the I/O again, so why does VDSM have to worry about anything here? It
should all happen transparently to VDSM, shouldn't it?


I may be missing something, but AFAIU you need to ask the guest to 
perform the quiesce, and I'm sure the storage array can't do that.
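The ordering being described can be sketched as follows. The freeze/thaw/snapshot functions are illustrative stubs, not real VDSM or libstoragemgmt calls; in practice the freeze would go through something like the QEMU guest agent's filesystem-freeze support:

```python
# Illustrative stubs only -- not real VDSM/libstoragemgmt/guest-agent APIs.
log = []


def guest_freeze(vm):
    # Ask the guest (via its agent) to quiesce: flush and freeze filesystems.
    log.append("freeze")


def guest_thaw(vm):
    log.append("thaw")


def array_snapshot(luns):
    # An array-level snapshot is only consistent while guest I/O is quiesced.
    log.append("snapshot")


def quiesced_snapshot(vm, luns):
    guest_freeze(vm)
    try:
        array_snapshot(luns)
    finally:
        guest_thaw(vm)  # never leave the guest frozen, even on failure


quiesced_snapshot("vmX", [1, 2])
```

The point of the try/finally is exactly the failure mode raised earlier in the thread: a half-finished snapshot must not leave the guest blocked.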



Re: [vdsm] [Engine-devel] RFC: Writeup on VDSM-libstoragemgmt integration

2012-06-24 Thread Shu Ming

On 2012-6-23 20:40, Itamar Heim wrote:

On 06/23/2012 03:09 AM, Andy Grover wrote:

On 06/22/2012 04:46 PM, Itamar Heim wrote:

On 06/23/2012 02:31 AM, Andy Grover wrote:

On 06/18/2012 01:15 PM, Saggi Mizrahi wrote:

Also, there is no mention on credentials in any part of the process.
How does VDSM or the host get access to actually modify the storage
array? Who holds the creds for that and how? How does the user set
this up?


It seems to me more natural to have the oVirt-engine use 
libstoragemgmt
directly to allocate and export a volume on the storage array, and 
then

pass this info to the vdsm on the node creating the vm. This answers
Saggi's question about creds -- vdsm never needs array modification
creds, it only gets handed the params needed to connect and use the 
new

block device (ip, iqn, chap, lun).

Is this usage model made difficult or impossible by the current 
software

architecture?


what about live snapshots?


I'm not a virt guy, so extreme handwaving:

vm X uses luns 1 & 2

engine -> vdsm pause vm X


that's pausing the VM. live snapshot isn't supposed to do so.


Though we don't expect to pause the VM while a live snapshot is in 
progress, the VM should be blocked from accessing the specific luns 
for a while. The blocking time should be very short to avoid storage 
I/O timeouts in the VM.





engine -> libstoragemgmt snapshot luns 1, 2 to luns 3, 4
engine -> vdsm snapshot running state of X to Y
engine -> vdsm unpause vm X


if the engine had any failure before this step, the VM would remain 
paused, i.e., we compromised the VM to take a live snapshot.



engine -> vdsm change Y to use luns 3, 4

?

-- Andy


___
Engine-devel mailing list
engine-de...@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel




--
Shu Ming shum...@linux.vnet.ibm.com
IBM China Systems and Technology Laboratory




Re: [vdsm] [Engine-devel] RFC: Writeup on VDSM-libstoragemgmt integration

2012-06-24 Thread Andrew Cathrow


- Original Message -
 From: Andy Grover agro...@redhat.com
 To: Shu Ming shum...@linux.vnet.ibm.com
 Cc: libstoragemgmt-de...@lists.sourceforge.net, engine-de...@ovirt.org, VDSM 
 Project Development
 vdsm-devel@lists.fedorahosted.org
 Sent: Sunday, June 24, 2012 10:05:45 PM
 Subject: Re: [vdsm] [Engine-devel] RFC: Writeup on VDSM-libstoragemgmt
 integration
 
 On 06/24/2012 07:28 AM, Shu Ming wrote:
  On 2012-6-23 20:40, Itamar Heim wrote:
  On 06/23/2012 03:09 AM, Andy Grover wrote:
  On 06/22/2012 04:46 PM, Itamar Heim wrote:
  On 06/23/2012 02:31 AM, Andy Grover wrote:
  On 06/18/2012 01:15 PM, Saggi Mizrahi wrote:
  Also, there is no mention on credentials in any part of the
  process.
  How does VDSM or the host get access to actually modify the
  storage
  array? Who holds the creds for that and how? How does the user
  set
  this up?
 
  It seems to me more natural to have the oVirt-engine use
  libstoragemgmt
  directly to allocate and export a volume on the storage array,
  and
  then
  pass this info to the vdsm on the node creating the vm. This
  answers
  Saggi's question about creds -- vdsm never needs array
  modification
  creds, it only gets handed the params needed to connect and use
  the
  new
  block device (ip, iqn, chap, lun).
 
  Is this usage model made difficult or impossible by the current
  software
  architecture?
 
  what about live snapshots?
 
  I'm not a virt guy, so extreme handwaving:
 
  vm X uses luns 1 & 2
 
  engine -> vdsm pause vm X
 
  that's pausing the VM. live snapshot isn't supposed to do so.
  
  Though we don't expect to pause the VM while a live snapshot is in
  progress, the VM should be blocked from accessing the specific luns
  for a while. The blocking time should be very short to avoid storage
  I/O timeouts in the VM.
 
 OK my mistake, we don't pause the VM during live snapshot, we block
 on
 access to the luns while snapshotting. Does this keep live snapshots
 working and mean ovirt-engine can use libsm to config the storage
 array
 instead of vdsm?
 
 Because that was really my main question, should we be talking about
 engine-libstoragemgmt integration rather than vdsm-libstoragemgmt
 integration.

for snapshotting wouldn't we want VDSM to handle the coordination of the 
various atomic functions?
 
 Thanks -- Regards -- Andy


Re: [vdsm] [Engine-devel] RFC: Writeup on VDSM-libstoragemgmt integration

2012-06-23 Thread Itamar Heim

On 06/23/2012 03:09 AM, Andy Grover wrote:

On 06/22/2012 04:46 PM, Itamar Heim wrote:

On 06/23/2012 02:31 AM, Andy Grover wrote:

On 06/18/2012 01:15 PM, Saggi Mizrahi wrote:

Also, there is no mention on credentials in any part of the process.
How does VDSM or the host get access to actually modify the storage
array? Who holds the creds for that and how? How does the user set
this up?


It seems to me more natural to have the oVirt-engine use libstoragemgmt
directly to allocate and export a volume on the storage array, and then
pass this info to the vdsm on the node creating the vm. This answers
Saggi's question about creds -- vdsm never needs array modification
creds, it only gets handed the params needed to connect and use the new
block device (ip, iqn, chap, lun).

Is this usage model made difficult or impossible by the current software
architecture?


what about live snapshots?


I'm not a virt guy, so extreme handwaving:

vm X uses luns 1 & 2

engine -> vdsm pause vm X


that's pausing the VM. live snapshot isn't supposed to do so.


engine -> libstoragemgmt snapshot luns 1, 2 to luns 3, 4
engine -> vdsm snapshot running state of X to Y
engine -> vdsm unpause vm X


if the engine had any failure before this step, the VM would remain 
paused, i.e., we compromised the VM to take a live snapshot.



engine -> vdsm change Y to use luns 3, 4

?

-- Andy




Re: [vdsm] [Engine-devel] RFC: Writeup on VDSM-libstoragemgmt integration

2012-06-22 Thread Andy Grover
On 06/18/2012 01:15 PM, Saggi Mizrahi wrote:
 Also, there is no mention on credentials in any part of the process. 
 How does VDSM or the host get access to actually modify the storage
 array? Who holds the creds for that and how? How does the user set
 this up?

It seems to me more natural to have the oVirt-engine use libstoragemgmt
directly to allocate and export a volume on the storage array, and then
pass this info to the vdsm on the node creating the vm. This answers
Saggi's question about creds -- vdsm never needs array modification
creds, it only gets handed the params needed to connect and use the new
block device (ip, iqn, chap, lun).

Is this usage model made difficult or impossible by the current software
architecture?

Thanks -- Regards -- Andy


Re: [vdsm] [Engine-devel] RFC: Writeup on VDSM-libstoragemgmt integration

2012-06-22 Thread Itamar Heim

On 06/23/2012 02:31 AM, Andy Grover wrote:

On 06/18/2012 01:15 PM, Saggi Mizrahi wrote:

Also, there is no mention on credentials in any part of the process.
How does VDSM or the host get access to actually modify the storage
array? Who holds the creds for that and how? How does the user set
this up?


It seems to me more natural to have the oVirt-engine use libstoragemgmt
directly to allocate and export a volume on the storage array, and then
pass this info to the vdsm on the node creating the vm. This answers
Saggi's question about creds -- vdsm never needs array modification
creds, it only gets handed the params needed to connect and use the new
block device (ip, iqn, chap, lun).

Is this usage model made difficult or impossible by the current software
architecture?


what about live snapshots?


Re: [vdsm] [Engine-devel] RFC: Writeup on VDSM-libstoragemgmt integration

2012-06-22 Thread Andy Grover
On 06/22/2012 04:46 PM, Itamar Heim wrote:
 On 06/23/2012 02:31 AM, Andy Grover wrote:
 On 06/18/2012 01:15 PM, Saggi Mizrahi wrote:
 Also, there is no mention on credentials in any part of the process.
 How does VDSM or the host get access to actually modify the storage
 array? Who holds the creds for that and how? How does the user set
 this up?

 It seems to me more natural to have the oVirt-engine use libstoragemgmt
 directly to allocate and export a volume on the storage array, and then
 pass this info to the vdsm on the node creating the vm. This answers
 Saggi's question about creds -- vdsm never needs array modification
 creds, it only gets handed the params needed to connect and use the new
 block device (ip, iqn, chap, lun).

 Is this usage model made difficult or impossible by the current software
 architecture?
 
 what about live snapshots?

I'm not a virt guy, so extreme handwaving:

vm X uses luns 1 & 2

engine -> vdsm pause vm X
engine -> libstoragemgmt snapshot luns 1, 2 to luns 3, 4
engine -> vdsm snapshot running state of X to Y
engine -> vdsm unpause vm X
engine -> vdsm change Y to use luns 3, 4

?
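The five steps above could be sketched as engine-side orchestration roughly like this. All the vdsm_*/lsm_* functions are placeholder stubs, not real VDSM or libstoragemgmt APIs; the one deliberate change is moving the unpause into a finally block, so the VM resumes even if an intermediate step fails:

```python
# Hypothetical engine-side orchestration of the five-step flow.
# Every function below is an illustrative stub, not a real API.
calls = []


def vdsm_pause(vm):
    calls.append(("pause", vm))


def vdsm_unpause(vm):
    calls.append(("unpause", vm))


def lsm_snapshot_luns(luns):
    # Array-level snapshot: e.g. luns 1, 2 -> luns 3, 4.
    calls.append(("lsm_snapshot", tuple(luns)))
    return [lun + 2 for lun in luns]


def vdsm_snapshot_state(vm):
    # Snapshot the running state of X to Y.
    calls.append(("state_snapshot", vm))
    return vm + "-snap"


def vdsm_retarget(snap, luns):
    # Change the snapshot to use the new luns.
    calls.append(("retarget", snap, tuple(luns)))


def snapshot_vm(vm, luns):
    vdsm_pause(vm)                          # engine -> vdsm pause vm X
    try:
        new_luns = lsm_snapshot_luns(luns)  # engine -> libstoragemgmt
        snap = vdsm_snapshot_state(vm)      # engine -> vdsm snapshot state
        vdsm_retarget(snap, new_luns)       # engine -> vdsm change Y
        return snap
    finally:
        vdsm_unpause(vm)                    # resume even if a step failed


snapshot_vm("X", [1, 2])
```

The finally block addresses the objection raised in this thread that a failure mid-flow would otherwise leave the VM paused.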

-- Andy


Re: [vdsm] [Engine-devel] RFC: Writeup on VDSM-libstoragemgmt integration

2012-06-18 Thread Saggi Mizrahi
First of all I'd like to suggest not using the LSM acronym as it can also mean 
live-storage-migration and maybe other things.

Secondly I would like to avoid talking about what needs to be changed in VDSM 
before we figure out what exactly we want to accomplish.

Also, there is no mention on credentials in any part of the process.
How does VDSM or the host get access to actually modify the storage array?
Who holds the creds for that and how?
How does the user set this up?

In the array-as-domain case, how are the luns being mapped to initiators? 
What about setting discovery credentials?
In the array set-up case, how will the hosts be represented in regard to 
credentials?
How will the different schemes and capabilities in regard to 
authentication methods be expressed?

Rest of the comments inline

- Original Message -
 From: Deepak C Shetty deepa...@linux.vnet.ibm.com
 To: VDSM Project Development vdsm-devel@lists.fedorahosted.org
 Cc: libstoragemgmt-de...@lists.sourceforge.net, engine-de...@ovirt.org
 Sent: Wednesday, May 30, 2012 5:38:46 AM
 Subject: [Engine-devel] RFC: Writeup on VDSM-libstoragemgmt integration
 
 Hello All,
 
  I have a draft write-up on the VDSM-libstoragemgmt integration.
 I wanted to run this thru' the mailing list(s) to help tune and
 crystallize it, before putting it on the ovirt wiki.
 I have run this once thru Ayal and Tony, so have some of their
 comments
 incorporated.
 
 I still have a few doubts/questions, which I have posted below with
 lines ending with '?'
 
 Comments / Suggestions are welcome & appreciated.
 
 thanx,
 deepak
 
 [Ccing engine-devel and libstoragemgmt lists as this stuff is
 relevant
 to them too]
 
 --
 
 1) Background:
 
 VDSM provides a high-level API for node virtualization management. It
 acts in response to requests sent by oVirt Engine, which uses VDSM to
 do all node virtualization related tasks, including but not limited to
 storage management.
 
 libstoragemgmt aims to provide a vendor-agnostic API for managing
 external storage arrays. It should help system administrators utilizing
 open source solutions programmatically manage their storage hardware in
 a vendor-neutral way. It also aims to facilitate management automation,
 ease of use, and taking advantage of storage vendor supported features
 which improve storage performance and space utilization.
 
 Home Page: http://sourceforge.net/apps/trac/libstoragemgmt/
 
 libstoragemgmt (LSM) today supports C and Python plugins for talking to
 external storage arrays using SMI-S as well as native interfaces (eg:
 the netapp plugin).
 The plan is to grow the SMI-S interface as needed over time and to add
 more vendor-specific plugins for exploiting features not possible via
 SMI-S, or where there are better alternatives to SMI-S.
 For eg: many of the copy-offload features require vendor-specific
 commands, which justifies the need for a vendor-specific plugin.
 
 
 2) Goals:
 
  2a) Ability to plug external storage arrays into the oVirt/VDSM
 virtualization stack, in a vendor-neutral way.
 
  2b) Ability to list features/capabilities and other statistical
 info of the array.
 
  2c) Ability to utilize the storage array offload capabilities from
 oVirt/VDSM.
 
 
 3) Details:
 
 LSM will sit as a new repository engine in VDSM.
 VDSM Repository Engine WIP @ http://gerrit.ovirt.org/#change,192
 
 Current plan is to have LSM co-exist with VDSM on the virtualization
 nodes.
 
 *Note : 'storage' used below is generic. It can be a file/nfs-export
 for
 NAS targets and LUN/logical-drive for SAN targets.
 
 VDSM can use LSM and do the following...
  - Provision storage
  - Consume storage
 
 3.1) Provisioning Storage using LSM
 
 Typically this will be done by a Storage administrator.
 
 oVirt/VDSM should provide the storage admin the
  - ability to list the different storage arrays along with their
 types (NAS/SAN), capabilities, free/used space.
  - ability to provision storage using any of the array capabilities
 (eg: thin provisioned lun or new NFS export)
  - ability to manage the provisioned storage (eg: resize/delete
 storage)
 
 Once the storage is provisioned by the storage admin, VDSM will have
 to
 refresh the host(s) for them to be able to see the newly provisioned
 storage.
[SM] What about the clustered case? The management or the mailbox will have to 
be involved. Pros/Cons? Is there a capability for the storage to announce a 
change in topology? Can libstoragemgmt consume it? Does it even make sense?
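The provision-then-refresh step discussed just above could be sketched as follows. Only getDeviceList is a name taken from the writeup; every other function and identifier here is an illustrative stub invented for this sketch, not a real VDSM or libstoragemgmt API:

```python
# Hypothetical sketch of "provision on the array, then refresh each host".
# Only getDeviceList is a real VDSM verb name; the rest are stubs.
arrays = {}   # array -> {lun_id: {"size_gb": ..., "mapped_to": set(hosts)}}


def lsm_create_lun(array, size_gb, hosts):
    # Array side: create the LUN and map/zone it to the given hosts.
    luns = arrays.setdefault(array, {})
    lun_id = "%s-lun%d" % (array, len(luns))
    luns[lun_id] = {"size_gb": size_gb, "mapped_to": set(hosts)}
    return lun_id


def getDeviceList(host):
    # Host side: rescan and return the LUNs now visible to this host.
    return sorted(
        lun_id
        for luns in arrays.values()
        for lun_id, info in luns.items()
        if host in info["mapped_to"]
    )


def provision(array, size_gb, hosts):
    lun_id = lsm_create_lun(array, size_gb, hosts)        # Mgmt -> vdsm -> lsm
    visible = {h: getDeviceList(h) for h in hosts}        # refresh each host
    return lun_id, visible


lun, visible = provision("array1", 100, ["host1", "host2"])
```

This also makes [SM]'s clustered-case question concrete: the refresh loop has to run on every relevant host, not just the one that issued the create.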
 
 3.1.1) Potential flows:
 
 Mgmt -> vdsm -> lsm: create LUN + LUN Mapping / Zoning / whatever is
 needed to make LUN available to list of hosts passed by mgmt
 Mgmt -> vdsm: getDeviceList (refreshes host and gets list of devices)
   Repeat above for all relevant hosts (depending on list passed
   earlier,
 mostly relevant