Re: [openstack-dev] [Cinder] Support LVM on a shared LU

2014-11-14 Thread Mitsuhiro Tanino
Hello, Duncan, Mike,

11:13 (DuncanT) DuncanT mtanino, You can and should submit the code even if
the spec isn't approved as long as it isn't looking contentious, but I will 
certainly take a look

Based on the comment at cinder__unofficial_meeting on Wednesday,
I have posted both the updated cinder-spec and the code.
Could you review the spec and the code?

 Spec:  https://review.openstack.org/#/c/129352/
 Code:  https://review.openstack.org/#/c/92479/

The code is still a work in progress, but most of the functions are
already implemented. Please check that the code does not break the
existing Cinder code.

For your reference, here are all the links related to this proposal.

 Blueprints:
  * https://blueprints.launchpad.net/nova/+spec/lvm-driver-for-shared-storage
  * https://blueprints.launchpad.net/cinder/+spec/lvm-driver-for-shared-storage
 Spec:
  * nova:   https://review.openstack.org/#/c/127318/
  * cinder: https://review.openstack.org/#/c/129352/
 Gerrit Review:
  * nova:   https://review.openstack.org/#/c/92443/
  * cinder: https://review.openstack.org/#/c/92479/

Regards,
Mitsuhiro Tanino mitsuhiro.tan...@hds.com
HITACHI DATA SYSTEMS
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Support LVM on a shared LU

2014-06-02 Thread Mitsuhiro Tanino
Hi Deepak-san,

Thank you for your comments. Please see my replies below.

1) There is a lot of manual work needed here.. like every time a new host is
added.. admin needs to do FC zoning to ensure that the LU is visible to the host.

Right. Compared to the LVMiSCSI driver, the proposed driver requires some manual
admin work.

Also the method you mentioned for refreshing (echo '- - -' > ...) doesn't work
reliably across all storage types, does it?

The echo command is already used in rescan_hosts() in linuxfc.py before
connecting a new volume to an instance.
As you mentioned, whether this command works properly depends on the
storage type. Therefore, the admin needs to confirm that the command works
properly in their environment.
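For illustration, here is a minimal sketch of the kind of rescan an admin would
run after zoning in a new LU. This is only an assumption about the manual
workflow, not code taken from linuxfc.py, and the lsblk check is just one
possible way to verify that the device appeared:

  # Rescan every SCSI host so the newly mapped LU shows up without a reboot.
  for host in /sys/class/scsi_host/host*; do
      echo "- - -" > "${host}/scan"
  done

  # Verify the new device is visible (for example by size or WWN).
  lsblk -o NAME,SIZE,WWN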

2) In Slide 1-1 .. how (and who?) ensures that the compute nodes don't step
on each other when using the LVs? In other words.. how is it ensured that LV1
is not used by compute nodes 1 and 2 at the same time?

In my understanding, Nova can't assign a single Cinder volume (e.g. VOL1) to
multiple instances.
After VOL1 is attached to an instance, its status changes to in-use and the
user can't attach VOL1 to any other instance.
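For example, a minimal CLI sequence that shows this behaviour (the instance
names vm1/vm2 and the volume name are placeholders for illustration):

  # Create a 10 GB volume and attach it to instance vm1.
  cinder create --display-name vol1 10
  nova volume-attach vm1 <volume-id>

  # The volume status is now 'in-use'; attaching it to vm2 is rejected.
  cinder show <volume-id> | grep status
  nova volume-attach vm2 <volume-id>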

3) In slide 1-2, you show that LU1 is seen as /dev/sdx on all the nodes..
this is wrong.. it can be seen as anything (/dev/sdx on control node, sdn on
compute 1, sdz on compute 2) so assuming sdx on all nodes is wrong. How are
these different device names handled.. in short, how does compute node 2 know
that LU1 is actually sdn and not sdz (assuming you had more than one LU
provisioned)?

Right. The same device name may not be assigned on all nodes.
With my proposed driver, the admin needs to create the PV and VG manually,
and LVM identifies the VG from the PV metadata rather than the device name.
Therefore, the nodes do not all have to recognize LU1 as /dev/sdx.
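For instance, each node can confirm which of its local devices holds the shared
VG, whatever sdX name that device happens to have on that node (illustrative
command only):

  # The PV label and metadata, not the device name, tie a device to the VG.
  pvs -o pv_name,vg_name,pv_uuid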

4) What about multipath? In most prod env.. the FC storage will be
multipath'ed.. hence you will actually see sdx and sdy on each node and you
actually need to use the mpathN device (which is multipathed to sdx and sdy)
and NOT the sd? device to take advantage of the customer's multipath env. How
do the nodes know which mpath? device to use and which mpath? device maps to
which LU on the array?

As I mentioned above, the admin creates the PV and VG manually with my proposed
driver. If a production environment uses multipath, the admin can create the PV
and VG on top of the mpath device, using pvcreate /dev/mpath/mpathX. (A sketch
of this preparation is below.)
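As an illustration only, here is a minimal sketch of the one-time admin
preparation on the shared LU in a multipath setup. The VG name
cinder-volumes-shared and the per-host activation step are my assumptions about
the workflow, not steps mandated by the proposed driver, and the multipath
device path may be /dev/mapper/mpathX or /dev/mpath/mpathX depending on the
distribution:

  # Identify the multipath device backing the shared LU (the WWID-based mpath
  # name is stable even when the sdX names differ across nodes).
  multipath -ll

  # Create the PV and the shared VG on top of the multipath device (run once).
  pvcreate /dev/mapper/mpathX
  vgcreate cinder-volumes-shared /dev/mapper/mpathX

  # On each Nova/Cinder host, after FC zoning and a rescan, make the VG visible.
  vgscan
  vgchange -ay cinder-volumes-shared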

5) Doesn't this new proposal also cause the compute nodes to be physically
connected (via FC) to the array, which means more wiring and the need for an
FC HBA on compute nodes? With LVMiSCSI, we don't need FC HBAs on compute nodes,
so you are actually adding the cost of an FC HBA to each compute node and
slowly turning a commodity system into a non-commodity one ;-) (in a way)

I think this depends on the customer's or cloud provider's requirements (see
slide P9).
If the requirement is a low-cost, non-FC cloud environment, LVMiSCSI is the
appropriate driver.
If better I/O performance is required, the proposed driver or a vendor Cinder
storage driver with FC is appropriate, because these drivers can issue I/O to
volumes directly via FC.

6) Last but not the least... since you are using 1 BIG LU on the array to
host multiple volumes, you cannot possibly take advantage of the premium,
efficient snapshot/clone/mirroring features of the array, since they are at
the LU level, not at the LV level. LV snapshots have limitations (as mentioned
by you in the other thread) and are always inefficient compared to array
snapshots. Why would someone want to use a less efficient method when they
have invested in an expensive array?

Right. If a user uses an array volume directly, they can take advantage of the
array's efficient snapshot/clone/mirroring features.

As I wrote in my reply to Avishay-san, in an OpenStack cloud environment the
workload on the storage arrays has been increasing, and it is difficult to
manage because every user has permission to execute storage operations via
Cinder.
In order to use an expensive array more efficiently, I think it is better to
reduce the hardware-based storage workload by offloading it to software-based
volume operations on a case-by-case basis.

If we have two drivers for the same storage, we can provide volumes either way
as the situation demands.
Ex.
  For a Standard storage type, use the proposed software-based LVM Cinder driver.
  For a High-performance storage type, use the hardware-based Cinder driver
(e.g. at a higher charge than a Standard volume).

This is one use case for my proposed driver (a configuration sketch follows).
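Purely as an illustration of this two-tier idea, here is a possible
multi-backend layout. The driver class names LVMSharedLUDriver and
VendorFCDriver are placeholders I made up for the sketch, not the names used in
the actual patches:

  # cinder.conf (sketch)
  [DEFAULT]
  enabled_backends = lvm_shared,vendor_fc

  [lvm_shared]
  volume_backend_name = standard
  volume_driver = cinder.volume.drivers.lvm.LVMSharedLUDriver
  volume_group = cinder-volumes-shared

  [vendor_fc]
  volume_backend_name = high_perf
  volume_driver = cinder.volume.drivers.vendor.VendorFCDriver

  # Expose the two tiers to users as volume types.
  cinder type-create standard
  cinder type-key standard set volume_backend_name=standard
  cinder type-create high-perf
  cinder type-key high-perf set volume_backend_name=high_perf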

Regards,
Mitsuhiro Tanino mitsuhiro.tan...@hds.com
 HITACHI DATA SYSTEMS
 c/o Red Hat, 314 Littleton Road, Westford, MA 01886

From: Deepak Shetty [mailto:dpkshe...@gmail.com]
Sent: Wednesday, May 28, 2014 3:11 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder] Support LVM on a shared LU

Mitsuhiro,
  Few questions that come to my mind based on your proposal

1) There is a lot of manual work needed here.. like every time a new host is
added.. admin needs to do FC zoning to ensure that the LU

Re: [openstack-dev] [Cinder] Support LVM on a shared LU

2014-05-28 Thread Deepak Shetty
Mitsuhiro,
  Few questions that come to my mind based on your proposal

1) There is a lot of manual work needed here.. like every time a new host is
added.. admin needs to do FC zoning to ensure that the LU is visible to the
host. Also the method you mentioned for refreshing (echo '- - -' > ...)
doesn't work reliably across all storage types, does it?

2) In Slide 1-1 .. how (and who?) ensures that the compute nodes don't
step on each other when using the LVs? In other words.. how is it ensured
that LV1 is not used by compute nodes 1 and 2 at the same time?

3) In slide 1-2, you show that LU1 is seen as /dev/sdx on all the
nodes.. this is wrong.. it can be seen as anything (/dev/sdx on control
node, sdn on compute 1, sdz on compute 2) so assuming sdx on all nodes is
wrong.
How are these different device names handled.. in short, how does compute
node 2 know that LU1 is actually sdn and not sdz (assuming you had more than
one LU provisioned)?

4) What about multipath? In most prod env.. the FC storage will be
multipath'ed.. hence you will actually see sdx and sdy on each node and you
actually need to use the mpathN device (which is multipathed to sdx and sdy)
and NOT the sd? device to take advantage of the customer's multipath env. How
do the nodes know which mpath? device to use and which mpath? device maps to
which LU on the array?

5) Doesn't this new proposal also cause the compute nodes to be physically
connected (via FC) to the array, which means more wiring and the need for an
FC HBA on compute nodes? With LVMiSCSI, we don't need FC HBAs on compute
nodes, so you are actually adding the cost of an FC HBA to each compute node
and slowly turning a commodity system into a non-commodity one ;-) (in a way)

6) Last but not the least... since you are using 1 BIG LU on the array to
host multiple volumes, you cannot possibly take advantage of the premium,
efficient snapshot/clone/mirroring features of the array, since they are at
the LU level, not at the LV level. LV snapshots have limitations (as mentioned
by you in the other thread) and are always inefficient compared to array
snapshots. Why would someone want to use a less efficient method when they
have invested in an expensive array?

thanx,
deepak



On Tue, May 20, 2014 at 9:01 PM, Mitsuhiro Tanino
mitsuhiro.tan...@hds.com wrote:

  Hello All,



 I’m proposing a feature of the LVM driver to support LVM on a shared LU.

 The proposed LVM volume driver provides these benefits.
   - Reduce the hardware-based storage workload by offloading it to
 software-based volume operations.
   - Provide quicker volume creation and snapshot creation without storage
 workloads.
   - Enable Cinder to use any kind of shared storage volume without a
 storage-specific Cinder driver.

   - Better I/O performance using direct volume access via Fibre Channel.



 The attached PDF explains the following contents:

   1. Detail of Proposed LVM volume driver

   1-1. Big Picture

   1-2. Administrator preparation

   1-3. Work flow of volume creation and attachment

   2. Target of Proposed LVM volume driver

   3. Comparison of Proposed LVM volume driver



 Could you review the attachment?

 Any comments, questions, additional ideas would be appreciated.





 Also there are blueprints, wiki and patches related to the slide.

 https://blueprints.launchpad.net/cinder/+spec/lvm-driver-for-shared-storage

 https://blueprints.launchpad.net/nova/+spec/lvm-driver-for-shared-storage


 https://wiki.openstack.org/wiki/Cinder/NewLVMbasedDriverForSharedStorageInCinder

 https://review.openstack.org/#/c/92479/

 https://review.openstack.org/#/c/92443/



 Regards,

 Mitsuhiro Tanino mitsuhiro.tan...@hds.com

  *HITACHI DATA SYSTEMS*

  c/o Red Hat, 314 Littleton Road, Westford, MA 01886

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Support LVM on a shared LU

2014-05-25 Thread Avishay Traeger
Hello Mitsuhiro,
I'm sorry, but I remain unconvinced.  Is there a customer demand for this
feature?
If you'd like, feel free to add this topic to a Cinder weekly meeting
agenda, and join the meeting so that we can have an interactive discussion.
https://wiki.openstack.org/wiki/CinderMeetings

Thanks,
Avishay


On Sat, May 24, 2014 at 12:31 AM, Mitsuhiro Tanino mitsuhiro.tan...@hds.com
 wrote:

  Hi Avishay-san,



 Thank you for your review and comments for my proposal. I commented
 in-line.



 So the way I see it, the value here is a generic driver that can work
 with any storage.  The downsides:



 A generic driver for any storage is one of the benefits.

 But the main benefit of the proposed driver is as follows.

 - Reduce the hardware-based storage workload by offloading it to
 software-based volume operations.



 Conventionally, operations on an enterprise storage array, such as volume
 creation, deletion, and snapshots, are permitted only to the system
 administrator, who performs them after careful examination.

 In an OpenStack cloud environment, every user has permission to execute
 these storage operations via Cinder. As a result, the workload on the
 storage arrays has been increasing and it is difficult to manage.



 If we have two drivers for the same storage, we can use either one as the
 situation demands.

 Ex.

   For a Standard storage type, use the proposed software-based LVM Cinder
 driver.

   For a High-performance storage type, use the hardware-based Cinder driver.

 As a result, we can offload the workload of the Standard storage type from
 the physical storage to the Cinder host.



 1. The admin has to manually provision a very big volume and attach it
 to the Nova and Cinder hosts.

   Every time a host is rebooted,



 I think the current FC-based Cinder drivers use a SCSI scan to find a newly
 created LU:

 # echo "- - -" > /sys/class/scsi_host/host#/scan

 The admin can find an additional LU this way, so a host reboot is not
 required.



  or introduced, the admin must do manual work. This is one of the things
 OpenStack should be trying

  to avoid. This can't be automated without a driver, which is what
 you're trying to avoid.



 Yes. Some manual admin work is required and can’t be automated.

 I would like to know whether these operations are within an acceptable range
 to enjoy the benefits of my proposed driver.



 2. You lose on performance to volumes by adding another layer in the
 stack.



 I think this is case by case. When users use a Cinder volume for a database,
 they prefer a raw volume, and the proposed driver can’t provide a raw Cinder
 volume. In this case, I recommend the High-performance storage type.

 LVM is a default feature in many Linux distributions. LVM is also used in
 many enterprise systems, and I think there is no critical performance loss.



 3. You lose performance with snapshots - appliances will almost
 certainly have more efficient snapshots

  than LVM over network (consider that for every COW operation, you are
 reading synchronously over the network).

  (Basically, you turned your fully-capable storage appliance into a dumb
 JBOD)



 I agree that the storage array has an efficient COW snapshot feature, so we
 can create a new boot volume from Glance quickly. In this case, I recommend
 the High-performance storage type.

 LVM can’t create nested snapshots with shared LVM right now. Therefore, we
 can’t assign writable LVM snapshots to instances.

 Does this answer your comment?



  In short, I think the cons outweigh the pros.  Are there people
 deploying OpenStack who would deploy

  their storage like this?



 Please consider the main benefit described above.



 Regards,

 Mitsuhiro Tanino mitsuhiro.tan...@hds.com

  *HITACHI DATA SYSTEMS*

  c/o Red Hat, 314 Littleton Road, Westford, MA 01886



 *From:* Avishay Traeger [mailto:avis...@stratoscale.com]
 *Sent:* Wednesday, May 21, 2014 4:36 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Cc:* Tomoki Sekiyama
 *Subject:* Re: [openstack-dev] [Cinder] Support LVM on a shared LU



 So the way I see it, the value here is a generic driver that can work with
 any storage.  The downsides:

 1. The admin has to manually provision a very big volume and attach it to
 the Nova and Cinder hosts.  Every time a host is rebooted, or introduced,
 the admin must do manual work. This is one of the things OpenStack should
 be trying to avoid. This can't be automated without a driver, which is what
 you're trying to avoid.

 2. You lose on performance to volumes by adding another layer in the stack.

 3. You lose performance with snapshots - appliances will almost certainly
 have more efficient snapshots than LVM over network (consider that for
 every COW operation, you are reading synchronously over the network).



 (Basically, you turned your fully-capable storage appliance into a dumb
 JBOD)



 In short, I think the cons outweigh the pros.  Are there people deploying
 OpenStack who would deploy their storage like this?



 Thanks

Re: [openstack-dev] [Cinder] Support LVM on a shared LU

2014-05-23 Thread Mitsuhiro Tanino
Hi Avishay-san,

Thank you for your review and comments for my proposal. I commented in-line.

So the way I see it, the value here is a generic driver that can work with 
any storage.  The downsides:

A generic driver for any storage is one of the benefits.
But the main benefit of the proposed driver is as follows.
- Reduce the hardware-based storage workload by offloading it to software-based
volume operations.

Conventionally, operations on an enterprise storage array, such as volume
creation, deletion, and snapshots, are permitted only to the system
administrator, who performs them after careful examination.
In an OpenStack cloud environment, every user has permission to execute these
storage operations via Cinder. As a result, the workload on the storage arrays
has been increasing and it is difficult to manage.

If we have two drivers for the same storage, we can use either one as the
situation demands.
Ex.
  For a Standard storage type, use the proposed software-based LVM Cinder driver.
  For a High-performance storage type, use the hardware-based Cinder driver.

As a result, we can offload the workload of the Standard storage type from the
physical storage to the Cinder host.

1. The admin has to manually provision a very big volume and attach it to the 
Nova and Cinder hosts.
  Every time a host is rebooted,

I think the current FC-based Cinder drivers use a SCSI scan to find a newly
created LU:
# echo "- - -" > /sys/class/scsi_host/host#/scan

The admin can find an additional LU this way, so a host reboot is not required.

 or introduced, the admin must do manual work. This is one of the things 
 OpenStack should be trying
 to avoid. This can't be automated without a driver, which is what you're 
 trying to avoid.

Yes. Some manual admin work is required and can’t be automated.
I would like to know whether these operations are within an acceptable range to
enjoy the benefits of my proposed driver.

2. You lose on performance to volumes by adding another layer in the stack.

I think this is case by case. When users use a Cinder volume for a database,
they prefer a raw volume, and the proposed driver can’t provide a raw Cinder
volume. In this case, I recommend the High-performance storage type.

LVM is a default feature in many Linux distributions. LVM is also used in many
enterprise systems, and I think there is no critical performance loss.

3. You lose performance with snapshots - appliances will almost certainly 
have more efficient snapshots
 than LVM over network (consider that for every COW operation, you are 
 reading synchronously over the network).
 (Basically, you turned your fully-capable storage appliance into a dumb JBOD)

I agree that the storage array has an efficient COW snapshot feature, so we can
create a new boot volume from Glance quickly. In this case, I recommend the
High-performance storage type.
LVM can’t create nested snapshots with shared LVM right now. Therefore, we can’t
assign writable LVM snapshots to instances (see the sketch below).
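For reference, a snapshot on the shared VG would be the basic LVM copy-on-write
snapshot sketched below (the VG and LV names are placeholders); as described
above, such a snapshot cannot itself be snapshotted again, so it cannot be
handed to an instance as a writable, nested snapshot:

  # Basic LVM COW snapshot of a Cinder volume LV on the shared VG.
  lvcreate --snapshot --name vol1-snap --size 1G /dev/cinder-volumes-shared/vol1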

Does this answer your comment?

 In short, I think the cons outweigh the pros.  Are there people deploying 
 OpenStack who would deploy
 their storage like this?

Please consider the main benefit described above.

Regards,
Mitsuhiro Tanino mitsuhiro.tan...@hds.com
 HITACHI DATA SYSTEMS
 c/o Red Hat, 314 Littleton Road, Westford, MA 01886

From: Avishay Traeger [mailto:avis...@stratoscale.com]
Sent: Wednesday, May 21, 2014 4:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Tomoki Sekiyama
Subject: Re: [openstack-dev] [Cinder] Support LVM on a shared LU

So the way I see it, the value here is a generic driver that can work with any 
storage.  The downsides:
1. The admin has to manually provision a very big volume and attach it to the 
Nova and Cinder hosts.  Every time a host is rebooted, or introduced, the admin 
must do manual work. This is one of the things OpenStack should be trying to 
avoid. This can't be automated without a driver, which is what you're trying to 
avoid.
2. You lose on performance to volumes by adding another layer in the stack.
3. You lose performance with snapshots - appliances will almost certainly have 
more efficient snapshots than LVM over network (consider that for every COW 
operation, you are reading synchronously over the network).

(Basically, you turned your fully-capable storage appliance into a dumb JBOD)

In short, I think the cons outweigh the pros.  Are there people deploying 
OpenStack who would deploy their storage like this?

Thanks,
Avishay
On Tue, May 20, 2014 at 6:31 PM, Mitsuhiro Tanino 
mitsuhiro.tan...@hds.com wrote:
Hello All,

I’m proposing a feature of the LVM driver to support LVM on a shared LU.
The proposed LVM volume driver provides these benefits.
  - Reduce the hardware-based storage workload by offloading it to
software-based volume operations.
  - Provide quicker volume creation and snapshot creation without storage
workloads.
  - Enable Cinder to use any kind of shared storage volume without a
storage-specific Cinder driver

Re: [openstack-dev] [Cinder] Support LVM on a shared LU

2014-05-21 Thread Avishay Traeger
So the way I see it, the value here is a generic driver that can work with
any storage.  The downsides:
1. The admin has to manually provision a very big volume and attach it to
the Nova and Cinder hosts.  Every time a host is rebooted, or introduced,
the admin must do manual work. This is one of the things OpenStack should
be trying to avoid. This can't be automated without a driver, which is what
you're trying to avoid.
2. You lose on performance to volumes by adding another layer in the stack.
3. You lose performance with snapshots - appliances will almost certainly
have more efficient snapshots than LVM over network (consider that for
every COW operation, you are reading synchronously over the network).

(Basically, you turned your fully-capable storage appliance into a dumb
JBOD)

In short, I think the cons outweigh the pros.  Are there people deploying
OpenStack who would deploy their storage like this?

Thanks,
Avishay

On Tue, May 20, 2014 at 6:31 PM, Mitsuhiro Tanino
mitsuhiro.tan...@hds.com wrote:

  Hello All,



 I’m proposing a feature of the LVM driver to support LVM on a shared LU.

 The proposed LVM volume driver provides these benefits.
   - Reduce the hardware-based storage workload by offloading it to
 software-based volume operations.
   - Provide quicker volume creation and snapshot creation without storage
 workloads.
   - Enable Cinder to use any kind of shared storage volume without a
 storage-specific Cinder driver.

   - Better I/O performance using direct volume access via Fibre Channel.



 The attached PDF explains the following contents:

   1. Detail of Proposed LVM volume driver

   1-1. Big Picture

   1-2. Administrator preparation

   1-3. Work flow of volume creation and attachment

   2. Target of Proposed LVM volume driver

   3. Comparison of Proposed LVM volume driver



 Could you review the attachment?

 Any comments, questions, additional ideas would be appreciated.





 Also there are blueprints, wiki and patches related to the slide.

 https://blueprints.launchpad.net/cinder/+spec/lvm-driver-for-shared-storage

 https://blueprints.launchpad.net/nova/+spec/lvm-driver-for-shared-storage


 https://wiki.openstack.org/wiki/Cinder/NewLVMbasedDriverForSharedStorageInCinder

 https://review.openstack.org/#/c/92479/

 https://review.openstack.org/#/c/92443/



 Regards,

 Mitsuhiro Tanino mitsuhiro.tan...@hds.com

  *HITACHI DATA SYSTEMS*

  c/o Red Hat, 314 Littleton Road, Westford, MA 01886

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev