Re: [openstack-dev] [Manila] Question to driver maintainers

2015-05-21 Thread Jason Bishop
Hi Igor, the Hitachi SOP driver can support share extending online without
disruption.

cheers
jason


On Mon, May 18, 2015 at 1:15 AM, Igor Malinovskiy  wrote:

> Hello, everyone!
>
> My letter is mainly addressed to driver maintainers, but could be
> interesting to everybody.
>
> As you probably know, at the Kilo midcycle meetup we discussed share resize
> functionality (extend and shrink), and I have already implemented the 'extend'
> API in the Generic driver (https://review.openstack.org/182383/). During the
> implementation review we noticed that some backends are able to resize a share
> without causing disruption, but others might only be able to do it disruptively
> (as in the Generic driver's case).
>
> So I want to ask driver maintainers here:
>
> Will your driver be able to extend shares without loss of connectivity?
>
> Depending on your answers, we will handle this situation differently.
>
> Best regards,
>
> Igor Malinovskiy (IRC: u_glide)
> Manila Core Team
>
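
For reference, the driver-side extend hook under discussion might look roughly
like the sketch below; the body and the _resize_filesystem helper are
assumptions for illustration, not actual Manila code:

def extend_share(self, share, new_size, share_server=None):
    """Grow an existing share to new_size (in GiB) -- sketch only."""
    # A backend that can grow an exported filesystem in place can do this
    # without interrupting client I/O; one that cannot would have to remount
    # or migrate the share, i.e. extend disruptively.
    self._resize_filesystem(share, new_size)  # hypothetical backend helper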


[openstack-dev] [manila] two level share scheduling

2015-02-10 Thread Jason Bishop
Hi manila, I would like to broach the subject of share load balancing.
Currently, the share server for a newly created (in this case) NFS share is
determined at share creation time.  In this proposal, the share server is
instead determined "late binding" style at mount time.

For the sake of discussion, let's call the proposed idea "two-level share
scheduling".

TL;DR: remove the share server from export_location in the database and query
the driver for it at mount time.


First, a quick description of current behavior:


When a share is created (from scratch), the manila scheduler identifies a
backend from its list and makes an API call to the create_share method of the
appropriate driver.  The driver executes the required steps and returns the
export_location, which is then written to the database.
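
As a rough sketch of that flow (simplified, not the actual Generic driver
code; the dictionary keys and elided steps are illustrative):

def create_share(self, context, share, share_server=None):
    """Provision the share and return its export location (sketch)."""
    # ... driver-specific steps to create and export the share ...
    server_ip = share_server['backend_details']['public_address']
    # Today the address and path come back as one string, which is stored
    # verbatim in the shares table as export_location.
    return '%s:/shares/share-%s' % (server_ip, share['id'])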


For example, this create command:

$ manila create --name myshare --share-network
fb7ea7de-19fb-4650-b6ac-16f918e66d1d NFS 1


would result in this:

$ manila list

+--------------------------------------+---------+------+-------------+-----------+-------------+---------------------------------------------------------------+---------------------------------+
| ID                                   | Name    | Size | Share Proto | Status    | Volume Type | Export location                                               | Host                            |
+--------------------------------------+---------+------+-------------+-----------+-------------+---------------------------------------------------------------+---------------------------------+
| 6d6f57f2-3ac5-46c1-ade4-2e9d48776e21 | myshare | 1    | NFS         | available | None        | 10.254.0.3:/shares/share-6d6f57f2-3ac5-46c1-ade4-2e9d48776e21 | jasondevstack@generic1#GENERIC1 |
+--------------------------------------+---------+------+-------------+-----------+-------------+---------------------------------------------------------------+---------------------------------+


with this associated database record:


mysql> select * from shares\G
*************************** 1. row ***************************
         created_at: 2015-02-10 07:06:21
         updated_at: 2015-02-10 07:07:25
         deleted_at: NULL
            deleted: False
                 id: 6d6f57f2-3ac5-46c1-ade4-2e9d48776e21
            user_id: 848b808e91e5462f985b6131f8a905e8
         project_id: ed01cbf358f74ff08263f9672b2cdd01
               host: jasondevstack@generic1#GENERIC1
               size: 1
  availability_zone: nova
             status: available
       scheduled_at: 2015-02-10 07:06:21
        launched_at: 2015-02-10 07:07:25
      terminated_at: NULL
       display_name: myshare
display_description: NULL
        snapshot_id: NULL
   share_network_id: fb7ea7de-19fb-4650-b6ac-16f918e66d1d
    share_server_id: c2602adb-0602-4128-9d1c-4024024a069a
        share_proto: NFS
    export_location: 10.254.0.3:/shares/share-6d6f57f2-3ac5-46c1-ade4-2e9d48776e21
     volume_type_id: NULL
1 row in set (0.00 sec)




Proposed scheme:

The proposal is simple in concept.  Instead of the driver
(GenericShareDriver, for example) returning both the share server IP address
and the path in the share's export_location, only the path is returned and
saved in the database.  The share server IP address is bound only at share
mount time.  In practical terms, this means the share server is determined by
an API call to the driver when _get_shares is called.  The driver would then
have the option of deciding which IP address from its basket of addresses to
return.  In this way, each share mount event presents an opportunity for the
NFS traffic to be balanced over all available network endpoints.

A possible signature for this new call might look like this (with the
GenericShareDriver having the simple implementation of returning
server['public_address']):


def get_share_server_address(self, ctx, share, share_server):
    """Return the IP address of a share server for the given share."""
    # implementation-dependent logic to determine the IP address
    address = self._myownloadfilter()
    return address
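
Under the proposal, the manager/API layer would then assemble the export
location per request instead of reading it verbatim from the database,
roughly along these lines (purely illustrative; export_path is an assumed
column, not an existing one):

def _build_export_location(self, ctx, share, share_server):
    # Only the path would be stored in the database under this proposal.
    path = share['export_path']  # e.g. /shares/share-<share id>
    # The address is resolved at mount/list time, so the driver can spread
    # clients across its available network endpoints.
    address = self.driver.get_share_server_address(ctx, share, share_server)
    return '%s:%s' % (address, path)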



Off the top of my head, I see potential uses including:

o balance load over several glusterfs servers

o balance load over several NFS/CIFS share servers which have multiple
NICs (see the sketch after this list)

o balance load over several generic share servers which are exporting
read-only volumes (such as software repositories)

o I think Isilon should also benefit, but I will defer to somebody more
knowledgeable on the subject
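
For the multi-NIC/multi-server cases above, _myownloadfilter() could be as
simple as a round-robin over the addresses a share server exposes (a toy
example, not tied to any existing driver):

import itertools


class RoundRobinAddressFilter(object):
    """Toy load filter: hand out a share server's addresses in rotation."""

    def __init__(self, addresses):
        self._cycle = itertools.cycle(addresses)

    def next_address(self):
        return next(self._cycle)


# Usage: rr = RoundRobinAddressFilter(['10.254.0.3', '10.254.0.4'])
#        rr.next_address()  # -> '10.254.0.3', then '10.254.0.4', ...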


I see the following cons:

   o slow manila list performance

   o very slow manila list performance if all share drivers are busy doing
long operations such as create/delete share


Interested in your thoughts.

Jason