Re: [openstack-dev] [manila] two level share scheduling

2015-02-10 Thread Ben Swartzlander


On 02/10/2015 06:14 AM, Valeriy Ponomaryov wrote:

Hello Jason,

On Tue, Feb 10, 2015 at 10:07 AM, Jason Bishop wrote:


When a share is created (from scratch), the manila scheduler
identifies a share server from its list of backends and makes an
API call to the create_share method in the appropriate driver.  The
driver executes the required steps and returns the export_location,
which is then written to the database.

That is not a correct description of the current approach. The scheduler 
handles only capabilities and extra specs; there is no logic for filtering 
based on share servers at the moment.

The correct description would be the following:
The scheduler (manila-scheduler) chooses a host, then sends a "create 
share" request to the chosen manila-share service, which handles everything 
related to share servers based on the share driver's logic.


This is something I'd like to change. The scheduler should know where 
the existing (usable) share servers are, and should be able to prefer a 
backend with an existing share server over a backend with no existing 
share server for share types that require share servers. The 
administrator should control how strongly this information is weighed 
within the scheduler.
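
To make that concrete, here is a minimal sketch of the kind of weigher I 
have in mind. Everything here is hypothetical: manila has no such weigher 
today, and the share_servers attribute on the host state is assumed, not 
real.

class ExistingShareServerWeigher(object):
    """Prefer backends that already run a share server for the
    requested share network. The multiplier is the admin's knob for
    how strongly to prefer reuse."""

    def __init__(self, multiplier=1.0):
        self.multiplier = multiplier

    def weigh(self, host_state, request_spec):
        share_network_id = request_spec.get('share_network_id')
        # Assumes host_state reports the share servers on that backend.
        servers = getattr(host_state, 'share_servers', [])
        has_server = any(s['share_network_id'] == share_network_id
                         for s in servers)
        return self.multiplier if has_server else 0.0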


On Tue, Feb 10, 2015 at 10:07 AM, Jason Bishop wrote:


Proposed scheme:

The proposal is simple in concept.  Instead of the driver
(GenericShareDriver for example) returning both the share server IP
address and the path in the share's export_location, only the path is
returned and saved in the database.  The binding of the share server
IP address is determined only at share mount time.  In practical
terms this means the share server is determined by an API call to the
driver when _get_shares is called.  The driver would then have the
option of determining which IP address from its basket of addresses
to return.  In this way, each share mount event presents an
opportunity for the NFS traffic to be balanced over all available
network endpoints.

This is specific to the GenericShareDriver, where the public IP address 
is used once to combine the export_location from the path and that IP. 
Other share drivers do not store it and are not forced to do so at all. 
For example, the share driver for NetApp Clustered Data ONTAP stores 
only one specific piece of information, the name of the vserver; the IP 
address is fetched each time via the backend's API.


It is true that right now we can store only one export location. I 
agree that it would be useful to have more than one export_location, 
so the idea of having multiple export_locations is good.


We absolutely need multiple export locations. But I want that feature 
for other reasons than what Jason mentions. Typically, load balancing 
can be achieved by in-band techniques such as pNFS, which only needs one 
export location to get started. The main value of multiple export 
locations for me is to cover cases where a client wants to perform a 
mount during a failover event, when one or more export locations are 
temporarily unreachable.
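
A client-side sketch of that case, assuming the API can hand back a list 
of export locations (the helper below is made up, not existing tooling):

import subprocess

def mount_first_reachable(export_locations, mountpoint):
    """Try each export location in turn so a mount can still succeed
    while one endpoint is down during a failover."""
    for location in export_locations:
        try:
            subprocess.check_call(
                ['mount', '-t', 'nfs', location, mountpoint])
            return location
        except subprocess.CalledProcessError:
            continue  # endpoint unreachable, try the next one
    raise RuntimeError('no export location was mountable')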





On Tue, Feb 10, 2015 at 10:07 AM, Jason Bishop wrote:


I see the following cons:

   o slow manila list performance

   o very slow manila list performance if all share drivers are busy
doing long operations such as create/delete share

First of all, the manila-api service knows nothing about share drivers 
or backends; those are the concern of a different service/process, 
manila-share. manila-api uses the DB to get its information.
So you just cannot ask share drivers via a "list" API call. The API 
either reads the DB and returns something, or sends an RPC and returns 
some data without waiting for the RPC's result.
If you want to change IP addresses, then you need to update the DB with 
them. Hence, this looks like it requires a "periodic" task that does so 
continuously.


Yes. Changing IP addresses would be a big problem because right now 
manila doesn't provide a way for the driver to alter the export location 
after the share is created.
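
If we went that way, the shape of it might be something like this; both 
the driver call and the DB usage here are illustrative, not existing 
manila interfaces:

def sync_export_locations(context, db, driver, host):
    """Periodic task body: let the driver refresh each share's
    export location in the DB after an IP change."""
    for share in db.share_get_all_by_host(context, host):
        new_location = driver.get_current_export_location(context, share)
        if new_location and new_location != share['export_location']:
            db.share_update(context, share['id'],
                            {'export_location': new_location})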


I would prefer to have more than one export location and to allow users 
to choose any of them. I also assume IP addresses can simply change; in 
that case we should allow export locations to be updated.


And second, if we implement multiple export locations for shares, it is 
better not to return them in the "list" API response and to do so only 
for "get" requests.


Agreed.
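
Sketching the two response shapes (field names and the second address 
are illustrative only, not the actual manila API schema):

LIST_RESPONSE = {
    'shares': [
        # no export_locations here, so "list" stays a cheap DB read
        {'id': '6d6f57f2-3ac5-46c1-ade4-2e9d48776e21',
         'name': 'myshare', 'status': 'available'},
    ],
}

GET_RESPONSE = {
    'share': {
        'id': '6d6f57f2-3ac5-46c1-ade4-2e9d48776e21',
        'name': 'myshare',
        'status': 'available',
        'export_locations': [
            '10.254.0.3:/shares/share-6d6f57f2-3ac5-46c1-ade4-2e9d48776e21',
            '10.254.0.4:/shares/share-6d6f57f2-3ac5-46c1-ade4-2e9d48776e21',
        ],
    },
}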


Valeriy



Re: [openstack-dev] [manila] two level share scheduling

2015-02-10 Thread Valeriy Ponomaryov
Hello Jason,

On Tue, Feb 10, 2015 at 10:07 AM, Jason Bishop wrote:
>
> When a share is created (from scratch), the manila scheduler identifies a
> share server from its list of backends and makes an API call to the
> create_share method in the appropriate driver.  The driver executes the
> required steps and returns the export_location, which is then written to
> the database.
>
That is not a correct description of the current approach. The scheduler
handles only capabilities and extra specs; there is no logic for filtering
based on share servers at the moment.
The correct description would be the following:
The scheduler (manila-scheduler) chooses a host, then sends a "create
share" request to the chosen manila-share service, which handles everything
related to share servers based on the share driver's logic.
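
Roughly, the current flow has this shape (heavily simplified; the real
manila code differs, and pick_host is a stand-in for the filter/weigher
logic):

def pick_host(hosts, request_spec):
    """Stand-in: filter/weigh on capabilities and extra specs only;
    no knowledge of share servers."""
    return hosts[0]

def schedule_create_share(context, request_spec, hosts, rpcapi):
    # manila-scheduler picks a host, then casts to manila-share.
    host = pick_host(hosts, request_spec)
    rpcapi.create_share(context, host, request_spec['share_id'])

def create_share_on_host(context, driver, share, share_network):
    # manila-share handles share servers via the driver.
    share_server = driver.setup_server(share_network)
    return driver.create_share(context, share, share_server=share_server)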

On Tue, Feb 10, 2015 at 10:07 AM, Jason Bishop wrote:
>
> Proposed scheme:
>
> The proposal is simple in concept.  Instead of the driver
> (GenericShareDriver for example) returning both the share server IP
> address and the path in the share's export_location, only the path is
> returned and saved in the database.  The binding of the share server IP
> address is determined only at share mount time.  In practical terms this
> means the share server is determined by an API call to the driver when
> _get_shares is called.  The driver would then have the option of
> determining which IP address from its basket of addresses to return.  In
> this way, each share mount event presents an opportunity for the NFS
> traffic to be balanced over all available network endpoints.
>
This is specific to the GenericShareDriver, where the public IP address
is used once to combine the export_location from the path and that IP.
Other share drivers do not store it and are not forced to do so at all.
For example, the share driver for NetApp Clustered Data ONTAP stores only
one specific piece of information, the name of the vserver; the IP
address is fetched each time via the backend's API.

It is true that right now we can store only one export location. I agree
that it would be useful to have more than one export_location, so the
idea of having multiple export_locations is good.


On Tue, Feb 10, 2015 at 10:07 AM, Jason Bishop wrote:
>
> I see the following cons:
>
>    o slow manila list performance
>
>    o very slow manila list performance if all share drivers are busy
> doing long operations such as create/delete share
>
First of all, the manila-api service knows nothing about share drivers
or backends; those are the concern of a different service/process,
manila-share. manila-api uses the DB to get its information.
So you just cannot ask share drivers via a "list" API call. The API
either reads the DB and returns something, or sends an RPC and returns
some data without waiting for the RPC's result.
If you want to change IP addresses, then you need to update the DB with
them. Hence, this looks like it requires a "periodic" task that does so
continuously.

I would prefer to have more than one export location and to allow users
to choose any of them. I also assume IP addresses can simply change; in
that case we should allow export locations to be updated.

And second, if we implement multiple export locations for shares, it is
better not to return them in the "list" API response and to do so only
for "get" requests.

Valeriy


[openstack-dev] [manila] two level share scheduling

2015-02-10 Thread Jason Bishop
Hi manila, I would like to broach the subject of share load balancing.
Currently, the share server for a newly created (in this case NFS) share
is determined at share creation time.  In this proposal, the share server
is determined "late binding style" at mount time instead.

For the sake of discussion, let's call the proposed idea "two-level share
scheduling".

TL;DR: remove the share server from the export_location in the database
and query the driver for it at mount time


First, a quick description of the current behavior:


When a share is created (from scratch), the manila scheduler identifies a
share server from its list of backends and makes an API call to the
create_share method in the appropriate driver.  The driver executes the
required steps and returns the export_location, which is then written to
the database.


For example, this create command:

$ manila create --name myshare --share-network fb7ea7de-19fb-4650-b6ac-16f918e66d1d NFS 1


would result in this:

$ manila list

+--------------------------------------+---------+------+-------------+-----------+-------------+---------------------------------------------------------------+---------------------------------+
| ID                                   | Name    | Size | Share Proto | Status    | Volume Type | Export location                                               | Host                            |
+--------------------------------------+---------+------+-------------+-----------+-------------+---------------------------------------------------------------+---------------------------------+
| 6d6f57f2-3ac5-46c1-ade4-2e9d48776e21 | myshare | 1    | NFS         | available | None        | 10.254.0.3:/shares/share-6d6f57f2-3ac5-46c1-ade4-2e9d48776e21 | jasondevstack@generic1#GENERIC1 |
+--------------------------------------+---------+------+-------------+-----------+-------------+---------------------------------------------------------------+---------------------------------+


with this associated database record:


mysql> select * from shares\G
*************************** 1. row ***************************
         created_at: 2015-02-10 07:06:21
         updated_at: 2015-02-10 07:07:25
         deleted_at: NULL
            deleted: False
                 id: 6d6f57f2-3ac5-46c1-ade4-2e9d48776e21
            user_id: 848b808e91e5462f985b6131f8a905e8
         project_id: ed01cbf358f74ff08263f9672b2cdd01
               host: jasondevstack@generic1#GENERIC1
               size: 1
  availability_zone: nova
             status: available
       scheduled_at: 2015-02-10 07:06:21
        launched_at: 2015-02-10 07:07:25
      terminated_at: NULL
       display_name: myshare
display_description: NULL
        snapshot_id: NULL
   share_network_id: fb7ea7de-19fb-4650-b6ac-16f918e66d1d
    share_server_id: c2602adb-0602-4128-9d1c-4024024a069a
        share_proto: NFS
    export_location: 10.254.0.3:/shares/share-6d6f57f2-3ac5-46c1-ade4-2e9d48776e21
     volume_type_id: NULL
1 row in set (0.00 sec)




Proposed scheme:

The proposal is simple in concept.  Instead of the driver
(GenericShareDriver for example) returning both the share server IP
address and the path in the share's export_location, only the path is
returned and saved in the database.  The binding of the share server IP
address is determined only at share mount time.  In practical terms this
means the share server is determined by an API call to the driver when
_get_shares is called.  The driver would then have the option of
determining which IP address from its basket of addresses to return.  In
this way, each share mount event presents an opportunity for the NFS
traffic to be balanced over all available network endpoints.

A possible signature for this new call might look like this (with the
GenericShareDriver having the simple implementation of returning
server['public_address']):


def get_share_server_address(self, ctx, share, share_server):
    """Return the IP address of a share server for the given share."""
    # implementation-dependent logic to determine the IP address
    address = self._myownloadfilter()
    return address
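
For example, one trivial selection policy would be to round-robin over
the server's addresses. A minimal sketch, with made-up names:

import itertools

class RoundRobinAddressPicker(object):
    """Rotate through a share server's available IP addresses so
    that successive mounts spread across network endpoints."""

    def __init__(self, addresses):
        self._cycle = itertools.cycle(addresses)

    def pick(self):
        return next(self._cycle)

A driver's _myownloadfilter() could then just call pick() on a picker it
keeps per share server.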



Off the top of my head I see potential uses including:

o balance load over several glusterfs servers

o balance load over several NFS/CIFS share servers which have multiple
NICs

o balance load over several generic share servers which are exporting
read-only volumes (such as software repositories)

o I think Isilon should also benefit, but I will defer to somebody more
knowledgeable on the subject


I see the following cons:

   o slow manila list performance

   o very slow manila list performance if all share drivers are busy
doing long operations such as create/delete share


Interested in your thoughts

Jason