[Resending my response as unknown forces ate my original message]

On 08/20/2015 08:30 AM, Bjorn Schuberg wrote:
Hello everyone,

This is my first thread on this mailing list, and I would like to take the opportunity to say that it was great to see you all at the midcycle, even if only remotely.

Now, to my question: I've been looking into an issue that arises when deleting access to a share and then, moments later, deleting the share itself. The delete fails due to a race in `_remove_share_access_rules` in the `ShareManager`, which attempts to delete all granted permissions on the share before removing it, but one of the access permissions is concurrently deleted by the first API call; see:
https://github.com/openstack/manila/blob/master/manila/share/manager.py#L600

I think an acceptable fix would be to wrap the `_deny_access` call in a `try`/`except` block and log any attempt to remove a non-existent permission. The problem is that there seems to be no contract on which exception to raise when you attempt to delete an `access` rule that does not exist -- each driver behaves differently.
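Something along the following lines is what I have in mind. This is only a rough sketch, not the actual Manila code: the signature and DB call are from memory, and `exception.NotFound` just stands in for whatever exception we would agree drivers should raise for a missing rule.

    from oslo_log import log
    from manila import exception

    LOG = log.getLogger(__name__)

    # Rough sketch of the guarded cleanup loop in ShareManager.
    def _remove_share_access_rules(self, context, share, share_server):
        rules = self.db.share_access_get_all_for_share(context, share['id'])
        for rule in rules:
            try:
                self._deny_access(context, rule, share, share_server)
            except exception.NotFound:
                # The rule was already removed by a concurrent deny-access
                # API call; nothing left to do for it.
                LOG.warning("Access rule %s was already removed; skipping.",
                            rule['id'])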

This caught my attention after running the Tempest integration tests, where the teardown /sometimes/ fails in tempest.api.share.test_rules:ShareIpRulesForNFSTest.

Any thoughts on this? Perhaps there is a smoother approach that I'm not seeing.

This is a good point. I'm actually interested in pursuing a deeper overhaul of the allow/deny access logic for Mitaka which, in my opinion, will make access rules less error-prone. I'm open to short-term bug fixes in Liberty for problems like the one you mention, but I'm already planning a session in Tokyo about a new share access driver interface. The reason it has to wait until Mitaka is that all of the drivers will need to change their logic to accommodate the new method.

My thinking on access rules is that the driver interface which adds and removes rules one at a time is too fragile and assumes too much about what backends are capable of supporting. I see the following problems (in addition to the one you mention):

* If addition or deletion of a rule fails for any reason, the set of rules on the backend starts to differ from what the user intended, and there is no way to go back and correct the problem.

* If backends aren't able to implement rules exactly as Manila expects (perhaps a backend does not support nested subnets with identical rules), then there are situations where a certain sequence of user actions is guaranteed to result in broken rules. Consider: (1) add rw access to 10.0.0.0/8, (2) add rw access to 10.10.0.0/16, (3) remove rw access to 10.0.0.0/8, (4) try to access the share from 10.10.10.10. If step (2) fails because the backend ignored that rule (it was redundant at the time it was added), then step (4) will also fail, even though it shouldn't.

* The current mechanism doesn't allow making multiple changes atomically -- changes have to be sequential. This will cause problems if we want to allow access rules to be defined externally (something which was discussed during Juno and is still desirable), because changes to access rules may come in batches.

My proposal is simple. Rather than making drivers implement allow_access() and deny_access(), drivers should implement a single set_access() which gets passed the list of all the access rules. Drivers would be required to compare the list of rules passed in from the manager to the list of rules on the storage controller and make changes as appropriate. For some drivers this would be more work, but for others it would be less. Overall I think it's a better design. We can probably implement some kind of backwards compatibility to avoid breaking drivers during the migration to the new interface.
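To make that concrete, a driver-side set_access() might look roughly like the following. This is only a sketch of the diff-and-apply idea: set_access() itself is the proposed entry point, while `_get_backend_rules`, `_add_rule`, and `_remove_rule` are hypothetical helpers a driver would provide.

    # Sketch of the proposed "declarative" driver interface: the manager hands
    # the driver the complete desired rule set and the driver reconciles the
    # backend against it.
    def set_access(self, context, share, access_rules, share_server=None):
        # Desired state as (type, to, level) tuples, e.g. ('ip', '10.0.0.0/8', 'rw').
        desired = {(r['access_type'], r['access_to'], r['access_level'])
                   for r in access_rules}

        # Hypothetical helper: returns the rules currently configured on the
        # storage controller, in the same tuple form.
        current = self._get_backend_rules(share, share_server)

        # Remove anything the backend has that Manila no longer wants...
        for rule in current - desired:
            self._remove_rule(share, rule, share_server)

        # ...and add anything Manila wants that the backend is missing.
        for rule in desired - current:
            self._add_rule(share, rule, share_server)

Because the driver always sees the full desired state, a failed change can be corrected on the next call rather than leaving the backend permanently out of sync, and batched or externally defined rule changes come for free.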

It's not something I intend to push for in Liberty however.

-Ben


Cheers,
Björn


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
