[
https://issues.apache.org/jira/browse/HDDS-9663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Attila Doroszlai resolved HDDS-9663.
------------------------------------
Fix Version/s: 1.4.0
Resolution: Fixed
> Incorrect user info data in OMRequest
> -------------------------------------
>
> Key: HDDS-9663
> URL: https://issues.apache.org/jira/browse/HDDS-9663
> Project: Apache Ozone
> Issue Type: Bug
> Components: OM
> Affects Versions: 1.4.0
> Reporter: Vyacheslav Tutrinov
> Assignee: Vyacheslav Tutrinov
> Priority: Major
> Labels: pull-request-available
> Fix For: 1.4.0
>
>
> There is a specific case on the OM side where the ACL check fails when the
> {{ozone.acl.authorizer.class}} configuration property is set to
> {{org.apache.ranger.authorization.ozone.authorizer.RangerOzoneAuthorizer}},
> even though *testuser* has READ/WRITE permissions for all volumes, buckets,
> and keys.
> 1. As *testuser*, create a bucket inside a custom volume and link it into
> the *s3v* volume:
> {code:bash}
> kinit -kt /etc/security/keytabs/testuser.keytab testuser/[email protected]
> ozone sh volume create data
> ozone sh bucket create data/bucket1
> ozone sh bucket link data/bucket1 s3v/bucket1
> {code}
> 2. Try to put a key into the bucket through s3g (e.g. a 1 MiB file):
> {code:bash}
> head -c 1MiB </dev/urandom > /tmp/test_file
> aws --debug s3api --endpoint http://s3g:9878 put-object --bucket bucket1
> --body /tmp/test_file --key test_file_001
> {code}
> The last command fails (when OM handles the OMKeyCommitRequest) with an
> exception on the OM side saying that user 'om' doesn't have READ permission
> to access the volume. I investigated the problem and found that the
> original user principal (testuser/[email protected]) was replaced by 'om',
> and the hostname and ipAddress of the UserInfo object of the OMRequest were
> set to those of the OM host that handles the OMKeyCommitRequest:
> 1. org.apache.hadoop.ozone.om.request.key.OMKeyCommitRequest#preExecute calls
> the parent's preExecute method
> (org.apache.hadoop.ozone.om.request.OMClientRequest#preExecute).
> 2. The org.apache.hadoop.ozone.om.request.OMClientRequest#preExecute method
> sets the userInfo for the current OMRequest object:
> {code:java}
> omRequest = getOmRequest().toBuilder()
>     .setUserInfo(getUserIfNotExists(ozoneManager))
>     .setLayoutVersion(layoutVersion).build();
> ...
> public OzoneManagerProtocolProtos.UserInfo getUserIfNotExists(
>     OzoneManager ozoneManager) throws IOException {
>   OzoneManagerProtocolProtos.UserInfo userInfo = getUserInfo();
>   if (!userInfo.hasRemoteAddress() || !userInfo.hasUserName()) {
>     OzoneManagerProtocolProtos.UserInfo.Builder newuserInfo =
>         OzoneManagerProtocolProtos.UserInfo.newBuilder();
>     UserGroupInformation user;
>     InetAddress remoteAddress;
>     try {
>       user = UserGroupInformation.getCurrentUser();
>       remoteAddress = ozoneManager.getOmRpcServerAddr()
>           .getAddress();
>     } catch (Exception e) {
>       LOG.debug("Couldn't get om Rpc server address", e);
>       return getUserInfo();
>     }
>     newuserInfo.setUserName(user.getUserName());
>     newuserInfo.setHostName(remoteAddress.getHostName());
>     newuserInfo.setRemoteAddress(remoteAddress.getHostAddress());
>     return newuserInfo.build();
>   }
>   return getUserInfo();
> }
>
> public OzoneManagerProtocolProtos.UserInfo getUserInfo() throws IOException {
>   UserGroupInformation user = ProtobufRpcEngine.Server.getRemoteUser();
>   InetAddress remoteAddress = ProtobufRpcEngine.Server.getRemoteIp();
>   OzoneManagerProtocolProtos.UserInfo.Builder userInfo =
>       OzoneManagerProtocolProtos.UserInfo.newBuilder();
>   // If S3 Authentication is set, determine user based on access ID.
>   if (omRequest.hasS3Authentication()) {
>     String principal = OzoneAclUtils.accessIdToUserPrincipal(
>         omRequest.getS3Authentication().getAccessId());
>     userInfo.setUserName(principal);
>   } else if (user != null) {
>     // Added not null checks, as in UT's these values might be null.
>     userInfo.setUserName(user.getUserName());
>   }
>   // for gRPC s3g omRequests that contain user name
>   if (user == null && omRequest.hasUserInfo()) {
>     userInfo.setUserName(omRequest.getUserInfo().getUserName());
>   }
>   if (remoteAddress != null) {
>     userInfo.setHostName(remoteAddress.getHostName());
>     userInfo.setRemoteAddress(remoteAddress.getHostAddress()).build();
>   }
>   return userInfo.build();
> }
> {code}
> The problem is in the first two lines of the {{getUserInfo}} method: with
> the gRPC transport both values are null, so the subsequent check on the
> returned userInfo ({{!userInfo.hasRemoteAddress() ||
> !userInfo.hasUserName()}}) passes and the username is replaced by 'om':
> {code:java}
> user = UserGroupInformation.getCurrentUser();
> {code}
> So, the original user (testuser) is replaced by 'om', and the user host and
> ipAddress are set to the host and IP of the OM instance that handles the
> request. Let's go on.
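The replacement described above can be illustrated with a minimal, self-contained sketch. The types below are simplified stand-ins for the real Ozone classes (`UserInfo` here models `OzoneManagerProtocolProtos.UserInfo`, with a null field standing in for an unset protobuf field): over gRPC, `ProtobufRpcEngine.Server` exposes neither remote user nor remote IP, so the check in `getUserIfNotExists` passes and the local OM identity wins.

```java
// Simplified stand-ins, not the real Ozone classes: demonstrates why the
// fallback in OMClientRequest#getUserIfNotExists replaces the caller.
public class UserInfoFallbackDemo {

  // Stand-in for OzoneManagerProtocolProtos.UserInfo; a null field
  // models an unset protobuf field (hasUserName()/hasRemoteAddress() false).
  static final class UserInfo {
    final String userName;
    final String remoteAddress;
    UserInfo(String userName, String remoteAddress) {
      this.userName = userName;
      this.remoteAddress = remoteAddress;
    }
    boolean hasUserName() { return userName != null; }
    boolean hasRemoteAddress() { return remoteAddress != null; }
  }

  // Mirrors the flawed check: if either field is missing, the local
  // OM user and address are substituted for the caller's.
  static UserInfo getUserIfNotExists(UserInfo fromTransport,
      String localUser, String localAddress) {
    if (!fromTransport.hasRemoteAddress() || !fromTransport.hasUserName()) {
      return new UserInfo(localUser, localAddress);
    }
    return fromTransport;
  }

  public static void main(String[] args) {
    // gRPC path: the transport exposes neither remote user nor remote IP.
    UserInfo viaGrpc = new UserInfo(null, null);
    System.out.println(getUserIfNotExists(viaGrpc, "om", "10.0.0.5").userName);
    // prints "om" -- the original caller's identity is lost

    // Hadoop RPC path: both fields are present, so they are preserved.
    UserInfo viaRpc = new UserInfo("testuser", "192.168.1.10");
    System.out.println(getUserIfNotExists(viaRpc, "om", "10.0.0.5").userName);
    // prints "testuser"
  }
}
```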
> The
> {{org.apache.hadoop.ozone.om.request.key.OMKeyCommitRequest#validateAndUpdateCache}}
> method checks key ACLs in the open key table:
> {code:java}
> checkKeyAclsInOpenKeyTable(ozoneManager, volumeName, bucketName,
>     keyName, IAccessAuthorizer.ACLType.WRITE,
>     commitKeyRequest.getClientID());
>   |
>   |  (OMKeyRequest)
> checkKeyAcls(ozoneManager, volume, bucket, keyNameForAclCheck,
>     aclType, OzoneObj.ResourceType.KEY);
>   |
>   |  (OMKeyRequest)
> checkAcls(ozoneManager, resourceType, OzoneObj.StoreType.OZONE, aclType,
>     volume, bucket, key);
>   |
>   |  (OMKeyRequest)
> checkAcls(ozoneManager, resType, storeType, aclType, vol, bucket, key,
>     ozoneManager.getVolumeOwner(vol, aclType, resType),
>     ozoneManager.getBucketOwner(vol, bucket, aclType, resType));
>   |
>   |  (OMKeyRequest)
> OzoneAclUtils.checkAllAcls((OmMetadataReader) rcMetadataReader.get(),
>     resType, storeType, aclType,
>     vol, bucket, key, volOwner, bucketOwner, createUGIForApi(),
>     getRemoteAddress(), getHostName());
>   |
>   |  (OzoneAclUtils)
> IAccessAuthorizer.ACLType parentAclRight =
>     IAccessAuthorizer.ACLType.READ;
> // OzoneNativeAuthorizer differs from Ranger Authorizer as Ranger
> // requires only READ access on parent level access.
> // OzoneNativeAuthorizer has different parent level access based on the
> // child level access type.
> if (omMetadataReader.isNativeAuthorizerEnabled() && resType == BUCKET) {
>   parentAclRight = getParentNativeAcl(aclType, resType);
> }
> omMetadataReader.checkAcls(OzoneObj.ResourceType.VOLUME, storeType,
>     parentAclRight, vol, bucket, key, user,
>     remoteAddress, hostName, true,
>     volOwner);
>   |
>   |  (OmMetadataReader)
> checkAcls(obj, context, throwIfPermissionDenied);
>   |
>   |  (OmMetadataReader)
> if (!captureLatencyNs(perfMetrics::setCheckAccessLatencyNs,
>     () -> accessAuthorizer.checkAccess(obj, context))) {
>   // Here the ACL request is sent to Ranger to check access; it returns
>   // false, and the exception below is thrown as a result.
>   if (throwIfPermissionDenied) {
>     String volumeName = obj.getVolumeName() != null ?
>         "Volume:" + obj.getVolumeName() + " " : "";
>     String bucketName = obj.getBucketName() != null ?
>         "Bucket:" + obj.getBucketName() + " " : "";
>     String keyName = obj.getKeyName() != null ?
>         "Key:" + obj.getKeyName() : "";
>     log.warn("User {} doesn't have {} permission to access {} {}{}{}",
>         context.getClientUgi().getShortUserName(), context.getAclRights(),
>         obj.getResourceType(), volumeName, bucketName, keyName);
>     throw new OMException(
>         "User " + context.getClientUgi().getShortUserName() +
>         " doesn't have " + context.getAclRights() +
>         " permission to access " + obj.getResourceType() + " " +
>         volumeName + bucketName + keyName,
>         ResultCodes.PERMISSION_DENIED);
>   }
>   return false;
> }
> {code}
> One possible solution is to pass the client IP address and hostname as
> header values on gRPC requests (this requires implementing client and
> server interceptors so that requests are executed with the data provided
> from the client side).
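The idea behind that proposal can be sketched as follows. This is a hypothetical, simplified model, not the actual fix: header names and helper methods are illustrative, and the real implementation would use gRPC's `ClientInterceptor`/`ServerInterceptor` with `Metadata` rather than a plain map. The point is that the caller's identity travels with the request, so the server never needs the local-identity fallback.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the proposed fix: the client side attaches the
// caller's user name, hostname, and IP as request headers, and the server
// side reads them back when building UserInfo. Header names and types
// here are illustrative, not Ozone or gRPC APIs.
public class GrpcHeaderPropagationSketch {

  static final String USER_HEADER = "x-ozone-user";
  static final String HOST_HEADER = "x-ozone-client-host";
  static final String IP_HEADER = "x-ozone-client-ip";

  // Client side: what a client interceptor would add to call metadata.
  static Map<String, String> clientHeaders(String user, String host,
      String ip) {
    Map<String, String> headers = new HashMap<>();
    headers.put(USER_HEADER, user);
    headers.put(HOST_HEADER, host);
    headers.put(IP_HEADER, ip);
    return headers;
  }

  // Server side: prefer the propagated header; fall back to the local
  // OM identity only when the client sent nothing.
  static String resolveUser(Map<String, String> headers, String localUser) {
    return headers.getOrDefault(USER_HEADER, localUser);
  }

  public static void main(String[] args) {
    Map<String, String> headers =
        clientHeaders("testuser/[email protected]", "s3g-host", "192.168.1.10");
    // The ACL check now sees the original caller, not 'om'.
    System.out.println(resolveUser(headers, "om"));
    // prints "testuser/[email protected]"
  }
}
```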
--
This message was sent by Atlassian Jira
(v8.20.10#820010)