richard-iovanisci opened a new issue, #42125: URL: https://github.com/apache/airflow/issues/42125
### Apache Airflow Provider(s)

hashicorp

### Versions of Apache Airflow Providers

3.7.1

### Apache Airflow version

2.9.3

### Operating System

Linux/UNIX

### Deployment

Official Apache Airflow Helm Chart

### Deployment details

EKS 1.28

### What happened

With the above version of the provider, the `role_id` parameter is not correctly passed to the `iam_login` function of the hvac client when an IAM role is used to dynamically fetch temporary credentials. This causes a "relative path not supported" error, because a required parameter (`role_id`) ends up missing from the login POST, as seen [here](https://github.com/apache/airflow/blob/main/airflow/providers/hashicorp/_internal_client/vault_client.py#L327-L357):

```python
def _auth_aws_iam(self, _client: hvac.Client) -> None:
    if self.key_id and self.secret_id:
        auth_args = {
            "access_key": self.key_id,
            "secret_key": self.secret_id,
            "role": self.role_id,
        }
    else:
        import boto3

        if self.assume_role_kwargs:
            sts_client = boto3.client("sts")
            credentials = sts_client.assume_role(**self.assume_role_kwargs)
            auth_args = {
                "access_key": credentials["Credentials"]["AccessKeyId"],
                "secret_key": credentials["Credentials"]["SecretAccessKey"],
                "session_token": credentials["Credentials"]["SessionToken"],
            }
        else:
            session = boto3.Session()
            credentials = session.get_credentials()
            auth_args = {
                "access_key": credentials.access_key,
                "secret_key": credentials.secret_key,
                "session_token": credentials.token,
            }

    if self.auth_mount_point:
        auth_args["mount_point"] = self.auth_mount_point

    _client.auth.aws.iam_login(**auth_args)
```

The `role_id` parameter makes it into the `auth_args` dict ONLY if a static access key and secret key are provided. Otherwise, temporary credentials are fetched via STS or `get_credentials()` and added to the `auth_args` dict, and `mount_point` is added afterwards, but `role_id` is never set.
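A minimal sketch of the fix I have in mind (this is a simplified, self-contained stand-in for the real method, with the boto3 credential fetch injected as a callable; `build_auth_args` and its parameters are hypothetical names, not the Airflow source): set `role` outside the static-vs-dynamic branch so every path includes it.

```python
# Hypothetical, simplified sketch of the fix. The real logic lives in
# airflow/providers/hashicorp/_internal_client/vault_client.py; the point
# is only that "role" is set for BOTH credential paths, not just the
# static-key branch.

def build_auth_args(key_id=None, secret_id=None, role_id=None,
                    auth_mount_point=None, fetch_dynamic_credentials=None):
    """Build the kwargs passed to hvac's auth.aws.iam_login()."""
    if key_id and secret_id:
        # Static credentials supplied directly on the connection.
        auth_args = {"access_key": key_id, "secret_key": secret_id}
    else:
        # Dynamic credentials (boto3 STS / session in the real code;
        # injected as a callable here so the sketch stays runnable).
        creds = fetch_dynamic_credentials()
        auth_args = {
            "access_key": creds["AccessKeyId"],
            "secret_key": creds["SecretAccessKey"],
            "session_token": creds["SessionToken"],
        }

    # FIX: Vault's AWS auth method requires "role" regardless of how the
    # credentials were obtained, so add it after the if/else, the same
    # way mount_point is already handled.
    if role_id:
        auth_args["role"] = role_id
    if auth_mount_point:
        auth_args["mount_point"] = auth_mount_point
    return auth_args
```

With this shape, the dynamic-credential path produces the same `role`/`mount_point` keys as the static path, so the login POST is no longer missing the required field.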
This will always cause a failure when trying to auth to Vault with `aws_iam` using dynamic credentials, since BOTH the mount point and the role id are required: see [here](https://github.com/hvac/hvac/blob/main/hvac/api/auth_methods/aws.py#L739-L790) if interested. This was introduced by the following PR, which added support for this sort of dynamic credential usage (though it was probably never actually tested without a static key + secret key): https://github.com/apache/airflow/pull/38536/files

---

Also, the UI shows a misleading message that the role id parameter of the Vault connection is deprecated. That is only true for the `approle` auth method; it is REQUIRED for the `aws_iam` auth method. Since a deprecation warning is already thrown when this parameter is used with the `approle` auth method anyway, I suggest removing that text from the UI entirely.

All of these are very simple changes and I am willing to submit a PR; the fix has already been tested in a hotfix environment.

### What you think should happen instead

There should be no relative path error thrown when dynamic credentials are used. The `role_id` parameter should be added to the `auth_args` dict and login should succeed.

### How to reproduce

Try to instantiate a VaultHook or Vault Secrets Backend using `aws_iam` auth without providing static access credentials. If all of the config is correct, you will see a relative path error in the logs instead of a successful auth to Vault. This requires both an Airflow setup and a Vault namespace configured with access provisioned through IAM.

### Anything else

This problem occurs every time. Again, we have the fix in our hotfix environment and are willing to submit it.

### Are you willing to submit PR?

- [X] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
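For reference, the reproduction described above can be sketched as a secrets-backend configuration (the URL, role name, and mount point here are placeholders, not values from this issue):

```
AIRFLOW__SECRETS__BACKEND=airflow.providers.hashicorp.secrets.vault.VaultBackend
AIRFLOW__SECRETS__BACKEND_KWARGS={"url": "https://vault.example.com", "auth_type": "aws_iam", "role_id": "my-vault-role", "auth_mount_point": "aws"}
```

With no static `key_id`/`secret_id` set, any secret lookup through this backend triggers the dynamic-credential branch and fails with the relative path error.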
