sunank200 opened a new issue, #31551: URL: https://github.com/apache/airflow/issues/31551

### Apache Airflow version

2.6.1

### What happened

PR [#28187](https://github.com/apache/airflow/pull/28187) introduced the `get_iam_token` method in `redshift_sql.py`. This is a breaking change, as it introduces a check for `iam` in the connection extras, which defaults to False. Because the fallback `conn.host.split(".")[0]` is passed as the default argument to `extra_dejson.get`, it is evaluated eagerly, so any connection without a host fails with an `AttributeError`. Error log:

```
self = <airflow.providers.amazon.aws.hooks.redshift_sql.RedshiftSQLHook object at 0x7f29f7c208e0>
conn = redshift_default

    def get_iam_token(self, conn: Connection) -> tuple[str, str, int]:
        """
        Uses AWSHook to retrieve a temporary password to connect to Redshift.
        Port is required. If none is provided, default is used for each service
        """
        port = conn.port or 5439
        # Pull the cluster-identifier from the beginning of the Redshift URL
        # ex. my-cluster.ccdre4hpd39h.us-east-1.redshift.amazonaws.com returns my-cluster
>       cluster_identifier = conn.extra_dejson.get("cluster_identifier", conn.host.split(".")[0])
E       AttributeError: 'NoneType' object has no attribute 'split'

.nox/test-3-8-airflow-2-6-0/lib/python3.8/site-packages/airflow/providers/amazon/aws/hooks/redshift_sql.py:107: AttributeError
```

### What you think should happen instead

It should remain backward compatible with connections created before this change.

### How to reproduce

Run an example Redshift DAG with an AWS IAM profile supplied at hook initialization to retrieve a temporary password to connect to Amazon Redshift.

### Operating System

macOS

### Versions of Apache Airflow Providers

_No response_

### Deployment

Astronomer

### Deployment details

_No response_

### Anything else

_No response_

### Are you willing to submit PR?

- [X] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
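One possible shape of a backward-compatible fix is to resolve the cluster identifier with an explicit guard instead of an eagerly evaluated default argument. The sketch below is hypothetical (the helper name `resolve_cluster_identifier` is not part of Airflow); it only illustrates the guard, assuming the hook would call it with `conn.extra_dejson` and `conn.host`:

```python
from __future__ import annotations
from typing import Optional


def resolve_cluster_identifier(extra: dict, host: Optional[str]) -> str:
    """Return the Redshift cluster identifier from the connection extras,
    falling back to the first label of the host name.

    Unlike ``extra.get("cluster_identifier", host.split(".")[0])``, the
    fallback here is only evaluated when it is actually needed, so a
    connection with extras but no host no longer raises AttributeError.
    """
    cluster_identifier = extra.get("cluster_identifier")
    if cluster_identifier:
        return cluster_identifier
    if host:
        # e.g. my-cluster.ccdre4hpd39h.us-east-1.redshift.amazonaws.com -> my-cluster
        return host.split(".")[0]
    # Fail with a clear, actionable message instead of an AttributeError.
    raise ValueError(
        "Could not determine the Redshift cluster identifier: set "
        "'cluster_identifier' in the connection extras or provide a host "
        "like '<cluster>.<id>.<region>.redshift.amazonaws.com'."
    )
```

With this guard, `resolve_cluster_identifier({"cluster_identifier": "c1"}, None)` returns `"c1"`, and a missing host without extras produces a descriptive `ValueError` rather than the `'NoneType' object has no attribute 'split'` traceback above.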
