mik-laj commented on a change in pull request #19518:
URL: https://github.com/apache/airflow/pull/19518#discussion_r749660793
##########
File path: docs/apache-airflow-providers-google/connections/gcp.rst
##########
@@ -160,10 +160,13 @@ access token, which will allow to act on its behalf using its permissions. ``imp
does not even need to have a generated key.
.. warning::
-
:class:`~airflow.providers.google.cloud.operators.kubernetes_engine.GKEStartPodOperator`,
:class:`~airflow.providers.google.cloud.operators.dataflow.DataflowCreateJavaJobOperator`
and
:class:`~airflow.providers.google.cloud.operators.dataflow.DataflowCreatePythonJobOperator`
- do not support direct impersonation as of now.
+ do not support direct impersonation as of now. Both are deprecated and new operators
+
:class:`~airflow.providers.apache.beam.operators.beam.BeamRunJavaPipelineOperator`
and
Review comment:
       We should use the same credentials in the Beam operators and the Kubernetes
Engine operator for both submitting and monitoring tasks. I do not see a reason
why the pod/Beam job should be created with one credential and then monitored
with another. This likely won't work, as each credential has a different
permission scope, and a task created by one account with one credential will not
necessarily be able to access the other account's data.
       If we do not emit any warnings or notifications, this is even more
problematic, because the user is not aware that we have not used the credentials
they selected.
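
   The failure mode described above can be sketched with a toy permission
model. Note this is a hypothetical illustration, not the Airflow or Google
API: the `JobService` class and the account names are made up; the point is
only that a job submitted under one credential cannot be monitored under
another when the two accounts have disjoint permission scopes.

   ```python
   # Toy model: each submitted job is owned by the credential that created it,
   # so reading its status with a different credential raises PermissionError.
   class JobService:
       """Hypothetical job service; jobs are visible only to their owner."""

       def __init__(self):
           self._jobs = {}  # job_id -> owning credential

       def submit(self, credential: str, job_id: str) -> None:
           self._jobs[job_id] = credential

       def get_status(self, credential: str, job_id: str) -> str:
           # A different account's credential has a different permission
           # scope and cannot read this job's state.
           if self._jobs.get(job_id) != credential:
               raise PermissionError(
                   f"{credential!r} cannot access job {job_id!r}"
               )
           return "RUNNING"


   service = JobService()
   service.submit("impersonated-sa@project", "beam-job-1")

   # Monitoring with the same credential that submitted the job works:
   assert service.get_status("impersonated-sa@project", "beam-job-1") == "RUNNING"

   # Monitoring with the connection's default credential fails:
   try:
       service.get_status("default-sa@project", "beam-job-1")
   except PermissionError:
       print("monitoring with a different credential failed")
   ```

   This is why submitting with one credential and polling with another is
likely to break, and why it should at least be surfaced to the user.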
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]