telac commented on a change in pull request #8868:
URL: https://github.com/apache/airflow/pull/8868#discussion_r627462732



##########
File path: airflow/providers/google/cloud/operators/bigquery.py
##########
@@ -546,6 +548,11 @@ def __init__(self,
                 "the gcp_conn_id parameter.", DeprecationWarning, stacklevel=3)
             gcp_conn_id = bigquery_conn_id
 
+        warnings.warn(

Review comment:
      @judoole @turbaszek This is something I've been wondering about for a long 
while. `BigQueryExecuteQueryOperator` makes the simple case of writing a job 
that queries data from one table and inserts it into another much simpler than 
`BigQueryInsertJobOperator` does. `BigQueryInsertJobOperator` doesn't expose 
the destination table, the clustering fields, or `time_partitioning` as 
task-level parameters; instead it forces you to construct the job configuration 
yourself. Overall it just seems far less friendly to use than 
`BigQueryExecuteQueryOperator`.
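
To illustrate the difference (a sketch from memory, not verified against the current operator docstrings; all project/dataset/table names below are placeholders): `BigQueryInsertJobOperator` expects the caller to assemble the BigQuery Jobs API `configuration` dict by hand, whereas with `BigQueryExecuteQueryOperator` the same settings are flat task-level arguments:

```python
# Sketch of the job `configuration` dict that BigQueryInsertJobOperator
# expects the caller to build manually. Keys follow the BigQuery Jobs API
# (query.destinationTable, query.timePartitioning, query.clustering).
configuration = {
    "query": {
        "query": "SELECT * FROM `my_project.my_dataset.source_table`",
        "useLegacySql": False,
        "destinationTable": {
            "projectId": "my_project",
            "datasetId": "my_dataset",
            "tableId": "destination_table",
        },
        "timePartitioning": {"type": "DAY", "field": "created_at"},
        "clustering": {"fields": ["customer_id"]},
    },
}

# With BigQueryExecuteQueryOperator the equivalent settings are plain
# task-level parameters (parameter names as I recall them):
#
#   BigQueryExecuteQueryOperator(
#       task_id="copy_table",
#       sql="SELECT * FROM `my_project.my_dataset.source_table`",
#       destination_dataset_table="my_project.my_dataset.destination_table",
#       time_partitioning={"type": "DAY", "field": "created_at"},
#       cluster_fields=["customer_id"],
#       use_legacy_sql=False,
#   )
```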




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

