tjaeuth commented on issue #58893:
URL: https://github.com/apache/airflow/issues/58893#issuecomment-3596588746

   > [@tjaeuth](https://github.com/tjaeuth), what sort of testing are you 
interested in doing? Unit-testing the custom Operator? Testing the execution of 
just a single Task using that Operator? Testing an entire DAG end-to-end? There 
are tons of great examples of unit-testing Operators in the Airflow codebase 
itself. Check out this one for an S3 Operator!
   > 
   > [providers/amazon/src/airflow/providers/amazon/aws/operators/s3.py, line 259 at 8545d3c](https://github.com/apache/airflow/blob/8545d3c578af5e254e230e38f8a5035fac045342/providers/amazon/src/airflow/providers/amazon/aws/operators/s3.py#L259):
   > 
   > ```python
   > class S3CopyObjectOperator(AwsBaseOperator[S3Hook]):
   > ```
   > That being said, I definitely think that comprehensive unit-testing docs 
with more examples would be a great addition.
   
   @jroachgolf84 thank you for your response and your questions.
   
   I wanted to keep it simple, like it is done for the S3 operator 
   (https://github.com/apache/airflow/blob/main/providers/amazon/tests/unit/amazon/aws/operators/test_s3.py#L494),
   but I also wanted to be able to call `Connection.get()`, which, as I 
   understand it, requires access to the Airflow metadata database.
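
   For reference, one way to avoid the metadata database in a plain unit test 
   seems to be defining the connection via an `AIRFLOW_CONN_<CONN_ID>` 
   environment variable, which Airflow's secrets lookup checks before it 
   queries the database. A minimal sketch with Airflow 2.x-style imports (the 
   conn id `my_http_conn` and the URI are made up for illustration):

   ```python
   import os
   from unittest import mock

   from airflow.hooks.base import BaseHook


   # Airflow resolves connections from AIRFLOW_CONN_* environment variables
   # before falling back to the metadata database, so no DB is needed here.
   @mock.patch.dict(
       os.environ,
       {"AIRFLOW_CONN_MY_HTTP_CONN": "https://user:secret@example.com:443"},
   )
   def test_connection_resolved_from_env():
       conn = BaseHook.get_connection("my_http_conn")  # no metadata DB involved
       assert conn.host == "example.com"
       assert conn.port == 443
   ```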
   
   I know I can always use mocking etc. to simplify the test setup. On the 
   other hand, the documentation clearly shows a case where an operator is 
   tested with `dag.test()`.
   
   So, testing the execution of just a single task using that operator would 
   be great. Ideally, the code from the documentation (referenced in my 
   initial comment) could be put into a test file and executed with pytest.
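
   Something along these lines is what I have in mind. This is only a rough 
   sketch against the Airflow 2.x API: `MyCustomOperator` and its `conn_id` 
   argument are placeholders for the custom operator, and `dag.test()` still 
   needs an initialized metadata database (e.g. a local SQLite one created 
   with `airflow db migrate`):

   ```python
   import pendulum

   from airflow import DAG  # import path may differ on newer Airflow versions

   # Hypothetical import; replace with the real custom operator.
   from my_package.operators import MyCustomOperator


   def test_my_custom_operator():
       # Build a one-task DAG around the operator under test.
       with DAG(
           dag_id="test_my_custom_operator",
           schedule=None,
           start_date=pendulum.datetime(2024, 1, 1, tz="UTC"),
       ) as dag:
           MyCustomOperator(task_id="run_it", conn_id="my_http_conn")

       # dag.test() executes the whole DAG in-process in a single run,
       # against the configured metadata database.
       dag_run = dag.test()
       assert dag_run.state == "success"
   ```

   If I read the code correctly, `dag.test()` also accepts a `conn_file_path` 
   argument, so the connection could even come from a local file instead of 
   the database.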
   
   

