Hello All,

I am reaching out to ask for your help in completing the integration test
suite for the Apache Airflow Task SDK Execution API. This is a great
opportunity to contribute
to Airflow's testing infrastructure while learning about the Task SDK and
Execution API. More details are available in the issue:
https://github.com/apache/airflow/issues/58178

*Why are we doing this?*

As outlined in our Task SDK Integration Tests documentation (
https://github.com/apache/airflow/blob/main/contributing-docs/testing/task_sdk_integration_tests.rst
), these integration tests serve several critical purposes:

*1. API Compatibility Assurance*
  The Task SDK communicates with Airflow through well-defined APIs for task
execution. These tests ensure that changes to either the SDK or Airflow
core don't break the interface contract between the two.

*2. Real-World Scenario Testing*
  While unit tests verify that individual components work correctly,
integration tests validate that the entire system works together as
expected in real deployment scenarios.

*3. Quicker Resolution of Interface Incompatibilities*
  These tests catch integration issues early in the development cycle,
preventing breaking changes from reaching a release.

*What has been done already:*

We've made significant progress setting up the test infrastructure and
implementing core tests; more details are in the issue:
https://github.com/apache/airflow/issues/58178

*What needs to be done:*

We have *stub test functions* ready for implementation across multiple
test files. Each stub includes (see the sketch after this list):
- A clear docstring describing what to test
- The expected response type and endpoint information
- A `@pytest.mark.skip` decorator marking it as TODO
- A placeholder for the implementation
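
To make this concrete, here is a minimal sketch of the shape these stubs
take. The test name, endpoint path, and response type below are
illustrative placeholders, not copied from the actual test files:

import pytest


@pytest.mark.skip(reason="TODO: implement this test")
def test_ti_update_state_to_running():
    """
    Verify that the Task SDK can move a task instance to the RUNNING state.

    Endpoint: PATCH /execution/task-instances/{task_instance_id}/run
    Expected response: TIRunContext
    """
    # TODO: set up a task instance, call the endpoint through the SDK
    # client, and assert on the returned response model.
    ...

Implementing a stub essentially means removing the skip decorator,
exercising the endpoint described in its docstring, and asserting on
the response.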

This is a great contribution opportunity because:
- It is clear and structured: each test stub comes with detailed
documentation explaining exactly what needs to be tested.
- You can follow established test patterns without needing to design new
approaches, making it easy to get started.
- The work is self-contained: each test file can be developed
independently, so contributors can make meaningful progress without
dependencies on one another.
- It is a valuable learning experience, providing insight into the Task
SDK Execution API and how it works in practice.
- Most importantly, these tests have real impact, helping prevent
regressions and ensuring long-term API compatibility.


We're here to help! Don't hesitate to reach out if you have questions or
need clarification.

P.S.: We will be creating a Slack channel for this shortly, as
communicated in the GitHub issue.

Thanks & Regards,
Amogh Desai
