abhishekbhakat opened a new issue, #49607: URL: https://github.com/apache/airflow/issues/49607
### Apache Airflow version

3.0.0

### If "Other Airflow 2 version" selected, which one?

_No response_

### What happened?

When using the OpenAPI spec provided by Apache Airflow, tools that validate or consume the spec (such as `openapi-core`, `openapi-spec-validator`, and code generators) fail with a `DuplicateOperationIDError`. The error message states that the operationId `get_task_instance_dependencies` is not unique, even though the routes for these operations are different. This prevents automated tooling, client generation, and some integrations from working with the Airflow OpenAPI spec.

### What you think should happen instead?

According to the OpenAPI specification, every operationId must be unique across the entire API, regardless of the route or HTTP method. The Airflow OpenAPI spec should ensure that all operationId values are unique.

### How to reproduce

1. Download the OpenAPI spec (`openapi.json`) from a running Airflow instance (e.g. `http://localhost:8080/openapi.json`) or from the Airflow source.
2. Run any OpenAPI validator or code generator against it, for example:

```python
import json

from openapi_spec_validator import validate_spec

with open("path/to/openapi.json") as f:
    validate_spec(json.load(f))
```

### Operating System

ProductName: macOS
ProductVersion: 15.4.1
BuildVersion: 24E263

### Versions of Apache Airflow Providers

Not Applicable

### Deployment

Virtualenv installation

### Deployment details

_No response_

### Anything else?

_No response_

### Are you willing to submit PR?

- [ ] Yes I am willing to submit a PR!

### Code of Conduct

- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
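To pinpoint which `operationId` values collide without running a full validator, the spec's `paths` object can be scanned directly. A minimal sketch with the standard library only; the function name and the sample spec below are illustrative, not part of Airflow:

```python
import json
from collections import Counter

# HTTP methods that may carry an operation object in a Path Item.
HTTP_METHODS = {"get", "put", "post", "delete", "options", "head", "patch", "trace"}


def find_duplicate_operation_ids(spec: dict) -> list[str]:
    """Return every operationId that appears more than once in the spec."""
    counts = Counter(
        operation["operationId"]
        for path_item in spec.get("paths", {}).values()
        for method, operation in path_item.items()
        if method in HTTP_METHODS
        and isinstance(operation, dict)
        and "operationId" in operation
    )
    return [op_id for op_id, n in counts.items() if n > 1]


if __name__ == "__main__":
    # Replace with the downloaded Airflow spec, e.g. openapi.json.
    with open("path/to/openapi.json") as f:
        print(find_duplicate_operation_ids(json.load(f)))
```

Run against the Airflow spec, this should list `get_task_instance_dependencies` among the duplicates, which also helps confirm the offending routes before submitting a fix.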
