GitHub user ertancelik created a discussion: [Provider] LLM-powered smart retry operator using local Ollama
## Hey Airflow Community! 👋

I built a new provider that uses a local LLM (via Ollama) to make intelligent retry decisions when tasks fail.

### The Problem

Airflow's retry mechanism is static — same wait time, same retry count regardless of *why* the task failed.

### The Solution

`LLMSmartRetryOperator` classifies the error and applies the right strategy:

| Error Type | Strategy |
|---|---|
| `rate_limit` | Wait 60s, retry 5x |
| `network` | Wait 15s, retry 4x |
| `auth` | Fail immediately ✗ |
| `data_schema` | Fail immediately ✗ |

### Privacy First 🔒

100% local inference via Ollama — no data leaves your infrastructure.

### Links

- GitHub: https://github.com/ertancelik/airflow-provider-smart-retry
- PyPI: https://pypi.org/project/airflow-provider-smart-retry/

Would love feedback from the community! 🙏

GitHub link: https://github.com/apache/airflow/discussions/64365
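For readers curious what the error-type-to-strategy mapping from the table above might look like in code, here is a minimal sketch. The names (`RetryPolicy`, `POLICIES`, `classify_error`, `retry_policy_for`) are illustrative assumptions, not the provider's actual API, and the keyword classifier stands in for the LLM call:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class RetryPolicy:
    """Hypothetical per-error-type retry settings; max_retries=0 means fail immediately."""
    wait_seconds: int
    max_retries: int


# Mirrors the strategy table from the post above.
POLICIES = {
    "rate_limit": RetryPolicy(wait_seconds=60, max_retries=5),
    "network": RetryPolicy(wait_seconds=15, max_retries=4),
    "auth": RetryPolicy(wait_seconds=0, max_retries=0),
    "data_schema": RetryPolicy(wait_seconds=0, max_retries=0),
}


def classify_error(message: str) -> str:
    """Stand-in for the LLM: simple keyword rules instead of an Ollama call."""
    msg = message.lower()
    if "429" in msg or "rate limit" in msg:
        return "rate_limit"
    if "timeout" in msg or "connection" in msg:
        return "network"
    if "401" in msg or "unauthorized" in msg:
        return "auth"
    return "data_schema"


def retry_policy_for(message: str) -> RetryPolicy:
    """Look up the retry strategy for a failure message."""
    return POLICIES[classify_error(message)]
```

The point of the design is that the wait/retry decision is data, keyed by a classification label, so swapping the keyword rules for an LLM only changes `classify_error`.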

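Since the post emphasizes fully local inference, here is a hedged sketch of how the classification step could talk to a local Ollama instance over its REST API (`POST /api/generate` on the default port 11434). The prompt wording, the `llama3` model name, and all function names are assumptions for illustration, not the provider's actual implementation:

```python
import json
import urllib.request

CATEGORIES = ("rate_limit", "network", "auth", "data_schema")


def build_prompt(error_message: str) -> str:
    """Ask the model to answer with exactly one category label."""
    return (
        "Classify this Airflow task failure into exactly one of: "
        + ", ".join(CATEGORIES)
        + ". Reply with the label only.\n\nError: "
        + error_message
    )


def parse_label(raw: str) -> str:
    """Extract a known category from the model's reply."""
    reply = raw.strip().lower()
    for category in CATEGORIES:
        if category in reply:
            return category
    return "data_schema"  # unknown errors: fail fast rather than retry blindly


def classify_with_ollama(error_message: str,
                         host: str = "http://localhost:11434") -> str:
    """Send the prompt to a local Ollama server; no data leaves the machine."""
    body = json.dumps({
        "model": "llama3",  # assumed model name; any locally pulled model works
        "prompt": build_prompt(error_message),
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        host + "/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_label(json.loads(resp.read())["response"])
```

Constraining the model to a fixed label set and parsing defensively matters here: a free-form LLM reply that cannot be mapped to a known category should degrade to the safest policy (fail fast) rather than trigger unbounded retries.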