GitHub user dosubot[bot] added a comment to the discussion: Need help setting up Superset chart in AWS EKS behind ALB
That error means Superset's SQLAlchemy connection pool is exhausted. By default, each pod allows 5 concurrent DB connections with up to 10 overflow, and new requests wait up to 30 seconds before failing. This is common in Kubernetes/EKS when you have multiple pods, slow DB responses, or long-running queries. Each pod has its own pool, so total DB connections = (number of pods) × (pool_size + max_overflow) [details](https://github.com/apache/superset/discussions/35997).

To fix this, increase the pool size and overflow in your `superset_config.py` (or Helm `configOverrides`):

```python
SQLALCHEMY_ENGINE_OPTIONS = {
    "pool_size": 20,       # or higher, depending on DB capacity
    "max_overflow": 30,    # or higher
    "pool_timeout": 60,    # seconds to wait for a connection
}
```

Monitor your DB to ensure it can handle the total connections. Also check for slow or stuck queries in your DB logs, and consider enabling async query execution with Celery to offload long-running queries [example](https://github.com/apache/superset/issues/33218).

If you have multiple pods, make sure your DB's `max_connections` is high enough for all pods combined. After updating, restart your Superset pods to apply the changes.

For long queries, also increase these timeouts in your configOverrides:

```python
SUPERSET_WEBSERVER_TIMEOUT = 1800
SQLLAB_TIMEOUT = 1800
```

And set matching timeouts in your ALB/Ingress annotations.

If you still see pool exhaustion, check for session/transaction leaks or DB resource bottlenecks.
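On the ALB side, one common way to set a matching timeout (assuming you are using the AWS Load Balancer Controller for your Ingress) is the load-balancer-attributes annotation; the value below simply mirrors the 1800-second Superset timeouts above, and the surrounding keys are a sketch to adapt to your own Ingress manifest:

```yaml
# Sketch of an Ingress annotation raising the ALB idle timeout
# (AWS Load Balancer Controller); adapt to your deployment.
metadata:
  annotations:
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=1800
```

If the ALB idle timeout stays below your Superset timeouts, the load balancer will drop long-running requests before Superset finishes them, so keep the two in sync.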
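To sanity-check your capacity planning, the connection formula above can be sketched as a quick calculation. The pod count and pool settings here are illustrative, not a recommendation:

```python
# Estimate the worst-case number of DB connections a Superset deployment
# can open, per the formula: total = pods * (pool_size + max_overflow).
# All values below are illustrative; plug in your own deployment's numbers.

def total_db_connections(pods: int, pool_size: int, max_overflow: int) -> int:
    """Worst-case connections the Superset metadata DB must accept."""
    return pods * (pool_size + max_overflow)

# With SQLAlchemy's defaults (pool_size=5, max_overflow=10), 4 pods need:
print(total_db_connections(4, pool_size=5, max_overflow=10))    # 60

# With the larger settings suggested above, 4 pods need:
print(total_db_connections(4, pool_size=20, max_overflow=30))   # 200
```

Whatever your numbers are, your database's `max_connections` (minus headroom for Celery workers, migrations, and admin sessions) should comfortably exceed the result.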
GitHub link: https://github.com/apache/superset/discussions/36296#discussioncomment-15128167

----

This is an automatically sent email for [email protected].
To unsubscribe, please send an email to: [email protected]
