Whatsonyourmind commented on issue #35803:
URL: https://github.com/apache/airflow/issues/35803#issuecomment-4181009080

   This would be a huge improvement. We have ETL tasks where the same operator 
processes anywhere from 500 rows to 50M rows depending on the partition, and 
statically setting `pool_slots` to 3 either over-reserves for the small 
partitions (wasting slots) or under-provisions the large ones (causing OOMs).
   
   As a workaround until dynamic `pool_slots` support lands natively, we use a 
constraint optimizer in an upstream task to compute the slot allocation across 
the full DAG run based on estimated workload sizes:
   
   ```bash
   # Estimate resource needs for each task in the current DAG run
   curl -X POST https://oraclaw-api.onrender.com/api/v1/solve/constraints \
     -H "Content-Type: application/json" \
     -d '{
       "objective": "minimize_makespan",
       "variables": [
         {"name": "load_us_events", "duration": 180, "memory_gb": 12, "rows": 
48000000},
         {"name": "load_eu_events", "duration": 45, "memory_gb": 3, "rows": 
5200000},
         {"name": "load_ap_events", "duration": 20, "memory_gb": 1, "rows": 
890000},
         {"name": "transform_all", "duration": 300, "memory_gb": 16, 
"dependencies": ["load_us_events", "load_eu_events", "load_ap_events"]}
       ],
       "constraints": [
         {"type": "resource", "name": "pool_slots", "capacity": 8},
         {"type": "resource", "name": "memory_gb", "capacity": 32}
       ]
     }'
   ```
   
   Returns optimal slot allocation per task:
   
   ```json
   {
     "schedule": [
       {"name": "load_us_events", "slots": 4, "start": 0, "reason": "48M rows, 
memory-heavy"},
       {"name": "load_eu_events", "slots": 1, "start": 0, "reason": "5.2M rows, 
light"},
       {"name": "load_ap_events", "slots": 1, "start": 0, "reason": "890K rows, 
minimal"},
       {"name": "transform_all", "slots": 6, "start": 180, "reason": "CPU-bound 
aggregation"}
     ],
     "makespan": 480,
     "peakMemory": 28.5,
     "utilization": 0.84
   }
   ```
   
   We then call `task.override(pool_slots=computed_value)()` with the value from 
the solver. It's essentially the pattern from the issue description, but the 
cost estimation comes from the optimizer rather than from hand-coded thresholds. 
The solver also accounts for the full DAG shape: it knows not to give 6 slots 
to a task that blocks nothing.
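   The mapping from the solver's reply to per-task `pool_slots` is a few lines 
of glue. A minimal sketch in plain Python; the `slots_from_schedule` helper and 
the response literal are illustrative (mirroring the JSON reply above), not part 
of any Airflow API:

    ```python
    # Map the solver's schedule entries to per-task pool_slots values,
    # clamped to the pool's capacity so a bad solve can't oversubscribe it.
    def slots_from_schedule(response, capacity=8, default=1):
        """Return {task_name: pool_slots} from a solver response dict."""
        return {
            entry["name"]: max(1, min(capacity, entry.get("slots", default)))
            for entry in response.get("schedule", [])
        }

    # Shaped like the solver reply shown above (illustrative values).
    response = {
        "schedule": [
            {"name": "load_us_events", "slots": 4, "start": 0},
            {"name": "load_eu_events", "slots": 1, "start": 0},
            {"name": "load_ap_events", "slots": 1, "start": 0},
            {"name": "transform_all", "slots": 6, "start": 180},
        ],
        "makespan": 480,
    }

    slots = slots_from_schedule(response)
    # Inside the DAG, each value is then applied per task, e.g.:
    #   load_us_events.override(pool_slots=slots["load_us_events"])()
    ```

   The clamp matters: if the solver ever returned more slots than the pool has, 
the task could never acquire them.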
   
   ~15ms per solve call. The free tier allows 25 calls/day at 
[oraclaw-api.onrender.com](https://oraclaw-api.onrender.com); $9/mo for 10K.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
