Hi,

As I wrote in a previous thread, I am trying to build a system where
users submit code and this code is run in a Docker container. I want to
keep latency low, so the time between "user submits request" and "task
is submitted" should be short.
My first approach (a) was to buffer arriving tasks in an internal queue
and, whenever `resourceOffers()` was called, work through that queue.
All unused offers were declined. That worked, but the time between
resource offers was rather long, so I looked for a different approach.
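For illustration, this is roughly what (a) looks like with the Java
bindings. Class, method and helper names are made up for this mail, and
the resource matching is only a rough sketch, not my actual code:

    import java.util.Collections;
    import java.util.List;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;

    import org.apache.mesos.Protos;
    import org.apache.mesos.Scheduler;
    import org.apache.mesos.SchedulerDriver;

    // Variant (a): tasks wait in a queue until offers arrive; offers
    // that are not needed right now are declined.
    public class QueueingScheduler implements Scheduler {

        // Tasks submitted by users, waiting for a matching offer.
        private final Queue<Protos.TaskInfo> pendingTasks =
                new ConcurrentLinkedQueue<Protos.TaskInfo>();

        // Hypothetical entry point, called by the web frontend when a
        // user submits code.
        public void submit(Protos.TaskInfo task) {
            pendingTasks.add(task);
        }

        @Override
        public void resourceOffers(SchedulerDriver driver,
                                   List<Protos.Offer> offers) {
            for (Protos.Offer offer : offers) {
                Protos.TaskInfo task = pendingTasks.peek();
                if (task != null && fits(offer, task)) {
                    pendingTasks.poll();
                    driver.launchTasks(
                            Collections.singletonList(offer.getId()),
                            Collections.singletonList(task));
                } else {
                    // Nothing to run right now: give the offer back.
                    driver.declineOffer(offer.getId());
                }
            }
        }

        // Very rough check: does the offer have enough CPUs and memory?
        private boolean fits(Protos.Offer offer, Protos.TaskInfo task) {
            return scalar(offer.getResourcesList(), "cpus")
                           >= scalar(task.getResourcesList(), "cpus")
                   && scalar(offer.getResourcesList(), "mem")
                           >= scalar(task.getResourcesList(), "mem");
        }

        private double scalar(List<Protos.Resource> resources, String name) {
            double total = 0;
            for (Protos.Resource r : resources) {
                if (r.getName().equals(name) && r.hasScalar()) {
                    total += r.getScalar().getValue();
                }
            }
            return total;
        }

        // The remaining Scheduler callbacks are no-ops in this sketch.
        @Override public void registered(SchedulerDriver d, Protos.FrameworkID id, Protos.MasterInfo m) {}
        @Override public void reregistered(SchedulerDriver d, Protos.MasterInfo m) {}
        @Override public void offerRescinded(SchedulerDriver d, Protos.OfferID id) {}
        @Override public void statusUpdate(SchedulerDriver d, Protos.TaskStatus s) {}
        @Override public void frameworkMessage(SchedulerDriver d, Protos.ExecutorID e, Protos.SlaveID s, byte[] data) {}
        @Override public void disconnected(SchedulerDriver d) {}
        @Override public void slaveLost(SchedulerDriver d, Protos.SlaveID id) {}
        @Override public void executorLost(SchedulerDriver d, Protos.ExecutorID e, Protos.SlaveID s, int status) {}
        @Override public void error(SchedulerDriver d, String message) {}
    }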
My second approach (b) was to instead buffer the offers I received and
serve new requests immediately from that buffer. However, the drawbacks
were that 1) no other frameworks received those resource offers while
they sat in my buffer, and 2) whenever another framework finished a
task, many resource offers for the same slave piled up (like "0.5 CPUs
and 500 MB memory", then "0.8 CPUs and 0 MB memory", etc.).
I guess once I understand these mechanisms I can work around them in my
scheduler and do *something* (like "buffer two resource offers and
decline the rest"), but is there a best-practice approach for that?

Thanks,
Tobias
