Thanks Dominic!

Yep, that's exactly the thought.

To your questions:

# 1. How do loadbalancers keep the state:

They stay as they are. Today, the Semaphores hold slots calculated from 
cpu-shares; in the future they will hold memory-based slots instead. No change 
needed there in my opinion.
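
To illustrate the difference, here is a minimal sketch of how the per-invoker 
slot count might be derived in both cases. The parameter names are illustrative, 
not actual OpenWhisk configuration keys:

```scala
// Sketch: deriving the number of semaphore slots for one invoker.
object SlotCalculation {
  // Today: slots are calculated from cpu-shares.
  def cpuShareSlots(totalCpuShares: Int, sharesPerAction: Int): Int =
    totalCpuShares / sharesPerAction

  // Future: slots are calculated from the invoker's available memory.
  def memorySlots(invokerUserMemoryMB: Int, slotSizeMB: Int): Int =
    invokerUserMemoryMB / slotSizeMB
}
```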

# 2. How are slots shared among loadbalancers:

Same answer as above: like today! In your example, each loadbalancer will have 
16 slots to give away (assuming 2 controllers). The wrinkle is that the maximum 
possible memory size of a single action is limited by each loadbalancer's share, 
which shrinks as the number of loadbalancers in the system grows. For a first 
step, this might be fine. In the future we need to implement vertical sharding, 
where the loadbalancers divide the invoker pool among themselves, to make bigger 
memory sizes possible again. Good one!
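
A small sketch of that split, assuming the slots are divided evenly among the 
loadbalancers (the 32-slot total is just the number implied by your example, 
not a fixed value):

```scala
// Sketch: each loadbalancer hands out an even share of the available slots.
// E.g. 32 slots and 2 controllers => 16 slots per loadbalancer; adding more
// loadbalancers shrinks each share and thus the biggest action that fits.
object SlotSharing {
  def slotsPerLoadbalancer(totalSlots: Int, numLoadbalancers: Int): Int =
    totalSlots / numLoadbalancers
}
```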

Another wrinkle is that fragmentation gets worse as the number of loadbalancers 
increases. Again, I think this is acceptable for now, given that the recommended 
number of controllers is rather small today.

# 3. Throttling mechanism:

Very good one, I missed that in my initial proposal. Today, we limit the number 
of concurrent activations, or phrased differently, the number of slots occupied 
at any point in time. The throttling can keep that semantic of "number of slots 
occupied at any point in time", which will then effectively limit the amount of 
memory a user can consume in the system: if a user has 1000 slots free, she can 
have 250x 512MB activations running, or 500x 256MB activations (or any mixture, 
of course).
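
A sketch of such a memory-aware throttling check. The 128 MB slot size is an 
assumption chosen so that the numbers above work out (512 MB = 4 slots, 
256 MB = 2 slots); treat it as illustrative, not as a fixed constant:

```scala
// Sketch: throttling on memory-weighted slots instead of plain activation counts.
object MemoryAwareThrottle {
  val slotSizeMB = 128 // assumed slot size, see note above

  // How many slots an activation of the given memory size occupies.
  def slotsNeeded(actionMemoryMB: Int): Int =
    math.ceil(actionMemoryMB.toDouble / slotSizeMB).toInt

  // Admit the activation only if the user still has enough free slots.
  def allowActivation(freeSlots: Int, actionMemoryMB: Int): Boolean =
    freeSlots >= slotsNeeded(actionMemoryMB)
}

// With 1000 free slots: 1000 / 4 = 250 concurrent 512 MB activations,
// or 1000 / 2 = 500 concurrent 256 MB activations.
```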

It's important that we provide a migration path though, as this will change the 
behavior in production systems. We could make the throttling strategy 
configurable and decide between "maximumConcurrentActivations", which ignores 
the weight of an action and behaves just like today, and "memoryAwareWeights", 
which is the new way of throttling described above.
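
What the configurable strategy could look like, roughly. The strategy names 
mirror the ones above; the trait itself and how it would be wired into the 
configuration are hypothetical:

```scala
// Sketch: a pluggable throttling strategy selected via configuration.
sealed trait ThrottlingStrategy {
  // How many slots an activation of the given memory size occupies.
  def slotsNeeded(actionMemoryMB: Int): Int
}

// Behaves like today: every activation costs exactly one slot, regardless of memory.
case object MaximumConcurrentActivations extends ThrottlingStrategy {
  def slotsNeeded(actionMemoryMB: Int): Int = 1
}

// New behavior: an activation costs as many slots as its memory footprint requires.
final case class MemoryAwareWeights(slotSizeMB: Int) extends ThrottlingStrategy {
  def slotsNeeded(actionMemoryMB: Int): Int =
    math.ceil(actionMemoryMB.toDouble / slotSizeMB).toInt
}
```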

Cheers,
Markus
