style95 commented on code in PR #5313:
URL: https://github.com/apache/openwhisk/pull/5313#discussion_r950628216
##########
core/scheduler/src/main/scala/org/apache/openwhisk/core/scheduler/container/ContainerManager.scala:
##########
@@ -157,43 +139,64 @@ class ContainerManager(jobManagerFactory: ActorRefFactory => ActorRef,
     case _ =>
   }
-  private def createContainer(msgs: List[ContainerCreationMessage],
-                              memory: ByteSize,
-                              invocationNamespace: String): Unit = {
+  private def createContainer(msgs: List[ContainerCreationMessage], memory: ByteSize, invocationNamespace: String)(
+    implicit logging: Logging): Unit = {
     logging.info(this, s"received ${msgs.size} creation message [${msgs.head.invocationNamespace}:${msgs.head.action}]")
-    val coldCreations = filterWarmedCreations(msgs)
-    if (coldCreations.nonEmpty)
-      ContainerManager
-        .getAvailableInvokers(etcdClient, memory, invocationNamespace)
-        .flatMap { invokers =>
-          if (invokers.isEmpty) {
-            coldCreations.foreach { msg =>
-              ContainerManager.sendState(
-                FailedCreationJob(
-                  msg.creationId,
-                  msg.invocationNamespace,
-                  msg.action,
-                  msg.revision,
-                  NoAvailableInvokersError,
-                  s"No available invokers."))
-            }
-            Future.failed(NoCapacityException("No available invokers."))
-          } else {
-            coldCreations.foreach { msg =>
-              creationJobManager ! RegisterCreationJob(msg)
-            }
+    ContainerManager
+      .getAvailableInvokers(etcdClient, memory, invocationNamespace)
+      .foreach { invokers =>
+        if (invokers.isEmpty) {
+          msgs.foreach(ContainerManager.sendState(_, NoAvailableInvokersError, s"No available invokers."))
+        } else {
+          val (coldCreations, warmedCreations) =
+            ContainerManager.filterWarmedCreations(warmedContainers, inProgressWarmedContainers, invokers, msgs)
+
+          // handle warmed creation
+          val chosenInvokers: immutable.Seq[Option[(Int, ContainerCreationMessage)]] = warmedCreations.map {
+            warmedCreation =>
+              // update the in-progress map for warmed containers
+              // even if it is done in the filterWarmedCreations method, it is still necessary to apply the change to the original map
+              warmedCreation._3.foreach(inProgressWarmedContainers.update(warmedCreation._1.creationId.asString, _))
+
+              // send creation message to the target invoker
+              warmedCreation._2 map { chosenInvoker =>
+                val msg = warmedCreation._1
+                creationJobManager ! RegisterCreationJob(msg)
+                sendCreationContainerToInvoker(messagingProducer, chosenInvoker, msg)
+                (chosenInvoker, msg)
+              }
+          }
-            Future {
-              ContainerManager
-                .schedule(invokers, coldCreations, memory)
-                .map { pair =>
-                  sendCreationContainerToInvoker(messagingProducer, pair.invokerId.toInt, pair.msg)
-                }
+          val updatedInvokers = chosenInvokers.foldLeft(invokers) { (invokers, chosenInvoker) =>
+            chosenInvoker match {
+              case Some((chosenInvoker, msg)) =>
+                updateInvokerMemory(chosenInvoker, msg.whiskActionMetaData.limits.memory.megabytes, invokers)
Review Comment:
> because it would not consume memory.
First, it "would" consume memory.
When an invoker advertises its memory, it does not include the memory of warmed containers.
So no matter how many warmed containers there are, the invoker will advertise its full free memory as long as no container is actually running.
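To make that first point concrete, here is a minimal sketch of the bookkeeping, assuming simplified, illustrative names (`ContainerInfo`, `advertisedFreeMemoryMB`) rather than the actual OpenWhisk implementation:

```scala
// Illustrative only: advertised free memory subtracts running containers,
// but warmed (idle) containers are not counted against it.
case class ContainerInfo(memoryMB: Long, running: Boolean)

def advertisedFreeMemoryMB(totalUserMemoryMB: Long, containers: List[ContainerInfo]): Long = {
  val busyMB = containers.filter(_.running).map(_.memoryMB).sum
  totalUserMemoryMB - busyMB
}

// 2048MB total, three warmed-but-idle 256MB containers, nothing running:
// advertisedFreeMemoryMB(2048, List.fill(3)(ContainerInfo(256, running = false))) == 2048
```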
Second, when we schedule containers, say we need to schedule 5 containers and the first 3 are scheduled to some invokers, we need to take that scheduling result into account and reduce those invokers' free memory when scheduling the remaining 2 containers.
Since the `getAvailableInvokers` method is executed only once for each container creation list, we have to schedule containers based on a snapshot of invoker memory.
Still, that snapshot should be updated during scheduling, so that at the very least we don't send a container creation request to an invoker that, according to the snapshot, no longer has enough resources.
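As a rough sketch of that snapshot idea (with simplified types; the real code works on `InvokerHealth` values and the `updateInvokerMemory` helper shown above):

```scala
// Hypothetical, simplified model: fold over the creation messages, pick an
// invoker with enough free memory in the local snapshot, and decrement that
// invoker's free memory so later messages in the same list see the reduced value.
case class InvokerSnapshot(id: Int, freeMemoryMB: Long)

def scheduleOnSnapshot(requiredMBs: List[Long],
                       invokers: List[InvokerSnapshot]): (List[(Int, Long)], List[InvokerSnapshot]) =
  requiredMBs.foldLeft((List.empty[(Int, Long)], invokers)) {
    case ((assigned, snapshot), requiredMB) =>
      snapshot.find(_.freeMemoryMB >= requiredMB) match {
        case Some(chosen) =>
          // record the assignment and update only the local snapshot
          val updated = snapshot.map { inv =>
            if (inv.id == chosen.id) inv.copy(freeMemoryMB = inv.freeMemoryMB - requiredMB) else inv
          }
          ((chosen.id, requiredMB) :: assigned, updated)
        case None =>
          // no invoker has enough memory left in the snapshot; the real code fails the creation instead
          (assigned, snapshot)
      }
  }
```

With the 5-container example, the first 3 assignments shrink the snapshot, so the remaining 2 are matched against the reduced free memory rather than the values originally fetched from ETCD.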
For the next container creation list, the scheduler will fetch the actual invoker resources again and do the same.
And even if we update the invoker resources here, the actual invoker resources are only updated by the invokers themselves and advertised via ETCD; here we just update the snapshot of invoker resources for the subsequent scheduling.
As you said, there can be other schedulers, and they may also schedule some containers to the same invoker.
But invokers are randomly selected, so concurrent requests would rarely saturate a single invoker.
When there are not enough resources in the whole cluster, that may happen frequently, but if such a situation persists, it implies we need to scale out the cluster.
Also, even if an invoker without enough resources receives a container creation request, it will send the request back so it can be rescheduled to another invoker.