style95 commented on code in PR #5348:
URL: https://github.com/apache/openwhisk/pull/5348#discussion_r1011720630
##########
core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/v2/ActivationClientProxy.scala:
##########
@@ -164,9 +166,12 @@ class ActivationClientProxy(
stay()
case _: ActionMismatch =>
- logging.error(this, s"[${containerId.asString}] action version does not match: $action")
+ val errorMsg = s"[${containerId.asString}] action version does not match: $action"
+ logging.error(this, errorMsg)
  c.activationClient.close().andThen {
- case _ => self ! ClientClosed
+ case _ =>
+   context.parent ! FailureMessage(new RuntimeException(errorMsg))
Review Comment:
Since `GracefulShutdown` is introduced in the ContainerProxy layer, the clean-up
process is generally initiated by ContainerProxy: it cleans up the ETCD data
first, sends `GracefulShutdown` to ActivationClientProxy, and waits for the
`ClientClosed` message.
But in some cases like this one, ActivationClientProxy needs to initiate the
clean-up process itself.
In these cases, ActivationClientProxy should let ContainerProxy (its parent)
know about the situation so the parent can start the clean-up process
immediately while ActivationClientProxy shuts itself down.
So we now send a `FailureMessage` to the parent, and the parent cleans up the
ETCD data and removes the container immediately via this logic:
https://github.com/apache/openwhisk/pull/5348/files#diff-23fc1c1634cd8a2e99b4cfbf342527248a53f5987911af80ab5d910ce7864d70R368
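For illustration, a rough sketch of the parent-side reaction (hypothetical
shape; `cleanUpEtcdData` and `destroyContainer` are placeholder names for the
actual clean-up steps in the linked diff):
```scala
// Hypothetical sketch: how the parent (ContainerProxy) might react when
// ActivationClientProxy reports an unrecoverable error.
whenUnhandled {
  case Event(FailureMessage(cause), data: WarmData) =>
    logging.error(this, s"client proxy failed: ${cause.getMessage}")
    cleanUpEtcdData(data)  // deregister from ETCD first so no new
                           // activations are routed to this container
    destroyContainer(data) // then remove the container immediately
}
```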
##########
core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/v2/FunctionPullingContainerProxy.scala:
##########
@@ -518,6 +505,8 @@ class FunctionPullingContainerProxy(
data.action.fullyQualifiedName(withVersion = true),
data.action.rev,
Some(data.clientProxy))
+
+ case x: Event if x.event != PingCache => delay
Review Comment:
This makes sure `GracefulShutdown` is properly handled in all states: any event
other than `PingCache` is deferred rather than dropped.
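For reference, a minimal sketch of what `delay` is assumed to do here (stash
the current message so a later `unstashAll()` replays it in the new state):
```scala
// Assumed helper shape, following the stash-based pattern used by the
// container proxies; not the verbatim implementation.
def delay: State = {
  stash() // keep the message, e.g. a GracefulShutdown received mid-transition
  stay()  // it is replayed once unstashAll() runs on the next transition
}
```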
##########
core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/v2/FunctionPullingContainerProxy.scala:
##########
@@ -272,8 +272,7 @@ class FunctionPullingContainerProxy(
job.rpcPort,
container.containerId)) match {
case Success(clientProxy) =>
- clientProxy ! StartClient
Review Comment:
Previously, this chaining of futures caused a timing issue.
As a result, we needed this case:
```scala
case Event(ClientCreationCompleted(proxy), _: NonexistentData) =>
  akka.pattern.after(3.milliseconds, actorSystem.scheduler) {
    self ! ClientCreationCompleted(proxy.orElse(Some(sender())))
    Future.successful({})
  }
```
https://github.com/apache/openwhisk/pull/5348/files#diff-23fc1c1634cd8a2e99b4cfbf342527248a53f5987911af80ab5d910ce7864d70L341
That workaround made the logic non-deterministic and less efficient, so I
refactored it.
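For context, the removed flow looked roughly like this (simplified from the
diff above):
```scala
// Old flow, simplified: StartClient was fired as soon as the client proxy
// was created, racing against the container data the FSM was still waiting
// for. If ClientCreationCompleted arrived while the state data was still
// NonexistentData, it had to be re-posted after an arbitrary delay, as in
// the quoted workaround above.
case Success(clientProxy) =>
  clientProxy ! StartClient
```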
##########
core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/v2/FunctionPullingContainerProxy.scala:
##########
@@ -334,41 +335,27 @@ class FunctionPullingContainerProxy(
when(CreatingClient) {
// wait for client creation when cold start
- case Event(job: ContainerCreatedData, _: NonexistentData) =>
- stay() using job
+ case Event(job: InitializedData, _) =>
Review Comment:
Now it sends `StartClient` to the ActivationClientProxy only after it receives
`InitializedData`, so the timing issue no longer occurs.
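Roughly, the new ordering is (simplified sketch; assuming `InitializedData`
carries the client-proxy reference, as the next comment notes):
```scala
when(CreatingClient) {
  // The container data arrives first; only then is the client started, so
  // ClientCreationCompleted can no longer beat the data into the FSM.
  case Event(job: InitializedData, _) =>
    job.clientProxy ! StartClient
    stay() using job
}
```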
##########
core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/v2/ActivationClientProxy.scala:
##########
@@ -36,7 +36,7 @@ import scala.concurrent.Future
import scala.util.{Success, Try}
// Event send by the actor
-case class ClientCreationCompleted(client: Option[ActorRef] = None)
+case object ClientCreationCompleted
Review Comment:
Since we store the client-proxy reference after creation, we no longer need to
pass it via this message, so it became a case object.
https://github.com/apache/openwhisk/pull/5348/files#diff-23fc1c1634cd8a2e99b4cfbf342527248a53f5987911af80ab5d910ce7864d70R315
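A minimal before/after view of the message:
```scala
// Before: the reference had to travel with the event.
case class ClientCreationCompleted(client: Option[ActorRef] = None)

// After: the FSM already holds the client-proxy reference in its state
// data, so a plain signal is enough.
case object ClientCreationCompleted
```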
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]