chetanmeh commented on a change in pull request #2795: enable concurrent activation processing
URL:
https://github.com/apache/incubator-openwhisk/pull/2795#discussion_r225466727
##########
File path: core/invoker/src/main/scala/whisk/core/invoker/InvokerReactive.scala
##########
@@ -101,15 +102,16 @@ class InvokerReactive(
private val topic = s"invoker${instance.toInt}"
private val maximumContainers = (poolConfig.userMemory / MemoryLimit.minMemory).toInt
private val msgProvider = SpiLoader.get[MessagingProvider]
- private val consumer = msgProvider.getConsumer(
- config,
- topic,
- topic,
- maximumContainers,
- maxPollInterval = TimeLimit.MAX_DURATION + 1.minute)
+
+  // number of peeked messages - increasing the concurrentPeekFactor improves concurrent usage, but adds risk of message loss in case of a crash
+  private val maxPeek =
+    math.max(maximumContainers, (maximumContainers * limitsConfig.max * poolConfig.concurrentPeekFactor).toInt)
Review comment:
In the default setup, with `limitsConfig.max` = 500, `MemoryLimit.minMemory` = 128 MB, and a peek factor of 0.5, maxPeek works out to roughly userMemory_in_MB * 2. So on a box with, say, userMemory = 20 GB, maxPeek ≈ 20 * 1024 * 2 ≈ 40,000.
That is a high number, and if those activations are large they can bring down the invoker. Maybe we should use a lower default value for `limitsConfig.max`, and later look into a more streaming way of reading activations from Kafka.
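To make the concern concrete, here is a standalone sketch of the maxPeek arithmetic from the diff above, plugged with the defaults cited in this comment (`limitsConfig.max` = 500, `MemoryLimit.minMemory` = 128 MB, `concurrentPeekFactor` = 0.5); the object and variable names are illustrative only, not the actual `InvokerReactive` fields:

```scala
// Illustrative sketch (not the real InvokerReactive code): shows how maxPeek
// scales with invoker memory under the default limits discussed here.
object MaxPeekSketch extends App {
  val userMemoryMB = 20 * 1024 // assumed 20 GB invoker, as in the example above
  val minMemoryMB  = 128       // MemoryLimit.minMemory default
  val limitsMax    = 500       // limitsConfig.max default
  val peekFactor   = 0.5       // poolConfig.concurrentPeekFactor default

  val maximumContainers = userMemoryMB / minMemoryMB // 160 containers
  // Same formula as the diff: peek at least one message per container,
  // scaled up by the per-container concurrency limit and the peek factor.
  val maxPeek = math.max(maximumContainers,
    (maximumContainers * limitsMax * peekFactor).toInt) // 40000

  println(s"maximumContainers=$maximumContainers maxPeek=$maxPeek")
}
```

Because maxPeek grows linearly with both memory and `limitsConfig.max`, either knob alone can push the number of in-flight peeked messages into the tens of thousands, which is the risk flagged above.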
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services