style95 commented on pull request #5194:
URL: https://github.com/apache/openwhisk/pull/5194#issuecomment-1019875232


I quickly ran benchmarks with the new scheduler and got the following 
results.
   
   ## Test Environments
   * 3 controllers(VM): 8 cores, 16GB memory
   * 3 schedulers(PM): 40 cores, 128GB memory
   * 10 invokers(VM): 8 cores, 16GB memory
     * UserMemory: 10240MB
     * 400 containers in total when using 256MB-memory containers
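   The container count follows from the invoker memory settings; a quick sketch of the arithmetic (assuming 256MB per container, as in the setup above):

```python
# Container capacity implied by the test environment
# (assumption: each container is allotted 256 MB of UserMemory).
invokers = 10
user_memory_mb = 10240      # UserMemory per invoker
container_memory_mb = 256   # assumed per-container allotment

containers_per_invoker = user_memory_mb // container_memory_mb  # 40
total_containers = invokers * containers_per_invoker            # 400
print(total_containers)
```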
   
   
   I invoked different numbers of actions, as follows:
   
   ## 100 actions.
   
   
![image](https://user-images.githubusercontent.com/3447251/150751665-16c749b8-42a9-4662-b72e-b9aaf453c0c8.png)
   
   ## 1 action.
   
![image](https://user-images.githubusercontent.com/3447251/150751750-4fe965c0-6b46-4ecc-8fc2-5c2c25bf46f9.png)
   
   There are some differences between the two results.
   For the record, the upstream version now differs from our downstream 
version: it uses a different version of the Akka family, and there are 
subtle differences in the code base as well.
   
   In our downstream version, I observed around 14,000 TPS in the same 
environment for both cases (1 action / 100 actions).
   
   In the upstream version, as you can see, it shows higher TPS in the 
100-actions case but poor performance in the 1-action case.
   In the 100-actions case it utilized all containers, while only a few 
containers were utilized in the 1-action case.
   
   I feel there is still room for improvement, especially in terms of 
performance.
   But anyway, I could confirm it is working as expected in terms of 
functionality.
   
   I will also run the same benchmark with the old scheduler and compare the 
performance.
   

