migesok commented on code in PR #485:
URL: https://github.com/apache/incubator-pekko/pull/485#discussion_r1265117570


##########
actor/src/main/resources/reference.conf:
##########
@@ -482,6 +482,10 @@ pekko {
         # Setting to "FIFO" to use queue like peeking mode which "poll" or "LIFO" to use stack
         # like peeking mode which "pop".
         task-peeking-mode = "FIFO"
+
+        # This config is new in Pekko v1.1.0 and only has an effect if you are running with JDK 9 and above.
+        # Read the documentation on `java.util.concurrent.ForkJoinPool` to find out more. Default in hex is 0x7fff.
+        maximum-pool-size = 32767

Review Comment:
   Maybe it makes sense to turn the FJP configuration around and make it similar to how the JDK configures the default global FJP:
   - the "parallelism" calculation could stay the same, derived from the runtime CPU count
   - add a new parameter called something like `max-spare-threads` (similar to the "java.util.concurrent.ForkJoinPool.common.maximumSpares" system property) with 256 as the default value (again, the same as the default for the global FJP)
   - the resulting maximum pool size would then be calculated as parallelism + max-spare-threads
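   In reference.conf terms, the proposal might look roughly like this (a sketch only; the setting names here are placeholders, not actual Pekko configuration keys):

   ```hocon
   # parallelism stays derived from the runtime CPU count, as today

   # proposed: spare threads allowed on top of parallelism, mirroring the
   # JDK's "java.util.concurrent.ForkJoinPool.common.maximumSpares" default
   max-spare-threads = 256

   # proposed: whether a blocking task is rejected once the pool reaches
   # parallelism + max-spare-threads, instead of queueing without bound
   reject-blocking-if-pool-saturated = on
   ```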
   
   Additionally, I think another parameter needs to be exposed, `reject-blocking-if-pool-saturated` or something similar, which governs the `saturate` predicate in the new FJP constructor: the user should have a choice between failing a blocking task once the pool limit is reached and letting tasks keep piling up in memory.
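   As a sketch of how the two proposed settings could be wired into the JDK 9+ `ForkJoinPool` constructor (variable names and defaults here are just the hypothetical values from this comment, not anything Pekko currently does):

   ```java
   import java.util.concurrent.ForkJoinPool;
   import java.util.concurrent.TimeUnit;

   public class FjpSaturateSketch {
       public static void main(String[] args) {
           // parallelism stays derived from the runtime CPU count
           int parallelism = Runtime.getRuntime().availableProcessors();
           // hypothetical defaults mirroring the proposal above
           int maxSpareThreads = 256;
           boolean rejectBlockingIfPoolSaturated = true;

           int maximumPoolSize = parallelism + maxSpareThreads;

           // JDK 9+ constructor: the saturate predicate is consulted when a
           // compensating thread would push the pool past maximumPoolSize.
           // Returning false rejects the blocking task with a
           // RejectedExecutionException; returning true lets it proceed
           // without compensation (tasks keep piling up).
           ForkJoinPool pool = new ForkJoinPool(
               parallelism,
               ForkJoinPool.defaultForkJoinWorkerThreadFactory,
               null,            // no custom uncaught-exception handler
               true,            // asyncMode = true, i.e. FIFO task peeking
               0,               // corePoolSize
               maximumPoolSize, // parallelism + max-spare-threads
               1,               // minimumRunnable
               rejectBlockingIfPoolSaturated ? p -> false : p -> true,
               60, TimeUnit.SECONDS);

           // difference between the cap and parallelism is the spare budget
           System.out.println(maximumPoolSize - pool.getParallelism());
           pool.shutdown();
       }
   }
   ```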
   
   Does it make sense?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

