[ https://issues.apache.org/jira/browse/PIO-182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16663444#comment-16663444 ]

ASF GitHub Bot commented on PIO-182:
------------------------------------

longliveenduro commented on issue #482: [PIO-182] Add async methods to 
LEventStore
URL: https://github.com/apache/predictionio/pull/482#issuecomment-432965202
 
 
   @takezoe The scaladoc is clear, I think. 
   
   But after letting this settle for a few days and also reading this excellent 
blog post: 
https://www.beyondthelines.net/computing/scala-future-and-execution-context/
   
   I am not sure it is really a good idea to use the "standard" Scala 
ExecutionContext for the "old" blocking code. Another idea would be to use a 
separate thread pool (as proposed in the blog post), keep the defaults the same 
as now (or perhaps raise the allowed thread count a bit to work around the 
problem for existing users of the old code), and make the size of that thread 
pool configurable, like the standard Scala thread pool.
   
   For example, this blog post suggests that a FixedThreadPool for blocking 
purposes might be a good idea:
   
https://www.cakesolutions.net/teamblogs/demystifying-the-blocking-construct-in-scala-futures
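   
   To make the idea concrete, here is a minimal sketch of such a dedicated, 
fixed-size pool for the blocking calls. The object name and the 
"pio.blocking.pool.size" property are only illustrative, not part of this PR:
   
    import java.util.concurrent.Executors
    import scala.concurrent.{ExecutionContext, Future, blocking}

    object BlockingPool {
      // Illustrative only: pool size read from a hypothetical system property,
      // defaulting to the number of available processors.
      private val poolSize: Int =
        sys.props.get("pio.blocking.pool.size").map(_.toInt)
          .getOrElse(Runtime.getRuntime.availableProcessors())

      // Dedicated fixed-size pool so blocking event-store calls do not
      // starve the ExecutionContext that runs the prediction logic.
      val executionContext: ExecutionContext =
        ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(poolSize))

      // Run a blocking operation on the dedicated pool.
      def apply[T](body: => T): Future[T] =
        Future(blocking(body))(executionContext)
    }
   
   Code that still calls the old blocking LEventStore methods could then wrap 
them in BlockingPool { ... } instead of running them on the global context.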

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add asynchronous (non-blocking) methods to LEventStore
> ------------------------------------------------------
>
>                 Key: PIO-182
>                 URL: https://issues.apache.org/jira/browse/PIO-182
>             Project: PredictionIO
>          Issue Type: Improvement
>          Components: Core
>    Affects Versions: 0.13.0
>            Reporter: Naoki Takezoe
>            Assignee: Naoki Takezoe
>            Priority: Major
>
> The current implementation of {{LEventStore}} has only synchronous (blocking) 
> methods. Since they use {{ExecutionContext.Implicits.global}}, whose parallelism 
> is limited to the number of processors, the engine server's parallelism is also 
> limited if we use these methods in prediction logic.
> To solve this problem, {{Future}}-based versions of these methods should be added 
> to {{LEventStore}}, and the current blocking methods should be modified to take 
> an {{ExecutionContext}} (as an implicit parameter).
> See also: 
> https://lists.apache.org/thread.html/f14e4f8f29410e4585b3d8e9f646b88293a605f4716d3c4d60771854@%3Cuser.predictionio.apache.org%3E
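
(For illustration only, a rough sketch of the shape proposed above, with 
simplified method names and parameters, and assuming the Event type from 
org.apache.predictionio.data.storage; the actual API is what lands in PR #482:)

    import org.apache.predictionio.data.storage.Event
    import scala.concurrent.{Await, ExecutionContext, Future}
    import scala.concurrent.duration._

    object LEventStoreSketch {
      // Non-blocking variant: returns a Future and lets the caller supply the
      // ExecutionContext instead of hard-coding the global one.
      def findByEntityAsync(appName: String, entityType: String, entityId: String)(
          implicit ec: ExecutionContext): Future[Iterator[Event]] =
        Future.successful(Iterator.empty) // placeholder body; real code queries the event store

      // Blocking variant kept for compatibility, expressed via the async one and
      // taking the ExecutionContext as an implicit parameter.
      def findByEntity(appName: String, entityType: String, entityId: String,
          timeout: Duration = 10.seconds)(
          implicit ec: ExecutionContext): Iterator[Event] =
        Await.result(findByEntityAsync(appName, entityType, entityId), timeout)
    }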



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
