[
https://issues.apache.org/jira/browse/PIO-182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664982#comment-16664982
]
ASF GitHub Bot commented on PIO-182:
------------------------------------
longliveenduro commented on issue #482: [PIO-182] Add async methods to
LEventStore
URL: https://github.com/apache/predictionio/pull/482#issuecomment-433360671
@takezoe yes exactly, separate thread pools for CPU-bound and I/O-bound blocking
processing look like a very good idea for a transition phase. The blog posts on
this topic also suggest a fixed-size thread pool rather than a ForkJoin pool
(which the "standard" Scala execution context uses) for blocking I/O code.
Anyhow, looking at your PR, you are importing the standard Scala execution
context in the methods that still block via Await.result. What about using a
separate thread pool there, with a size that is either configurable or passed
in as a parameter? Passing it as a parameter, though, breaks the current
contract and feels a little strange, since the method does not return an async
value (e.g. a Future), which is what I would expect from a Scala method that
takes an ExecutionContext parameter.
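For illustration, a rough sketch of the two shapes being discussed (the method names and the `query` parameter are hypothetical placeholders, not the actual LEventStore API):

```scala
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration.Duration

// Hypothetical sketch only: "query" stands in for an LEventStore lookup.
object ContractSketch {

  // Surprising shape: takes an ExecutionContext yet still blocks and returns
  // a plain value, so the parameter hints at an async API that is not there.
  def findBlocking(query: String, timeout: Duration)
                  (implicit ec: ExecutionContext): List[String] =
    Await.result(findAsync(query), timeout)

  // Natural shape: the ExecutionContext parameter matches the Future return type.
  def findAsync(query: String)
               (implicit ec: ExecutionContext): Future[List[String]] =
    Future(List("event-1", "event-2")) // placeholder for the real blocking I/O call
}
```

The second shape is the one I would expect whenever an ExecutionContext shows up in the signature.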
Very interested in what you think!
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
> Add asynchronous (non-blocking) methods to LEventStore
> ------------------------------------------------------
>
> Key: PIO-182
> URL: https://issues.apache.org/jira/browse/PIO-182
> Project: PredictionIO
> Issue Type: Improvement
> Components: Core
> Affects Versions: 0.13.0
> Reporter: Naoki Takezoe
> Assignee: Naoki Takezoe
> Priority: Major
>
> The current implementation of {{LEventStore}} has only synchronous (blocking)
> methods. Since they use {{ExecutionContext.Implicits.global}}, their parallelism
> is limited to the number of processors. This means the engine server's
> parallelism is also limited if these methods are used in prediction logic.
> To solve this problem, {{Future}} versions of these methods should be added to
> {{LEventStore}}, and the current blocking methods should be modified to take an
> {{ExecutionContext}} (as an implicit parameter).
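> For illustration only, the resulting API shape could look roughly like the sketch
> below (method names, parameters and the placeholder return type are assumptions,
> not the actual {{LEventStore}} signatures):
> {code:scala}
> import scala.concurrent.{ExecutionContext, Future}
> import scala.concurrent.duration.Duration
>
> // Sketch of the proposed shape; names and the String placeholder for Event
> // are illustrative, not the real LEventStore signatures.
> trait EventStoreSketch {
>   // New non-blocking variant returning a Future.
>   def findByEntityAsync(appName: String, entityType: String, entityId: String)
>                        (implicit ec: ExecutionContext): Future[Iterator[String]]
>
>   // Existing blocking variant, now taking its ExecutionContext implicitly
>   // instead of hard-coding ExecutionContext.Implicits.global.
>   def findByEntity(appName: String, entityType: String, entityId: String,
>                    timeout: Duration)
>                   (implicit ec: ExecutionContext): Iterator[String]
> }
> {code}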
> See also:
> https://lists.apache.org/thread.html/f14e4f8f29410e4585b3d8e9f646b88293a605f4716d3c4d60771854@%3Cuser.predictionio.apache.org%3E
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)