Hi Naoki,

nice, OK, I will base my work on your PR as well, so we don't need to do
the same work twice. Will your PR get merged into develop soon?

Cheers
Chris

On Tue, 16 Oct 2018 at 09:18, Naoki Takezoe <take...@gmail.com> wrote:

> Hi Chris,
>
> Oh, great. My plan was only to add async methods to LEventStore. If
> you can work on the other parts, I will update my pull request to just
> describe the default global ExecutionContext and wait for your
> work.
> On Tue, Oct 16, 2018 at 4:01 PM Chris Wewerka <chris.wewe...@gmail.com>
> wrote:
> >
> > Hi Naoki,
> >
> > thanks, that looks good. Will you continue with the other stores /
> storage types and also introduce async methods to
> Algo.predict/predictBase and then the QueryServer? I'm just asking because
> I started looking around the Query Server / Algo area yesterday.
> >
> > Cheers
> > Chris
> >
> > On Tue, 16 Oct 2018 at 03:43, Naoki Takezoe <take...@gmail.com> wrote:
> >>
> >> Hi Chris,
> >>
> >> Does this pull request work for you?
> >> https://github.com/apache/predictionio/pull/482
> >> On Sat, Oct 13, 2018 at 1:11 AM Naoki Takezoe <take...@gmail.com>
> wrote:
> >> >
> >> > I think the point is that LEventStore doesn't have asynchronous
> >> > methods. We should add methods that return Future to LEventStore and
> >> > modify the current blocking methods to take an ExecutionContext. I
> >> > created a JIRA ticket for that:
> >> > https://jira.apache.org/jira/browse/PIO-182
> >> >
> >> > On the other hand, it makes sense to cover this in the documentation.
> >> > At the very least, if we keep the existing blocking methods, we should
> >> > describe that LEventStore uses the default global ExecutionContext and
> >> > how to configure it.
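> >> >
> >> > To make that concrete, a rough sketch of what such an addition could
> >> > look like (method and parameter names here are only illustrative, not
> >> > the final API):
> >> >
> >> >     import scala.concurrent.{ExecutionContext, Future}
> >> >     import org.apache.predictionio.data.storage.Event
> >> >
> >> >     object LEventStoreSketch {
> >> >       // Hypothetical async counterpart of the existing blocking
> >> >       // lookup; the caller supplies the ExecutionContext instead of
> >> >       // the store silently using the global one.
> >> >       def findByEntityAsync(
> >> >           appName: String,
> >> >           entityType: String,
> >> >           entityId: String)(
> >> >           implicit ec: ExecutionContext): Future[Iterator[Event]] = ???
> >> >     }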
> >> >
> >> > On Fri, 12 Oct 2018 at 16:06, Chris Wewerka <chris.wewe...@gmail.com> wrote:
> >> > >
> >> > > Hi Donald,
> >> > >
> >> > > thanks for your answer and the hint to base things on Naoki's Akka
> HTTP migration. I saw the PR and already had the same idea, as it does not
> make sense to build on the old spray code. I worked with spray a couple of
> years ago, and back then it already had full support for Scala Futures /
> fully async programming. If I get the time, I will start with a fork based
> on Naoki's Akka HTTP branch.
> >> > >
> >> > > Please also have a look at my second mail, as the use of the bounded
> "standard" Scala ExecutionContext has a dramatic impact on how the machine's
> resources are leveraged. On our small "all in one" machine we didn't see
> much CPU load until yesterday, when I set the mentioned parameters to allow
> much higher thread counts in the standard Scala ExecutionContext. We have
> verified this in our small production environment and it has a huge impact.
> In fact, the Query Server acted like a dam, not letting enough requests into
> the system to use all of its resources. You might consider adding this to
> the documentation until I hopefully come up with a PR for a fully async
> engine.
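> >> > >
> >> > > For the documentation note: as far as I understand, the global pool's
> >> > > parallelism defaults to the number of available processors unless it
> >> > > is overridden with the scala.concurrent.context.* system properties.
> >> > > A quick way to check what a given machine runs with, e.g. in the
> >> > > Scala REPL (sketch only):
> >> > >
> >> > >     // Default parallelism of ExecutionContext.Implicits.global is
> >> > >     // derived from the available processors unless overridden via
> >> > >     // -Dscala.concurrent.context.numThreads / maxThreads.
> >> > >     println(Runtime.getRuntime.availableProcessors)
> >> > >     println(sys.props.getOrElse(
> >> > >       "scala.concurrent.context.maxThreads", "not set"))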
> >> > >
> >> > > Cheers
> >> > > Chris
> >> > >
> >> > > On Fri, 12 Oct 2018 at 02:18, Donald Szeto <don...@apache.org>
> wrote:
> >> > >>
> >> > >> Hi Chris,
> >> > >>
> >> > >> It is indeed a good idea to create asynchronous versions of the
> engine server! Naoki has recently completed the migration from spray to
> Akka HTTP, so you may want to base your work on that instead. Let us know
> if we can help in any way.
> >> > >>
> >> > >> I do not recall the exact reason anymore, but the engine server was
> created almost five years ago, and I don't remember whether spray could take
> futures natively as responses the way Akka HTTP can now. Nowadays there
> shouldn't be any reason not to provide asynchronous flavors of these APIs.
> >> > >>
> >> > >> Regards,
> >> > >> Donald
> >> > >>
> >> > >> On Thu, Oct 11, 2018 at 3:20 PM Naoki Takezoe <take...@gmail.com>
> wrote:
> >> > >>>
> >> > >>> Hi Chris,
> >> > >>>
> >> > >>> I think LEventStore's current blocking methods should take an
> ExecutionContext as an implicit parameter, and Future versions of the
> methods should be provided. I don't know why they aren't. Does anyone know
> the reason for the current LEventStore API?
> >> > >>>
> >> > >>> At the moment, as a workaround, you can consider using LEvents
> directly to access the Future versions of the methods.
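> >> > >>>
> >> > >>> Roughly the pattern I mean (futureFind and its parameters are
> >> > >>> quoted from memory, so please check the actual LEvents trait; the
> >> > >>> point is only that the Future can be composed instead of awaited):
> >> > >>>
> >> > >>>     import scala.concurrent.{ExecutionContext, Future}
> >> > >>>     import org.apache.predictionio.data.storage.LEvents
> >> > >>>
> >> > >>>     object LEventsWorkaroundSketch {
> >> > >>>       // Illustrative only: compose the Future instead of
> >> > >>>       // blocking on it with Await.result.
> >> > >>>       def recentEventCount(eventsDb: LEvents, appId: Int)(
> >> > >>>           implicit ec: ExecutionContext): Future[Int] =
> >> > >>>         eventsDb.futureFind(appId = appId).map(_.size)
> >> > >>>     }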
> >> > >>>
> >> > >>> On Thu, 11 Oct 2018 at 23:05, Chris Wewerka <chris.wewe...@gmail.com> wrote:
> >> > >>>
> >> > >>> >
> >> > >>> > Thanks George, good to hear that!
> >> > >>> >
> >> > >>> > Today I ran a test raising the limit on the maximum number of
> threads allowed in the "standard"
> >> > >>> >
> >> > >>> > scala.concurrent.ExecutionContext.Implicits.global
> >> > >>> >
> >> > >>> > I did this before calling "pio deploy" by adding
> >> > >>> >
> >> > >>> > export JAVA_OPTS="$JAVA_OPTS
> -Dscala.concurrent.context.numThreads=1000
> -Dscala.concurrent.context.maxThreads=1000"
> >> > >>> >
> >> > >>> > Now we see much more CPU usage from Elasticsearch. So it seems
> that the QueryServer, using the standard thread pool bounded by the number
> of available processors, acted like a dam.
> >> > >>> >
> >> > >>> > With the above values set, we now have something like a
> traditional Java EE or Spring application, which blocks threads on
> synchronous calls and creates new threads when there is demand (requests)
> for them.
> >> > >>> >
> >> > >>> > So this is far from being a good solution. Going full
> async/reactive is still the way to go in my opinion.
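> >> > >>> >
> >> > >>> > As an intermediate step short of full async, one could also give
> >> > >>> > the blocking calls a dedicated, explicitly sized pool instead of
> >> > >>> > resizing the global one (a minimal sketch, the pool size of 200
> >> > >>> > is arbitrary):
> >> > >>> >
> >> > >>> >     import java.util.concurrent.Executors
> >> > >>> >     import scala.concurrent.ExecutionContext
> >> > >>> >
> >> > >>> >     object BlockingPool {
> >> > >>> >       // Dedicated pool for blocking storage calls, so the global
> >> > >>> >       // pool (sized to the CPU core count) stops being the dam.
> >> > >>> >       val blockingEc: ExecutionContext =
> >> > >>> >         ExecutionContext.fromExecutorService(
> >> > >>> >           Executors.newFixedThreadPool(200))
> >> > >>> >     }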
> >> > >>> >
> >> > >>> > Cheers
> >> > >>> > Chris
> >> > >>> >
> >> > >>> > On Thu, 11 Oct 2018 at 14:07, George Yarish <
> gyar...@griddynamics.com> wrote:
> >> > >>> >>
> >> > >>> >>
> >> > >>> >> Hi Chris,
> >> > >>> >>
> >> > >>> >> I'm not a contributor to PredictionIO, but I want to mention
> that we are also quite interested in these changes at my company.
> >> > >>> >> We often develop custom PIO engines, and it doesn't look
> right to me to use Await.result with a non-blocking API.
> >> > >>> >> I totally agree with your point.
> >> > >>> >> Thanks for the question!
> >> > >>> >>
> >> > >>> >> George
> >> > >>>
> >> > >>>
> >> > >>>
> >> > >>> --
> >> > >>> Naoki Takezoe
> >> >
> >> >
> >> >
> >> > --
> >> > Naoki Takezoe
> >>
> >>
> >>
> >> --
> >> Naoki Takezoe
>
>
>
> --
> Naoki Takezoe
>
