Hi Ray,

In my research, these parameters were also found heuristically. Basically, we
tested our framework on the HPX for_each using a different selected chunk size
each time. These loops had different parameters (static and dynamic) and
reacted differently to those chunk-size candidates. We then determined which
chunk size resulted in the best performance on each of those loops. That is
how we collected our training data, which we then used to train our model.
You can find the training data in HPXML on the HPX GitHub.
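For illustration only, here is a minimal sketch of that kind of measurement
loop, assuming an HPX 1.x-era API (header and namespace names differ between
HPX versions); the chunk-size candidates and the loop body are placeholders,
not the actual benchmark or training setup:

#include <hpx/hpx_main.hpp>
#include <hpx/include/parallel_for_each.hpp>
#include <hpx/include/parallel_executor_parameters.hpp>

#include <chrono>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <vector>

int main()
{
    std::vector<double> data(1000000);
    std::iota(data.begin(), data.end(), 0.0);

    // Candidate chunk sizes (placeholder values, not the ones from the paper).
    const std::size_t candidates[] = {1, 64, 1024, 8192};

    for (std::size_t cs : candidates)
    {
        // Fix the chunk size for this run via an executor parameter.
        hpx::parallel::execution::static_chunk_size scs(cs);

        auto start = std::chrono::steady_clock::now();

        hpx::parallel::for_each(
            hpx::parallel::execution::par.with(scs),
            data.begin(), data.end(),
            [](double& x) { x = x * x + 1.0; });    // placeholder loop body

        auto stop = std::chrono::steady_clock::now();

        std::cout << "chunk size " << cs << ": "
                  << std::chrono::duration<double>(stop - start).count()
                  << " s\n";
    }
    return 0;
}

Recording the best-performing chunk size per loop in this way yields the
labels for the training set.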

Thanks,
Zahra

On Wed, Feb 21, 2018 at 4:21 AM 김규래 <[email protected]> wrote:

> Hi Zahra,
> I've been reading your amazing paper for quite a while.
> There's one thing I cannot find an answer to.
>
> What were the label data that the models were trained on?
> I cannot find an explanation of how the 'optimal chunk size' and 'optimal
> prefetching distance' labels were collected.
>
> Previous work mostly states that its labels were found heuristically.
> In the case of your paper, how was this done?
>
> My respects.
> msca8h at naver dot com
> msca8h at sogang dot ac dot kr
> Ray Kim
>
-- 
Best Regards,
Zahra Khatami | PhD Student
Center for Computation & Technology (CCT)
School of Electrical Engineering & Computer Science
Louisiana State University
2027 Digital Media Center (DMC)
Baton Rouge, LA 70803
