[hpx-users] Full Time Positions at Oracle

2018-11-19 Thread Zahra Khatami
Hello HPX users!

There are several full-time open positions in the database group at Oracle
(San Francisco Bay Area) for students graduating in Summer 2019. If you are
interested, please send me your resume.

Best Regards,

*Zahra Khatami* | Member of Technical Staff
Virtual OS
Oracle
400 Oracle Parkway
Redwood City, CA 94065
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] HPX Smart executors for_each and for_loop

2018-03-15 Thread Zahra Khatami
For HPX loops we have hpx::for_each, so by using namespace hpx in your
code you can simply use for_each, right?
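For example, a minimal sketch of such a loop (assuming the adaptive_chunk_size
executor parameter from the hpxML work discussed here is available; the exact
namespaces and header names follow HPX 1.x and may differ in your version)
could look like this:

    #include <hpx/hpx_main.hpp>
    #include <hpx/include/parallel_for_each.hpp>
    #include <vector>

    int main()
    {
        std::vector<double> v(1000000, 1.0);

        // Attach the (assumed) adaptive chunk-size parameter to the parallel
        // policy; the learned model then picks the chunk size at runtime.
        auto policy = hpx::parallel::execution::par.with(
            hpx::parallel::execution::adaptive_chunk_size());

        hpx::parallel::for_each(policy, v.begin(), v.end(),
            [](double& x) { x *= 2.0; });

        return 0;
    }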

Zahra

On Thu, Mar 15, 2018 at 8:39 AM Patrick Diehl <patrickdie...@gmail.com>
wrote:

> Sometimes the documentation is not aligned with the current version of
> the code and things are missing there.
>
> On 14/03/18 08:26 PM, Gabriel Laberge wrote:
> > Also I wonder, why the smart executors make_prefetcher_policy and
> > adaptive_chunk_size are not present on the list of execution policies
> > [1]
> https://stellar-group.github.io/hpx/docs/html/hpx/manual/parallel/executor_parameters.html
> >
> > Gabriel Laberge <gabriel.labe...@polymtl.ca> wrote:
> >
> >> Ok thank you very much.
> >> But I'm wondering why for_each loops are used in the article?
> >> When code is shown in the article smart executors are used as
> >> execution policies in for_each loops.
> >> Is that something that was previously implemented?
> >> Zahra Khatami <z.khatam...@gmail.com> wrote:
> >>
> >>> Hi Gabriel,
> >>>
> >>> They are tested on HPX for loop.
> >>>
> >>> Zahra
> >>>
> >>> On Tue, Mar 13, 2018 at 9:27 AM Gabriel Laberge <
> gabriel.labe...@polymtl.ca>
> >>> wrote:
> >>>
> >>>> Hi,
> >>>> in the article [0]
> http://stellar.cct.lsu.edu/pubs/khatami_espm2_2017.pdf
> >>>> smart executors adaptive_chunk_size and make_prefetcher_distance are
> >>>> tested as execution policies on for_each loops. I was wondering if
> >>>> they also work as execution policies of HPX's for_loop.
> >>>>
> >>>> Thank you.
> >>>> Gabriel.
> >>>>
> >>>>
> >>>>
> >>>> ___
> >>>> hpx-users mailing list
> >>>> hpx-users@stellar.cct.lsu.edu
> >>>> https://mail.cct.lsu.edu/mailman/listinfo/hpx-users
> >>>>
> >>> --
> >>> Best Regards,
> >>>
> >>> *Zahra Khatami* | Member of Technical Staff
> >>> Virtual OS
> >>> Oracle
> >>> 400 Oracle Parkway
> >>> Redwood City, CA 94065
> >>
> >>
> >>
> >> ___
> >> hpx-users mailing list
> >> hpx-users@stellar.cct.lsu.edu
> >> https://mail.cct.lsu.edu/mailman/listinfo/hpx-users
> >
> >
> >
> > ___
> > hpx-users mailing list
> > hpx-users@stellar.cct.lsu.edu
> > https://mail.cct.lsu.edu/mailman/listinfo/hpx-users
> >
> ___
> hpx-users mailing list
> hpx-users@stellar.cct.lsu.edu
> https://mail.cct.lsu.edu/mailman/listinfo/hpx-users
>
-- 
Best Regards,

*Zahra Khatami* | Member of Technical Staff
Virtual OS
Oracle
400 Oracle Parkway
Redwood City, CA 94065
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] HPX Smart executors for_each and for_loop

2018-03-13 Thread Zahra Khatami
Hi Gabriel,

They were tested on the HPX for loop.

Zahra

On Tue, Mar 13, 2018 at 9:27 AM Gabriel Laberge <gabriel.labe...@polymtl.ca>
wrote:

> Hi,
> in the article [0] http://stellar.cct.lsu.edu/pubs/khatami_espm2_2017.pdf
> smart executors adaptive_chunk_size and make_prefetcher_distance are
> tested as execution policies on for_each loops. I was wondering if
> they also work as execution policies of HPX's for_loop.
>
> Thank you.
> Gabriel.
>
>
>
> ___
> hpx-users mailing list
> hpx-users@stellar.cct.lsu.edu
> https://mail.cct.lsu.edu/mailman/listinfo/hpx-users
>
-- 
Best Regards,

*Zahra Khatami* | Member of Technical Staff
Virtual OS
Oracle
400 Oracle Parkway
Redwood City, CA 94065
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] Continuous regressions instead of multi-class classification

2018-03-06 Thread Zahra Khatami
Hi Gabriel,

Sorry for my late responses; I am not at the university anymore and am
working at Oracle, so I am kept quite busy with my ongoing projects. I would
still like to help you understand the concepts as much as possible, and you
can also count on Patrick and Dr. Kaiser if I turn out to be hard to reach ;)
About your question: yes, you can choose different candidate values and even
a different number of candidates, but they should be chosen wisely. You can
run tests on your chosen candidates to see how your learning model behaves
with them; if the candidates are chosen at random, the model will not perform
as expected!
You are also free to choose a different learning model.
However, our main focus in this project is to use online machine learning
methods, so that collecting offline data is no longer required and the system
learns as it receives new data over time. For the first phase of this
project, though, please continue studying different learning models and
perhaps experiment with different candidates.
If possible and you have time, you can also start writing your proposal so we
can finalize it together; that is the best way to clarify the steps of this
project.

Thanks,
Zahra

On Fri, Mar 2, 2018 at 4:46 PM Gabriel Laberge <gabriel.labe...@polymtl.ca>
wrote:

> Hi,
> I have been asking you many questions recently, but that's because I'm
> really getting invested in the project.
>
>
> So, the multinomial regression used to find the optimal chunk
> size and prefetching distance is in actuality a multi-class
> classification algorithm. But I wanted to ask you whether chunk size and
> prefetching distance could be interpreted as continuous variables. For
> example, your regression can only return chunk sizes of 0.1%, 1%, 10% and
> 50%. But do chunk sizes of 34%, 23% or 56% make sense? I'm not sure.
> Also, for prefetching distance, the values used are 10, 50, 100, 1000 and
> 5000. But would values of 543, 23 or 4851 also make sense?
>
> If yes, I believe those variables could be treated as 'almost'
> continuous variables, so a regression algorithm could be used instead
> of a classification one. This would allow us to get more precise values,
> since the regression could (in the case of prefetching distance)
> output any integer between 0 and 5000. Such a regression could be
> compared to the multinomial regression model in order to find out
> whether such precision is really required.
>
> Thank you very much.
> Gabriel.
>
>
> --
Best Regards,

*Zahra Khatami* | Member of Technical Staff
Virtual OS
Oracle
400 Oracle Parkway
Redwood City, CA 94065
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] HPX smart executors questions

2018-02-21 Thread Zahra Khatami
Hi Gabriel,

Thanks for your interest in this project.
The logistic regression model was chosen because it had been used in similar
projects before, but this project should easily work with other learning
models too. A binary regression model was chosen for selecting the optimum
policy, since the target was to choose either sequential or parallel as the
policy (0 or 1 -> binary). For chunk sizes and prefetching distances, the
optimum parameter is chosen from more than two candidates, which is why we
used a multinomial regression model.
About your last question: do you mean using one training data set and one
training model to choose chunk sizes, prefetching distances and policies
together? I don't think that's a good idea, since each of them has
different candidates and needs totally different training data.
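Just as an illustration (these are the standard textbook forms, not code
taken from the project): the binary model estimates something like

    P(parallel | x) = 1 / (1 + exp(-w . x))

and the parallel policy is selected when that probability exceeds 0.5, while
the multinomial model over K chunk-size candidates c_1, ..., c_K estimates

    P(c_k | x) = exp(w_k . x) / (exp(w_1 . x) + ... + exp(w_K . x))

and selects the candidate with the highest probability, where x is the vector
of static and dynamic loop features and the w's are the trained weight
vectors.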

Thanks,
Zahra

On Mon, Feb 19, 2018 at 3:44 PM Gabriel Laberge <gabriel.labe...@polymtl.ca>
wrote:

>
> Hi
> I'm Gabriel Laberge and I'm interested in doing the "Applying Machine
> Learning Techniques on HPX Parallel Algorithms" project. I'm quite
> new to machine learning but I expect to learn a lot during the
> project. I have a few questions about the HPX smart
> executors after reading the article.
>
>
> First off, why was logistic regression chosen over the other methods that
> you cited in the article (NN, SVM and decision tree)? Would it be
> possible to implement those methods in one compilation?
>
> Secondly, I was wondering why you used a binary regression to choose
> between sequential and parallel algorithms and a multinomial
> regression to choose the chunk size and prefetching distance. Would
> there be a possibility to use only one regression to choose all 3
> parameters?
>
> Thank you for your time.
> Gabriel.
>
> --
Best Regards,

*Zahra Khatami* | PhD Student
Center for Computation & Technology (CCT)
School of Electrical Engineering & Computer Science
Louisiana State University
2027 Digital Media Center (DMC)
Baton Rouge, LA 70803
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] Generating Data for HPX smart executors

2018-02-21 Thread Zahra Khatami
Gabriel,

I am not sure if I understand your concern correctly. The optimal
parameters (chunk size, prefetching distance or policy) are not meant to be
found before training; they are found for each of the hpx loops at
runtime, based on the loop's static and dynamic parameters. That is the main
goal of this research. The candidates for these optimal parameters are chosen
when training the model; the optimal one is then selected among them at
runtime, which may be different for each loop with different parameters.
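As a purely illustrative sketch (this is not the hpxML implementation; the
feature names, candidate values and weights below are made up for the
example), the runtime selection among trained candidates could look roughly
like this:

    #include <array>
    #include <cstddef>

    // Hypothetical runtime features of a loop, e.g. iteration count,
    // operations per iteration and number of OS threads.
    struct LoopFeatures { double num_iterations, ops_per_iteration, num_threads; };

    constexpr std::size_t K = 4;  // number of chunk-size candidates
    const std::array<double, K> candidates = {0.001, 0.01, 0.1, 0.5};
    // One weight row per candidate (bias, w_iterations, w_ops, w_threads),
    // produced by offline training.
    const double w[K][4] = {};

    double select_chunk_size(LoopFeatures const& f)
    {
        std::size_t best = 0;
        double best_score = -1e300;
        for (std::size_t k = 0; k < K; ++k)
        {
            // Linear score per candidate; softmax is monotone, so comparing
            // the linear scores is enough to pick the most probable one.
            double s = w[k][0] + w[k][1] * f.num_iterations +
                       w[k][2] * f.ops_per_iteration + w[k][3] * f.num_threads;
            if (s > best_score) { best_score = s; best = k; }
        }
        return candidates[best];
    }

    int main()
    {
        LoopFeatures f{1000000.0, 12.0, 8.0};
        return select_chunk_size(f) > 0.0 ? 0 : 1;
    }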

Thanks,
Zahra

On Tue, Feb 20, 2018 at 7:51 AM Gabriel Laberge <gabriel.labe...@polymtl.ca>
wrote:

> Hi,
> I had a question about the way the data was generated in order to train the
> logistic regression models discussed in [0]
> https://arxiv.org/pdf/1711.01519.pdf
> For each of the training examples, the optimal execution
> policies, chunk sizes and prefetching distances had to be found before
> the training process in order to have good data. I wonder if the
> optimal parameters for the training examples were found by trial and
> error or if there is another technique.
> Thank you.
>
>
>
> ___
> hpx-users mailing list
> hpx-users@stellar.cct.lsu.edu
> https://mail.cct.lsu.edu/mailman/listinfo/hpx-users
>
-- 
Best Regards,

*Zahra Khatami* | PhD Student
Center for Computation & Technology (CCT)
School of Electrical Engineering & Computer Science
Louisiana State University
2027 Digital Media Center (DMC)
Baton Rouge, LA 70803
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] GSOC 2018, question about "smart executors" paper

2018-02-21 Thread Zahra Khatami
Hi Ray,

In my research these parameters were also found heuristically. Basically, we
tested our framework on hpx for_each using a different selected chunk size
each time. The loops had different (static and dynamic) parameters and
reacted differently to those chunk-size candidates. We then determined
which chunk size resulted in the best performance on each of those loops.
That is how we collected the training data we used to train our model. You
can find the training data in hpxML on the HPX GitHub.
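As an illustrative sketch of that data-collection step (this is not the
actual hpxML code; the header and parameter names follow HPX 1.x and the
candidate values are arbitrary), timing each candidate chunk size and keeping
the fastest one could look like this:

    #include <hpx/hpx_main.hpp>
    #include <hpx/include/parallel_for_each.hpp>
    #include <hpx/include/parallel_executor_parameters.hpp>
    #include <chrono>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    int main()
    {
        std::vector<double> data(1000000, 1.0);
        const std::vector<std::size_t> candidates = {100, 1000, 10000, 100000};

        std::size_t best_chunk = candidates.front();
        auto best_time = std::chrono::duration<double>::max();

        for (std::size_t chunk : candidates)
        {
            auto policy = hpx::parallel::execution::par.with(
                hpx::parallel::execution::static_chunk_size(chunk));

            auto t0 = std::chrono::steady_clock::now();
            hpx::parallel::for_each(policy, data.begin(), data.end(),
                [](double& x) { x = std::sin(x); });
            auto dt = std::chrono::steady_clock::now() - t0;

            // Keep the candidate with the shortest measured runtime.
            if (dt < best_time) { best_time = dt; best_chunk = chunk; }
        }

        // best_chunk becomes the label attached to this loop's feature vector.
        return best_chunk != 0 ? 0 : 1;
    }

Repeating this over many loops with different static and dynamic parameters
gives the labeled examples used to train the model.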

Thanks,
Zahra

On Wed, Feb 21, 2018 at 4:21 AM 김규래 <msc...@naver.com> wrote:

> Hi Zahra,
> I've been reading your amazing paper for quite a while.
> There's one thing I cannot find an answer to.
>
> What were the label data that the models were trained on?
> I cannot find explanation about how 'optimal chunk size' and 'optimal
> prefetching distance' labels were collected.
>
> Previous work mostly states heuristically found labels.
> In the case of your paper, how was this done?
>
> My respects.
> msca8h at naver dot com
> msca8h at sogang dot ac dot kr
> Ray Kim
>
-- 
Best Regards,

*Zahra Khatami* | PhD Student
Center for Computation & Technology (CCT)
School of Electrical Engineering & Computer Science
Louisiana State University
2027 Digital Media Center (DMC)
Baton Rouge, LA 70803
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] GSoC 2018, on "applying machine learning technques ..." project

2018-02-19 Thread Zahra Khatami
Hi Ray,

If you look at the published paper, you will find more information.
Generally speaking, this project uses the compiler and the runtime system to
gather both static and dynamic information in order to set HPX algorithm
parameters, such as chunk sizes, efficiently. The static information is
gathered by the compiler: we used Clang and developed a new Clang class for
this purpose. The dynamic information is gathered by new HPX policies that we
developed for the same purpose. You can look at the example in hpxML on the
HPX GitHub.
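For example, the prefetching side of those policies is the smart executor
referred to as make_prefetcher_policy elsewhere on this list; the sketch
below only illustrates the idea, the exact namespace and signature are
assumptions here and may differ between HPX versions, and the prefetching
distance factor is an arbitrary value:

    #include <hpx/hpx_main.hpp>
    #include <hpx/include/parallel_for_each.hpp>
    #include <boost/iterator/counting_iterator.hpp>
    #include <cstddef>
    #include <vector>

    int main()
    {
        std::size_t n = 1000000;
        std::vector<double> a(n), b(n, 1.0), c(n, 2.0);
        std::size_t prefetch_distance_factor = 2;  // illustrative value only

        // Assumed interface: wrap the parallel policy so that the elements of
        // b and c are prefetched ahead of the iterations that use them.
        auto policy = hpx::parallel::execution::make_prefetcher_policy(
            hpx::parallel::execution::par, prefetch_distance_factor, b, c);

        hpx::parallel::for_each(policy,
            boost::counting_iterator<std::size_t>(0),
            boost::counting_iterator<std::size_t>(n),
            [&](std::size_t i) { a[i] = b[i] + c[i]; });

        return 0;
    }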

Thanks,
Zahra

On Mon, Feb 19, 2018 at 9:04 AM 김규래 <msc...@naver.com> wrote:

> Hi Adrian,
>
> Thanks for clarifying.
>
> I think I pretty much get the picture.
>
>
>
> Looking forward to getting in touch with Patrick on IRC within this week.
>
>
>
> Thanks everyone.
>
>
>
> msca8h at naver dot com
>
> msca8h at sogang dot ac dot kr
>
> Ray Kim
> ___
> hpx-users mailing list
> hpx-users@stellar.cct.lsu.edu
> https://mail.cct.lsu.edu/mailman/listinfo/hpx-users
>
-- 
Best Regards,

*Zahra Khatami* | PhD Student
Center for Computation & Technology (CCT)
School of Electrical Engineering & Computer Science
Louisiana State University
2027 Digital Media Center (DMC)
Baton Rouge, LA 70803
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] FW: Applying machine learning techniques on HPX algorithms

2017-04-02 Thread Zahra Khatami
Hi Madhavan,

Our focus for the summer is to implement our technique in distributed
applications. These ML algorithms are basic ones; so far we have implemented
the logistic regression model.
We have introduced a new ClangTool which enables runtime techniques, such as
for_each, to use ML algorithms for selecting their parameters, such as the
chunk size.

Madhavan, as I remember you told me that you have already submitted a
proposal for another project, am I right?
As far as I know, a student cannot work on more than one project and will not
be paid for more than one.
So I am not sure whether you can work on two projects at the same time.



Best Regards,

*Zahra Khatami* | PhD Student
Center for Computation & Technology (CCT)
School of Electrical Engineering & Computer Science
Louisiana State University
2027 Digital Media Center (DMC)
Baton Rouge, LA 70803


On Sun, Apr 2, 2017 at 1:10 AM, #SESHADRI MADHAVAN# <
madhavan...@e.ntu.edu.sg> wrote:

> Hi Zahra,
>
>
>
> I had a brief look at the code. Can I understand the direction in which
> you want to proceed with the project?
>
>
>
> Currently, I see that a few basic algorithms have been implemented over
> the hpx framework. So, am I right to assume that more basic algorithms are
> to be implemented on top of the hpx framework?
>
>
>
> Best Regards,
>
> Madhavan
>
>
>
> *From:* #SESHADRI MADHAVAN#
> *Sent:* Friday, March 31, 2017 12:34 AM
> *To:* 'Zahra Khatami' <z.khatam...@gmail.com>;
> hpx-users@stellar.cct.lsu.edu
> *Subject:* RE: [hpx-users] Applying machine learning techniques on HPX
> algorithms
>
>
>
> Hi Zara,
>
>
>
> I have already submitted a proposal for HPX (HPXCL), so I won’t be
> submitting for this one.
>
>
>
> But I shall chip in contribution for this one as well as I find this to be
> an interesting area. My summer is currently free, so I shouldn’t have a
> problem in contributing to this in addition to HPXCL. I will begin by
> taking a look at the code base [1] and shall discuss further with you.
>
>
>
> Best Regards,
>
> Madhavan
>
>
>
> [1] https://github.com/STEllAR-GROUP/hpxML
>
>
>
> *From:* Zahra Khatami [mailto:z.khatam...@gmail.com
> <z.khatam...@gmail.com>]
> *Sent:* Friday, March 31, 2017 12:25 AM
> *To:* hpx-users@stellar.cct.lsu.edu; #SESHADRI MADHAVAN# <
> madhavan...@e.ntu.edu.sg>
> *Subject:* Re: [hpx-users] Applying machine learning techniques on HPX
> algorithms
>
>
>
> Hi Madhavan,
>
>
>
> Thank you for your interest. I would be happy to work with you on this
> project.
>
> This project is mainly about combining machine learning techniques,
> compiler optimizations and runtime methods, which is a new and super
> interesting idea for our group at least ;)
>
> We have implemented a major part in connecting these three areas together.
> However, we have tested it on a single node, not on a distributed version.
>
> As you have worked with Hadoop, so I believe that you have a good
> background in a distributed computing area.
>
> For the next step of this project, focused on a Summer, we plan to
> implement our proposed techniques on a distributed applications. The first
> phase of this would be implementing distributed machine learning
> techniques, such NN or SVM.
>
> Then we can analyze big data and design a learning model for our
> algorithms.
>
>
>
> So please start writing your proposal, emphasize on extending ideas about
> implementing distributed machine learning techniques with HPX and targeting
> them for tying compiler and runtime techniques.
>
> The proposal should be submitted before deadline (April 3rd). So I would
> suggest you to give me a first draft earlier, so we can work together for
> its final submission.
>
>
>
>
> Best Regards,
>
> * Zahra Khatami* | PhD Student
>
> Center for Computation & Technology (CCT)
> School of Electrical Engineering & Computer Science
> Louisiana State University
>
> 2027 Digital Media Center (DMC)
>
> Baton Rouge, LA 70803
>
>
>
>
>
> On Thu, Mar 30, 2017 at 11:13 AM, #SESHADRI MADHAVAN# <
> madhavan...@e.ntu.edu.sg> wrote:
>
> Hi Zahra,
>
>
>
> Sorry for hijacking the previous email thread; I changed the subject in
> this one.
>
>
>
> I have proposed the idea for working on HPXCL with Patrick, hence I shall
> not be proposing this as my GSoC project, but I would love to jump into 
> "Applying
> machine learning techniques on HPX algorithms". The project seems
> interesting and I have had some background implementing Machine Learning
> algorithms on Hadoop, predominantly in Java. But I have bee

Re: [hpx-users] Applying machine learning techniques on HPX algorithms

2017-03-30 Thread Zahra Khatami
Hi Madhavan,

Thank you for your interest. I would be happy to work with you on this
project.
This project is mainly about combining machine learning techniques,
compiler optimizations and runtime methods, which is a new and super
interesting idea, for our group at least ;)
We have implemented a major part of connecting these three areas together;
however, we have only tested it on a single node, not in a distributed
setting.
Since you have worked with Hadoop, I believe that you have a good
background in distributed computing.
For the next step of this project, focused on the summer, we plan to
implement our proposed techniques in distributed applications. The first
phase of this would be implementing distributed machine learning
techniques, such as NN or SVM.
Then we can analyze big data and design a learning model for our
algorithms.

So please start writing your proposal, emphasizing ideas about implementing
distributed machine learning techniques with HPX and targeting them at tying
compiler and runtime techniques together.
The proposal should be submitted before the deadline (April 3rd), so I would
suggest giving me a first draft earlier so we can work together on its final
submission.


Best Regards,

*Zahra Khatami* | PhD Student
Center for Computation & Technology (CCT)
School of Electrical Engineering & Computer Science
Louisiana State University
2027 Digital Media Center (DMC)
Baton Rouge, LA 70803


On Thu, Mar 30, 2017 at 11:13 AM, #SESHADRI MADHAVAN# <
madhavan...@e.ntu.edu.sg> wrote:

> Hi Zahra,
>
>
>
> Sorry for hijacking the previous email thread; I changed the subject in
> this one.
>
>
>
> I have proposed the idea for working on HPXCL with Patrick, hence I shall
> not be proposing this as my GSoC project, but I would love to jump into 
> "Applying
> machine learning techniques on HPX algorithms". The project seems
> interesting and I have had some background implementing Machine Learning
> algorithms on Hadoop, predominantly in Java. But I have been through the
> process of designing and optimizing algorithms for execution in parallel
> which I believe will be useful for this. Let me know how I can get started.
>
>
>
> Best Regards,
>
> Madhavan
>
>
>
> *From:* hpx-users-boun...@stellar.cct.lsu.edu [mailto:hpx-users-bounces@
> stellar.cct.lsu.edu <hpx-users-boun...@stellar.cct.lsu.edu>] *On Behalf
> Of *Zahra Khatami
> *Sent:* Thursday, March 30, 2017 11:56 PM
> *To:* denis.bl...@outlook.com
> *Cc:* hpx-users@stellar.cct.lsu.edu
> *Subject:* Re: [hpx-users] [GSoC 2017] Proposal for re-implementing
> hpx::util::unwrapped
>
>
>
> Hi Denis,
>
>
>
> I am so glad that you are interested in HPX GSOC.
>
> I have looked at your GitHub and your projects seem very interesting to
> me. Feel free to write your proposal and submit it before April 3rd. I
> would be happy to be your mentor, as I have found that your background
> matches my current projects as well. If you go through
>
>
>
> https://github.com/STEllAR-GROUP/hpx/wiki/GSoC-2017-
> Project-Ideas#re-implement-hpxutilunwrapped
>
>
>
> You will find a project "Applying machine learning techniques on HPX
> algorithms", which I think could be a good fit for you too. Our team has
> been working on it for the last 2-3 months and so far we have got
> interesting results, which are being prepared for a conference paper. In
> this project we are using LLVM and Clang LibTooling to implement machine
> learning techniques on HPX parallel algorithms, and we have applied and
> tested them on an hpx loop.
>
> So as another option, you could look at this GSoC project idea and write a
> brief proposal about how you would implement it.
>
>
> Best Regards,
>
> * Zahra Khatami* | PhD Student
>
> Center for Computation & Technology (CCT)
> School of Electrical Engineering & Computer Science
> Louisiana State University
>
> 2027 Digital Media Center (DMC)
>
> Baton Rouge, LA 70803
>
>
>
>
>
> On Thu, Mar 30, 2017 at 10:55 AM, Patrick Diehl <patrickdie...@gmail.com>
> wrote:
>
> Hi Denis,
>
> the ides sounds good, for GSoC, you should submit your proposal at their
> official website. You can use this template [0] and our guidelines [1]
> to prepare your proposal.  The deadline for the submission is
>
> > April 3 16:00 UTC Student application deadline
>
> We are looking forward to review your proposal.
>
> Best,
>
> Patrick
>
> [0] https://github.com/STEllAR-GROUP/hpx/wiki/GSoC-Submission-Template
>
> [1] https://github.com/STEllAR-GROUP/hpx/wiki/Hints-for-
> Successful-Proposals
>
>
> On 30/03/17 11:29 AM, Denis Blank wrote:
> > Hello HPX Developers,
> >
> >