Sure, you can do that, but you asked for feedback.

> I hope you look at it and provide me with feedback if you have time
> so that I can change it accordingly.

But we cannot look at your proposal until the deadline and thus cannot
provide feedback.

Best,

Patrick

On 03/04/17 10:56 AM, Aditya wrote:
> Hi Patrick,
>
> Before the deadline, I think I can still modify the shared Google Doc
> and re-upload the PDF version of the updated proposal.
> Anyway, I have submitted the final PDF in the GSoC portal.
>
> Thanks,
> Aditya
>
>
>
>
> On Mon, Apr 3, 2017 at 4:59 PM, Patrick Diehl
> <[email protected]> wrote:
>
>     Hi Aditya,
>
>     First, we can only see the final submission after the deadline.
>     Second, you cannot change the final submission anymore.
>
>     Best,
>
>     Patrick
>
>     Aditya <[email protected]
>     <mailto:[email protected]>> schrieb am Mo., 3. Apr. 2017,
>     04:30:
>
>         Hello Zahra and Kaiser,
>
>         I have shared the final proposal with STE||AR Group through
>         the GSoC website. I hope you look at it and provide me with
>         feedback if you have time so that I can change it accordingly.
>
>         Thanks a lot for the support.
>
>         Regards,
>         Aditya
>
>
>
>
>         On Sun, Apr 2, 2017 at 8:12 PM, Zahra Khatami
>         <[email protected]> wrote:
>
>             Hi Aditya,
>
>             Thank you for your interest in the machine learning
>             project. As Dr. Kaiser explained, a compiler gathers
>             static information for ML, and ML then selects the
>             parameters, such as chunk sizes, for HPX's techniques,
>             such as loops.
>             We have been working on this project for a couple of
>             months, and so far we have obtained interesting results
>             from our implementation.
>             Our focus for the summer is to implement our technique
>             on distributed applications.
>             So if you have a background in ML and distributed
>             computing, that would be enough to work on this topic.
>             I am pretty sure that this phase will result in a
>             conference paper, as it's new and super interesting ;)
>             So if you are interested in this project, go ahead and
>             write your proposal before its deadline.
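>
>             As a rough illustration only (a plain C++ sketch; the
>             feature struct and the simple heuristic standing in for
>             the trained model are invented here, not our actual
>             implementation), the idea is to map loop features to a
>             chunk size:
>
>                 #include <algorithm>
>                 #include <cstddef>
>
>                 // Hypothetical static features a compiler pass
>                 // could emit for a loop.
>                 struct LoopFeatures {
>                     std::size_t num_operations;  // ops per iteration
>                     std::size_t num_iterations;  // estimated trip count
>                 };
>
>                 // Placeholder for a trained predictor: returns a
>                 // chunk size for the given loop and worker count.
>                 std::size_t predict_chunk_size(LoopFeatures const& f,
>                                                std::size_t num_workers)
>                 {
>                     std::size_t split = f.num_iterations / (4 * num_workers);
>                     return std::max<std::size_t>(1, split);
>                 }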
>
>
>
>             Best Regards,
>
>             Zahra Khatami | PhD Student
>             Center for Computation & Technology (CCT)
>             School of Electrical Engineering & Computer Science
>             Louisiana State University
>             2027 Digital Media Center (DMC)
>             Baton Rouge, LA 70803
>
>
>             On Sun, Apr 2, 2017 at 7:04 AM, Hartmut Kaiser
>             <[email protected]> wrote:
>
>                 Hey Aditya,
>
>                 > It would be great if some of you could guide me
>                 > through the project selection phase so that I can
>                 > make my proposal as soon as possible and get it
>                 > reviewed too.
>
>                 The machine learning project aims at using ML
>                 techniques to select runtime parameters based on
>                 information collected at compile time. For instance,
>                 in order to decide whether to parallelize a
>                 particular loop, the compiler looks at the loop body
>                 and extracts certain features, such as the number of
>                 operations or the number of conditionals. It conveys
>                 this information to the runtime system through
>                 generated code. The runtime adds a couple of dynamic
>                 parameters, such as the number of requested
>                 iterations, and feeds these into an ML model to
>                 decide whether to run the loop in parallel or not.
>                 We would like to support this with a way for users
>                 to automatically train the ML model on their own
>                 code.
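>
>                 A minimal sketch of the shape of that decision
>                 (plain C++; the feature structs, weights, and
>                 threshold below are all made up for illustration
>                 and are not the actual generated code):
>
>                     #include <cstddef>
>
>                     // Static features extracted from the loop body
>                     // at compile time and handed to the runtime.
>                     struct StaticFeatures {
>                         std::size_t num_operations;
>                         std::size_t num_conditionals;
>                     };
>
>                     // Dynamic parameters known only at run time.
>                     struct DynamicFeatures {
>                         std::size_t num_iterations;
>                     };
>
>                     // Stand-in for a trained model: a linear score
>                     // deciding whether parallel execution pays off.
>                     bool run_in_parallel(StaticFeatures const& s,
>                                          DynamicFeatures const& d)
>                     {
>                         double score =
>                             0.5 * double(s.num_operations) -
>                             2.0 * double(s.num_conditionals) +
>                             0.01 * double(d.num_iterations);
>                         return score > 100.0;  // arbitrary threshold
>                     }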
>
>                 I can't say anything about the Lustre backend,
>                 except that Lustre is a high-performance file system
>                 which we would like to be able to talk to directly
>                 from HPX. If you don't know what Lustre is, this
>                 project is not for you.
>
>                 All-to-all communications is a nice project,
>                 actually. In HPX we sorely need to implement a set
>                 of global communication patterns such as broadcast,
>                 allgather, and alltoall. All of these are well known
>                 (see MPI), except that we would like to adapt them
>                 to the asynchronous nature of HPX.
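>
>                 To give a feel for the asynchronous flavor, here is
>                 a toy, single-process sketch using std::future (real
>                 HPX code would use hpx::future and act across
>                 localities):
>
>                     #include <cstddef>
>                     #include <future>
>                     #include <vector>
>
>                     // "Broadcast" a value to n participants and
>                     // return one future per receiver, so callers
>                     // can attach further work instead of blocking.
>                     std::vector<std::future<int>>
>                     broadcast(int value, std::size_t n)
>                     {
>                         std::vector<std::future<int>> results;
>                         results.reserve(n);
>                         for (std::size_t i = 0; i != n; ++i)
>                             results.push_back(std::async(
>                                 std::launch::async,
>                                 [value] { return value; }));
>                         return results;
>                     }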
>
>                 HTH
>                 Regards Hartmut
>                 ---------------
>                 http://boost-spirit.com
>                 http://stellar.cct.lsu.edu
>
>
>                 >
>                 > Regards,
>                 > Aditya
>                 >
>                 >
>                 >
>                 > On Sun, Apr 2, 2017 at 5:21 AM, Aditya
>                 > <[email protected]> wrote:
>                 > Hello again,
>                 >
>                 > It would be great if someone could shed light on
>                 > the projects listed below, too:
>                 >
>                 > 1. Applying Machine Learning Techniques on HPX
>                 >    Parallel Algorithms
>                 > 2. Adding Lustre backend to hpxio
>                 > 3. All to All Communications
>                 >
>                 > I believe I will be suitable for projects 2 and 3
>                 > (above). As part of my undergrad thesis (mentioned
>                 > in the earlier email) I worked with Lustre briefly
>                 > (we decided Lustre was overkill for our scenario,
>                 > as we'd have to reorganize data among nodes even
>                 > after the parallel read). I have worked with MPI on
>                 > several projects (my thesis and projects in the
>                 > parallel computing course) and have a basic
>                 > understanding of how all-to-all communications
>                 > work.
>                 >
>                 > If someone could explain what would be involved
>                 > in project 1, it'd be great.
>                 >
>                 > Also, please let me know what is expected of the
>                 > student in projects 2 and 3.
>                 >
>                 > Thanks again,
>                 > Aditya
>                 >
>                 >
>
>
>
>
>
>

_______________________________________________
hpx-users mailing list
[email protected]
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users
