Hey Aditya,

> It would be great if some of you could guide me through the project
> selection phase so that I can make my proposal as soon as possible and
> get it reviewed too.
The machine learning project aims at using ML techniques to select runtime
parameters based on information collected at compile time. For instance, in
order to decide whether to parallelize a particular loop, the compiler looks at
the loop body and extracts certain features, like the number of operations or
the number of conditionals. It conveys this information to the runtime system
through generated code. The runtime adds a couple of dynamic parameters, like
the number of requested iterations, and feeds this into an ML model to decide
whether to run the loop in parallel or not. We would like to support this with
a way for the user to automatically train the ML model on their own code.

I can't say anything about the Lustre backend, except that Lustre is a
high-performance file system which we would like to be able to talk to
directly from HPX. If you don't know what Lustre is, this project is not
for you.

All-to-All communications is a nice project, actually. In HPX we sorely need
to implement a set of global communication patterns like broadcast, allgather,
alltoall, etc. All of this is well known (see MPI), except that we would like
to adapt those patterns to the asynchronous nature of HPX.

HTH
Regards Hartmut

---------------
http://boost-spirit.com
http://stellar.cct.lsu.edu

> Regards,
> Aditya
>
> On Sun, Apr 2, 2017 at 5:21 AM, Aditya <[email protected]> wrote:
>
> Hello again,
>
> It would be great if someone shed light on the below listed projects too:
>
> 1. Applying Machine Learning Techniques on HPX Parallel Algorithms
> 2. Adding Lustre backend to hpxio
> 3. All to All Communications
>
> I believe I will be suitable for projects 2 and 3 (above). As part of my
> undergrad thesis (mentioned in the earlier email) I worked with Lustre
> briefly (we decided Lustre was overkill for our scenario, as we'd have to
> reorganize data among nodes even after the parallel read).
> I have worked with MPI on several projects (my thesis and projects in the
> parallel computing course) and have a basic understanding of how all-to-all
> communications work.
>
> If someone could explain what would be involved in project 1, it'd be
> great.
>
> Also, please let me know what is expected of the student in projects 2 and
> 3.
>
> Thanks again,
> Aditya

_______________________________________________
hpx-users mailing list
[email protected]
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users
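P.S. To make the ML project above a bit more concrete, here is a minimal
sketch of the decision step in plain C++. The feature names, weights, and
threshold are all made up for illustration; the real project would train the
model on the user's own code, and none of this is an actual HPX interface:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical feature record: static features emitted by the compiler,
// dynamic ones filled in by the runtime. Names are illustrative only.
struct LoopFeatures
{
    std::size_t num_operations;    // extracted at compile time
    std::size_t num_conditionals;  // extracted at compile time
    std::size_t num_iterations;    // known only at runtime
};

// Stand-in for the trained model: a simple linear scoring function.
// A trained model would supply the weights and the threshold.
bool run_in_parallel(LoopFeatures const& f)
{
    double score = 0.5 * f.num_operations
                 + 2.0 * f.num_conditionals
                 + 0.01 * f.num_iterations;
    return score > 100.0;  // parallelize only if the loop looks heavy enough
}
```

The point is only the shape of the interface: compiler-extracted features plus
runtime parameters go in, a scheduling decision comes out, so a small loop
stays sequential while a large one gets parallelized.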
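P.P.S. For the all-to-all project, here is a rough sketch of what "adapting a
collective to the asynchronous nature of HPX" could mean, using std::future as
a stand-in for hpx::future (the real implementation would use HPX's own
futures and its distributed facilities; `async_allgather` is a name invented
for this sketch):

```cpp
#include <cassert>
#include <future>
#include <utility>
#include <vector>

// Sketch of an asynchronous allgather: every participant contributes a value
// that may not be ready yet, and the combined result itself becomes a future,
// so the caller composes it with further work instead of blocking.
std::future<std::vector<int>> async_allgather(
    std::vector<std::future<int>> parts)
{
    return std::async(std::launch::deferred,
        [parts = std::move(parts)]() mutable {
            std::vector<int> result;
            result.reserve(parts.size());
            for (auto& p : parts)
                result.push_back(p.get());  // waits only when consumed
            return result;
        });
}
```

Usage would look like launching the participants asynchronously, handing their
futures to the collective, and calling `.get()` (or attaching a continuation)
only when the gathered vector is actually needed.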
