Thank you Sir for such a kind and detailed reply. I will go through each
point in detail.

Thanks and Regards,
Pratyush

On Sun, Feb 28, 2016 at 5:05 AM, Hartmut Kaiser <[email protected]>
wrote:

>
> Hey Pratyush,
>
> Thanks for your interest in one of our projects. I'm cc'ing this answer to
> our mailing list hpx-users for
> others to chime in as well. Please post your responses there too.
>
> > I am pursuing a Masters at the Indian Institute of Technology Bombay in
> > the field of Geo-informatics, and I completed a Bachelors in the field of
> > Information Technology. I have one year of work experience with Cognizant
> > Technology Solutions in the Java domain, and I am currently working on
> > "Large Scale Spatial Data Processing using GPU".
> > I am motivated by the work done by STEllAR-GROUP in GSoC and am
> > interested in contributing to STEllAR-GROUP. This is regarding "Work on
> > parallel algorithms for HPX" (Project Proposal for GSoC'16). Can you
> > please help me get started with contributing?
>
> For the parallel algorithms, I'd suggest starting with building and
> running HPX. Look at the examples, run them, understand them. Once you have
> familiarized yourself a bit with HPX you can take the next steps.
>
> You might want to look into the existing (local) parallel algorithms we
> have implemented. hpx::parallel::for_each is a good starting point (
> https://github.com/STEllAR-GROUP/hpx/blob/master/hpx/parallel/algorithms/for_each.hpp).
> This will give you a rough understanding of HPX's
> tasking/threading/executor subsystem.
>
> Reading the standards proposal describing the APIs we implemented might be
> helpful as well:
> http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/n4409.pdf,
> http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/n4569.pdf. Well,
> reading all of this is probably not necessary (or even possible), but
> please keep those documents in mind whenever you're looking for some
> detailed information.
>
>
>
> All in all, we have several tickets in our ticket system which relate to
> this topic. I'll list those for you here:
>
> - https://github.com/STEllAR-GROUP/hpx/issues/1141,
> https://github.com/STEllAR-GROUP/hpx/issues/1836: As you can see from the
> first of those, for the local parallel algorithms we mainly miss
> implementations of the algorithms related to sorting (partial_sort,
> partial_sort_copy, nth_element, stable_sort, etc.). Some of the algorithms
> based on some derivative of a parallel scan are missing too (unique,
> remove_if, partition, etc.). The others have either already been
> implemented or are currently being worked on.
>
> - https://github.com/STEllAR-GROUP/hpx/issues/1668: We need some work
> done on adapting the existing algorithms to the Ranges TS. So far we have
> only been able to touch a few of them.
>
> - https://github.com/STEllAR-GROUP/hpx/issues/1338: More involved (but
> also more interesting) would be to work on enabling some of the existing
> local only algorithm implementations to support distributed data
> structures, like partitioned_vector. As an example, please look at the
> distributed implementation of parallel::for_each here:
> https://github.com/STEllAR-GROUP/hpx/blob/master/hpx/parallel/segmented_algorithms/for_each.hpp.
>
> After that, you should try to understand how HPX handles remote procedure
> invocations (HPX actions). This is necessary to get a grip on how the
> distributed parallel algorithms are implemented. It will also help you see
> what hpx::partitioned_vector is all about and how it integrates with the
> distributed parallel algorithms. The paper
> http://afstern.org/matt/segmented.pdf should give you a head start on
> segmented iterators, which describe the high-level scheme we used for
> hpx::partitioned_vector.
>
> Please don't hesitate to get back to us with more questions.
>
> Regards Hartmut
> ---------------
> http://boost-spirit.com
> http://stellar.cct.lsu.edu
>


-- 
Regards,
Pratyush V. Talreja,
M.Tech. CSRE,
Indian Institute of Technology, Bombay
_______________________________________________
hpx-users mailing list
[email protected]
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users
