Re: [mlpack] [GSoC] RL using mlpack's codebase

2022-04-13 Thread Nanubala Gnana Sai
Hi Suryansh

Glad to see you're interested in the project. Just so we are on the same
page, this project is about creating an RL environment for simulation rather
than implementing an algorithm. There has been previous work on the Rainbow
algorithm. From the paper, I gather that "integrated agent" describes the
Rainbow algorithm itself. If you're looking to add RL algorithms and
benchmark them against existing ones, the correct project would be
Reinforcement Learning.

Note that it is different from Procedural RL Environment creation. It's
important to acknowledge the distinction.

If you plan to do both, i.e. implement an RL algorithm (benchmark + test +
docs) and create a separate procedurally generated RL environment, that's
good! That means you'd be combining two separate ideas, which is fine; we
allow flexibility. However, be wary of the size of the project. Personally,
I would pick only one. Please take some time to choose which course of
action you want to take. All of the above ideas are acceptable to me,
provided the timeline and scope are reasonable and you have sufficient
background knowledge.

Finally, as Fauz mentioned, be sure to link your open-source contributions,
relevant courses, projects, etc., to help us gauge the likely success of the
project. A good track record of open-source contributions and a demo is a
major plus (although not strictly necessary).

Eager to hear from you
S



On Wed, Apr 13, 2022 at 12:54 PM Mohd Fauz  wrote:

> Hi Suryansh,
>
> Glad to see you are interested in the project. I see that you mentioned
> some proficiency stats in your mail, and they seem to align well with the
> requirements. I'd like you to draft a doc where you provide:
>
>    - Links to your projects that utilize these skills/technologies.
>    - Any courses that you took related to them.
>
> These points are just to give you a direction. You can add any additional
> info that you think can help us understand your skills better with respect
> to this project. We'll have further discussion in the doc itself.
>
> Eager to hear from you.
>
> On Wed, Apr 13, 2022 at 12:32 PM Suryansh Shrivastava <
> me.suryans...@gmail.com> wrote:
>
>> Hi,
>>
>> I am sorry for being late in starting a conversation regarding my GSoC
>> contribution application.
>> I am a 2nd-year undergraduate in Electronics and Electrical Engineering.
>> I have been venturing into the field of RL for the last few months. I
>> have also been going through the mlpack codebase since last week and have
>> made myself comfortable with it to a great extent.
>>
>> I plan to implement the integrated agent presented in the paper Rainbow:
>> Combining Improvements in Deep Reinforcement Learning; alongside this, I
>> will also benchmark the algorithm against the other RL algorithms already
>> implemented in the mlpack codebase. Could you please offer some guidance
>> on this?
>>
>> Also, looking at the new project idea recently uploaded, I consider
>> myself somewhat adept at working on it, as I have intermediate know-how
>> of the Unity game engine and a basic understanding of PRNG algorithms. I
>> would like to know more about what is required and what other skills I
>> may require.
>>
>> Thanks a lot.
>>
>
___
mlpack mailing list
mlpack@lists.mlpack.org
http://knife.lugatgt.org/cgi-bin/mailman/listinfo/mlpack


Re: [mlpack] GSoC'22 Project Proposal: Multi-objective Optimizers

2022-04-10 Thread Nanubala Gnana Sai
Hello Ahmed

Good start; the proposal looks decent as well, though it could surely go
through several improvements. Explaining how the work will fit our codebase,
with an emphasis on docs & thorough testing, would be a major plus.

Do take some decent time to understand how the APIs in ensmallen work,
especially the multi-objective module. There are a few notebooks in
mlpack/examples demonstrating their practical application; you'd find them
useful.


Also, have a look at this repository: https://github.com/jonpsy/SPEA-2. It
may prove helpful for your task. Further, open-source projects like pymoo
and pagmo2 can serve as a great source of inspiration as well.


The rest we can discuss in the doc.

I look forward to more of your contributions.

S



On Sun, 10 Apr, 2022, 5:11 pm Ahmed Abdelatty, 
wrote:

> Hi,
>
> I’m Ahmed Abdelatty, a computer engineering student at Cairo
> University. I am interested in template metaprogramming, competitive
> programming, and algorithms, and I have strong knowledge of C++. I have
> been reviewing the mlpack and ensmallen codebases for the past month and
> made the following two PRs: #3173 and #341.
>
>
>
> I’d like to contribute to mlpack under GSoC’22 by taking on the
> Multi-objective Optimizers project. I plan to add the SPEA2 algorithm and
> test it against the ZDT test suite. I intend it to be a medium-sized
> project (175 hours).
>
>
>
> The following is a link to my draft proposal.
> I’m eagerly looking forward to your replies in order to improve it.
>
>
>
> Best Regards,
>
> Ahmed Abdelatty
>
> GitHub username: ahmedr2001


[mlpack] Announcing new RL Environment Generation Project

2022-04-06 Thread Nanubala Gnana Sai
Dear mlpack folks

I'm delighted to announce a new project that has been added to the mlpack
GSoC idea list: Reinforcement Learning Environment Generation.

This project concerns procedural generation of environments, so that RL
agents can face real-world situations. Please do find the new idea on
the ideas list.

We're planning to add some more GIFs/images so that potential students can
get a deeper understanding of it, so stay tuned!

This is a special project for us, since we're co-mentoring with someone
external to the mlpack member list: Shaikh Mohd. Fauz. This is owing to his
experience with game engines, which was crucial for our previous GSoC
projects.

Please feel free to reach out to the mentors via email or community chat.
We're excited to have you!

Reach out to me: jonpsy...@gmail.com
Reach out to Fauz: fauz212...@gmail.com

S


Re: [mlpack] Mid GSoC Report (Revamp mlpack bindings)

2021-07-12 Thread Nanubala Gnana Sai
So good! Love that integration with other widely used APIs is being worked
on.

On Tue, 13 Jul, 2021, 12:39 am Nippun Sharma, 
wrote:

> Hi everyone,
> We are almost halfway through GSoC, so I have written a blog about
> the progress of my project.
>
> Please give it a read:
> https://nippunsharma.github.io/2021/07/12/GSoC-2021-Mid-Report/
>
> Regards,
> Nippun


Re: [mlpack] GSOC WITHDRAWAL

2021-04-14 Thread Nanubala Gnana Sai
Hey Oleksandr,

I do understand the mail wasn't directed at me, but I believe you should
stay. As the mentors said, they can still discuss the proposal if need be.
Besides, I think your LM-CMA PR is great; perhaps we could work on that
when you feel like it. This will give you a great opportunity to learn and
improve.

Best wishes
NGS


On Tue, 13 Apr, 2021, 9:50 pm Oleksandr Nikolskyy, 
wrote:

> Hi Ryan,
> Thanks for the feedback. After thinking about the points you mentioned, I
> decided to withdraw my proposal, as there is no time left to make it
> stronger.
> I hope to contribute to mlpack in the near future.
> Best,
> Oleksandr


[mlpack] GSOC : Adding Optimizers to MultiObjective module

2021-03-18 Thread Nanubala Gnana Sai
Hey all,
First, I must thank the devs for considering my ideas and assessing their
feasibility. After careful consideration of the GSoC timeline, I propose
the following:

Adding:
Month 1:
a) Strength Pareto Evolutionary Algorithm II (SPEA-II): one of the core
multi-objective algorithms along with NSGA-II. It will be really nice to
have. I've already begun some work on it. (1 month)

Month 2:
b) Fully implementing MOEA/D-DE: For the past week, I've refactored over
60% of the code of MOEA/D-DE, fixing bugs, cleaning APIs, etc. It is almost
done; you can track it here .
I wish to continue this in GSoC (if it's not merged already). (Less than a
week, or even pre-GSoC)

(Approx. 2 weeks)
c) Adding a test suite for MOO: Any MOO module is incomplete without a test
suite. I propose to add the following:
 i) ZDT: I've already coded it fully, and it performs correctly on
trivial cases. Track it here 
 ii) DTLZ: I haven't started this yet, but it would be a nice
addition.

d) Miscellaneous: Needless to say, I'll be making PRs and fixing bugs
related to my main PR.
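For concreteness, here is a minimal standalone sketch of the ZDT1
objectives (the standard formulation from Zitzler, Deb & Thiele; the
function name and signature are illustrative only, not the actual
ensmallen test-suite API):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Minimal sketch of the ZDT1 objectives (standard formulation).
// Illustrative only -- not the actual ensmallen test-suite API.
std::vector<double> ZDT1(const std::vector<double>& x)
{
  const double f1 = x[0];

  // g(x) = 1 + 9 * sum(x_2..x_n) / (n - 1).
  double sum = 0.0;
  for (std::size_t i = 1; i < x.size(); ++i)
    sum += x[i];
  const double g = 1.0 + 9.0 * sum / (x.size() - 1);

  // f2(x) = g(x) * (1 - sqrt(f1 / g)).
  const double f2 = g * (1.0 - std::sqrt(f1 / g));

  return { f1, f2 };
}
```

On the Pareto-optimal front (all of x[1..n-1] equal to zero), g collapses
to 1 and f2 = 1 - sqrt(f1), which is exactly the kind of property the
trivial-case tests check.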

Potential Mentors: anyone, really :). I've intentionally kept the scope
short and made it such that I can work 100% independently. But it'd be nice
to be able to discuss my ideas (although I'm sure the IRC channel will
welcome me with open arms).

Best
NGS


Re: [mlpack] [GSOC] Restructing Multi Objective Optimization Module

2021-03-12 Thread Nanubala Gnana Sai
Thanks for the valuable feedback!

Using the policy-design pattern sounds like a good idea to me, one reason
> why
> we haven't done this for the existing evolution-based optimizers is that
> they slightly
> differ in functionality
>

I was pondering the same situation. One key problem is that the number/type
of arguments may differ for each policy of the same type. For instance, DE
crossover and SBX require different arguments. Taking inspiration from
DEAP, I figured we could do the following:

Step 1) Create a wrapper around a map to create and store arguments at
runtime.
Step 2) Initialize the policy with its arguments in the driver file. The
policy itself is given through a template parameter, e.g.
```template<typename CrossoverPolicy>```.
Step 3) The policy would take two inputs:
    i) The argument map: stores runtime arguments like the current
populationIdx, etc.
    ii) The problem: stores info about lowerBound, upperBound. This can
be used for a repair heuristic.

What does it solve?
1) Avoids lengthy argument passing to the ctor.
2) Args are confined to where they are needed, e.g. crossover args are
given to the policy instead of the optimizer ctor.
3) Code is more generic: easily change the crossover, mutation, etc.
strategy.
4) And of course, it reduces code bloat.

I think I should create an issue to better address this.
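To make the steps above concrete, here is a rough standalone sketch of the
idea. Every name below (ArgumentMap, Problem, BlendCrossover, ToyOptimizer)
is hypothetical, invented for illustration, and not a proposed ensmallen
API:

```cpp
#include <map>
#include <string>

// Hypothetical sketch of the argument-map idea; all names below are
// illustrative, not proposed ensmallen API.

// Step 1: a runtime argument store, so each policy reads only what it
// needs instead of bloating the optimizer's constructor.
using ArgumentMap = std::map<std::string, double>;

// The problem carries bound info, usable by a repair heuristic.
struct Problem
{
  double lowerBound;
  double upperBound;
};

// One crossover policy; a different policy could read different arguments
// from the same map without changing the optimizer's ctor signature.
struct BlendCrossover
{
  double Apply(const double a, const double b,
               const ArgumentMap& args, const Problem& prob) const
  {
    const double alpha = args.at("alpha");
    double child = alpha * a + (1.0 - alpha) * b;
    // Repair heuristic using the problem's bounds.
    if (child < prob.lowerBound) child = prob.lowerBound;
    if (child > prob.upperBound) child = prob.upperBound;
    return child;
  }
};

// Step 2: the optimizer is generic over its crossover policy.
template<typename CrossoverPolicy>
struct ToyOptimizer
{
  double Step(const double a, const double b,
              const ArgumentMap& args, const Problem& prob) const
  {
    return CrossoverPolicy().Apply(a, b, args, prob);
  }
};
```

Swapping BlendCrossover for another policy changes only which entries it
reads from the map; the optimizer ctor stays untouched.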

I am interested to see if you have some method in mind for c)
>

Strength Pareto Evolutionary Algorithm and NSPSO (Non-dominated Sorting
PSO).


Finally, I realize that this could be a lot of work, so I want to choose
between either refactoring or adding the new algorithms. I'll keep the
scope short so that we can guarantee successful completion of GSoC :)

Best
NGS


[mlpack] [GSOC] Restructing Multi Objective Optimization Module

2021-03-10 Thread Nanubala Gnana Sai
Greetings, team mlpack!

I'm Nanubala Gnana Sai from the Indian Institute of Information Technology,
Sri City. My IRC channel ID is @jonpsy. For quite some time, I've been
contributing to mlpack, more specifically to the MOO algorithms of
ensmallen: #269 <https://github.com/mlpack/ensmallen/pull/269> and #263
<https://github.com/mlpack/ensmallen/pull/263>.

During my work, I've taken references from multiple resources like Pagmo
<https://esa.github.io/pagmo2/>, pymoo <https://pymoo.org/>, DEAP
<https://deap.readthedocs.io/en/master/>, and research papers. I couldn't
help but notice these *inefficiencies*:

* Operators like crossover, mutation, etc. are implemented per algorithm.
* This causes bloated code and increases potential errors.

I really like how others approach this, for example DEAP: they've
implemented all the varieties of operators in a separate module
<https://deap.readthedocs.io/en/master/tutorials/basic/part2.html> which
can be called on by the optimizer. Upon reading our style guide, I think
this fits perfectly with our *policy-based* design principle.

My thoughts are, we could:
1) Create separate classes for crossovers, mutations, etc.
2) Use templates to access their functionality (much like the KernelType
policy <https://www.mlpack.org/doc/stable/doxygen/kernels.html>)

During the past year, I've contributed to the shogun repository
extensively, so I have experience building OOP-based structures, although
admittedly I'm new to the policy-based design principle.

During my GSoC, I plan on:
a) Creating a policy-based structure to outsource operator functions.
b) Restructuring currently implemented algorithms to fit this principle.
c) Miscellaneous: adding more multi-objective algorithms.

Suggestions and criticism are welcome!


Best
NGS