Hi Marcus,

I am glad to hear back from you. After taking a detailed look at both projects, I have decided to go ahead with the second one, "Essential Deep Learning Modules". Just to reiterate, I am planning to implement GAN, BRN and RBM. I will draft a proposal and share it with you on the Google Summer of Code website for feedback.
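To make the plan a bit more concrete (and so you can point out issues early), here is a very rough sketch of the kind of CD-1 update I have in mind for the binary RBM, written with plain Armadillo. The function and variable names are placeholders for illustration only, not mlpack's actual API; the real implementation would follow mlpack's existing layer and optimizer conventions.

// Rough illustrative sketch, not mlpack's actual API: one CD-1 step for a
// binary-binary RBM using plain Armadillo.
#include <armadillo>

// Element-wise logistic sigmoid.
arma::mat Sigmoid(const arma::mat& x)
{
  return arma::ones<arma::mat>(arma::size(x)) / (arma::exp(-x) + 1.0);
}

// One contrastive divergence (CD-1) update on a batch of visible samples
// v0 (visibleSize x batchSize), given weights W (hiddenSize x visibleSize),
// visible bias b and hidden bias c.
void CD1Update(const arma::mat& v0, arma::mat& W, arma::vec& b, arma::vec& c,
               const double learningRate)
{
  const double batchSize = v0.n_cols;

  // Positive phase: hidden probabilities given the data.
  arma::mat h0 = Sigmoid(W * v0 + arma::repmat(c, 1, v0.n_cols));

  // Sample binary hidden states and reconstruct the visible layer.
  arma::mat hSample = arma::conv_to<arma::mat>::from(
      h0 > arma::randu<arma::mat>(arma::size(h0)));
  arma::mat v1 = Sigmoid(W.t() * hSample + arma::repmat(b, 1, v0.n_cols));

  // Negative phase: hidden probabilities for the reconstruction.
  arma::mat h1 = Sigmoid(W * v1 + arma::repmat(c, 1, v0.n_cols));

  // Gradient estimate <v h>_data - <v h>_model, averaged over the batch.
  W += learningRate * (h0 * v0.t() - h1 * v1.t()) / batchSize;
  b += learningRate * arma::sum(v0 - v1, 1) / batchSize;
  c += learningRate * arma::sum(h0 - h1, 1) / batchSize;
}

Following the usual practical advice, the positive statistics use the hidden probabilities, and only the reconstruction pass uses a binary hidden sample; the proposal will spell out the full design.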
Kind regards,
Abhinav Moudgil

On Fri, Mar 24, 2017 at 10:25 PM, Marcus Edel <[email protected]> wrote:

> Hello Abhinav,
>
> thanks for getting in touch and welcome.
>
> I would love to contribute to mlpack during this summer. It would be
> great if you could elaborate your views on the above projects. Looking
> forward to your guidance.
>
> Those are some really neat projects you listed above. I think Ryan tried
> something "similar", training a language model on a jokes corpus. Anyway,
> here are my two cents on the projects mentioned above: I think each
> project is equally interesting and, depending on what you like to do,
> equally difficult and at the same time rewarding. The intention behind
> each project is to work on recent ideas and to provide a fast
> implementation at the end of the summer. In the end I can't help you with
> the decision; since you have worked on each topic, it's even difficult to
> give you a recommendation.
>
> I hope something I said was helpful,
>
> Thanks,
> Marcus
>
> On 23 Mar 2017, at 16:25, Abhinav Moudgil <[email protected]> wrote:
>
> Hi,
>
> I am Abhinav Moudgil, a senior undergraduate research student in Deep
> Learning and Computer Vision, working on PR #942
> <https://github.com/mlpack/mlpack/pull/942>. I went through the mlpack
> project ideas
> <https://github.com/mlpack/mlpack/wiki/SummerOfCodeIdeas#essential-deep-learning-modules>
> and found the following two projects really interesting (in preference
> order) for Google Summer of Code 2017:
>
> *1. Reinforcement Learning (RL)*
> It would be a great learning experience for me to implement RL algorithms
> that are fast and scalable. I have previously studied various RL
> algorithms such as Monte Carlo Policy Gradient (PG) with REINFORCE
> <https://gist.github.com/abhinavmoudgil95/138db4c55c42f91f4c858294acadb771>,
> Deep Q-learning (for discrete and continuous state spaces), Deep
> Deterministic PG with Actor-Critic networks, Policy Iteration for a Maze
> environment, Hill Climbing
> <https://gist.github.com/abhinavmoudgil95/108123c880488965b8c1744cacd60dd6>,
> Random Search
> <https://gist.github.com/abhinavmoudgil95/6fcb2db7314e6c4f6b7a028dfe1f27db>,
> etc. I have implemented and tested them in Python using TensorFlow. My
> OpenAI Gym profile is accessible here
> <https://gym.openai.com/users/abhinavmoudgil95>. I will open-source all
> my RL code in a separate repository soon.
>
> *2. Essential Deep Learning Modules*
> I have studied the relevant literature for this project in the past, and
> I like converting mathematical equations from research papers into code.
> In Summer 2016, I worked on feature engineering as a Google Summer of
> Code project <https://abhinavmoudgil95.github.io/2016-08-23/gsoc-conclusion/>
> with CERN SFT, where I implemented some advanced feature extraction
> methods such as Deep Autoencoders, Feature Clustering, and Hessian
> Locally Linear Embedding, and explored the literature on Restricted
> Boltzmann Machines, Hopfield Networks, etc. In this project, I would like
> to implement the following models:
>
> - RBM - Studied extensively during my Google Summer of Code project in
>   2016.
> - GAN - This semester, I am a Teaching Assistant for the course
>   Statistical Methods in AI at my university, IIIT-H
>   <https://www.iiit.ac.in/>. As part of this job, I am mentoring projects
>   such as Coupled GANs and Conditional GANs.
>   I have studied the GAN literature well, along with its variations such
>   as DCGANs, Improved Techniques for Training GANs by OpenAI, and
>   Class-Conditional GANs by Yann LeCun, etc.
> - BRN - I solved <https://abhinavmoudgil95.github.io/2017-03-01/funnybot/>
>   OpenAI Request for Research problem #2
>   <https://openai.com/requests-for-research/#funnybot>. For that, I
>   studied Recurrent Neural Networks in detail, along with variations such
>   as LSTMs, Attention Models, and BRNs. Currently, I am working on OpenAI
>   Request for Research #3
>   <https://openai.com/requests-for-research/#im2latex>, which involves
>   implementing Attention Models and Bidirectional RNNs.
>
> *Open Source Experience:*
> I worked <https://github.com/abhinavmoudgil95/gsoc-2016> with CERN SFT on
> a feature engineering module as a Google Summer of Code student. I
> contributed
> <http://wiki.opencog.org/wikihome/index.php/Special:Contributions/Amod95>
> to the OpenCog Foundation by fixing several bugs and writing an
> installation script <https://github.com/opencog/ocpkg/pull/50> for
> Mac OS X. I also contributed to Shogun, a machine learning toolbox, where
> I worked on improving and benchmarking
> <https://github.com/shogun-toolbox/shogun/issues/3048> basic ML
> algorithms such as PCA and LDA.
>
> I would love to contribute to mlpack during this summer. It would be
> great if you could elaborate your views on the above projects. Looking
> forward to your guidance.
>
> Kind regards,
>
> Abhinav Moudgil
> GitHub: https://github.com/abhinavmoudgil95
> Website: https://abhinavmoudgil95.github.io/
> LinkedIn: https://www.linkedin.com/in/abhinavmoudgil/
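P.S. For the GAN part of the plan above, I would start from the standard losses in Goodfellow et al. (2014). Below is a minimal sketch with plain Armadillo; dReal and dFake are hypothetical placeholders for the discriminator's probability outputs on a real and a generated batch, not anything from mlpack's API.

// Rough illustrative sketch, not mlpack API: the standard GAN losses,
// given the discriminator's probability outputs on a batch of real
// samples (dReal = D(x)) and generated samples (dFake = D(G(z))).
#include <armadillo>

// Discriminator loss: -mean[log D(x) + log(1 - D(G(z)))].
double DiscriminatorLoss(const arma::vec& dReal, const arma::vec& dFake)
{
  return -arma::mean(arma::log(dReal) + arma::log(1.0 - dFake));
}

// Non-saturating generator loss: -mean[log D(G(z))].
double GeneratorLoss(const arma::vec& dFake)
{
  return -arma::mean(arma::log(dFake));
}

The generator loss here uses the non-saturating form (-log D(G(z))) rather than log(1 - D(G(z))), since that is what the original paper recommends for training in practice.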
