I thought my mails were public, but as you've pointed out, they aren't. I
notice that I replied to your email address instead of the mailing list.
I will keep in mind that my mails go public.
On 02-Mar-2018 6:36 PM, "Marcus Edel" <marcus.e...@fu-berlin.de> wrote:
Do you mind if I respond to this on the public mailing list? That way more
people can jump in and provide input.
On 1. Mar 2018, at 20:11, Akash Shivram <akashshivr...@gmail.com> wrote:
Thanks a lot for replying.
I was occupied with examinations and assignments, so I was inactive over the
previous weeks.
I have read 'Playing Atari with Deep Reinforcement Learning'. I would like
to share that my previous semester's project demonstrated how deep
reinforcement learning can be applied to self-driving cars. I read this
paper as background study, and it helped a lot in creating a software model
for the project. The project also involved a hardware implementation, made
possible using ROS. We are now preparing a paper on this work.
I have been actively reading the mailing list. I like the idea of creating
tutorials on OpenAI Gym, and I would love to help make it happen. We could
even extend this idea beyond tutorials into a blog that covers machine
learning techniques and algorithms, with beginner-to-advanced tutorials on
using mlpack, much like those for scikit-learn and TensorFlow. This would
help ML newcomers who already know C++ pick up mlpack easily.
As suggested, I will move forward with policy gradients. I know I am late,
and I hope to catch up as fast as possible.
On 14-Feb-2018 3:52 PM, "Marcus Edel" <marcus.e...@fu-berlin.de> wrote:
> Hello Akash,
> thanks for getting in touch, glad you like the project idea.
> Getting familiar with the codebase especially
> src/mlpack/methods/reinforcement_learning/ should be the first step, as
> already pointed out. Running the tests 'bin/mlpack_test -t RLComponentsTest'
> (rl_components_test.cpp) and 'bin/mlpack_test -t QLearningTest'
> (q_learning_test.cpp) should help you understand the overall structure. Also you
> might find Shangtong's blog posts helpful:
> If you like you can work on a simple RL method like (stochastic) Policy
> Gradients and use that to jump into the codebase, but don't feel obligated.
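The test invocations above can be sketched as a short build-and-run session (a sketch assuming a standard out-of-source CMake build of the mlpack sources; exact target and binary names may differ between mlpack versions):

```shell
# From the root of an mlpack source checkout.
mkdir -p build && cd build
cmake ..            # configure the build
make mlpack_test    # build the test binary

# Run only the reinforcement learning component tests:
bin/mlpack_test -t RLComponentsTest
# Run the Q-learning tests:
bin/mlpack_test -t QLearningTest
```

The `-t` flag restricts the run to a single test suite, so these two commands exercise just the code under src/mlpack/methods/reinforcement_learning/ without running the whole test binary.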
> > I am thinking of working on my application at the earliest this week. Is
> > that ok? I am going through the code base and as I find something to talk
> > about/on, can I trouble you people with my questions? There might be a lot,
> > some even stupid!
> Sounds like a good plan, let us know if we should clarify anything; we are
> happy to help.
> > On 13. Feb 2018, at 19:08, Akash Shivram <akashshivr...@gmail.com> wrote:
> > Hey there!
> > Congratulations on getting into GSoC' 18!!
> > I was going through the organisations participating this year, searching
> for organisations working in ML- and DL-related fields. I came across mlpack
> and was delighted to see a project on RL! I like RL and wanted a project to
> do in this field.
> > I have experience working with Neural Networks, Reinforcement Learning,
> and Deep Q Learning. This is my first day with your repository.
> > I have gone through the requirements for an applicant to the 'Reinforcement
> Learning' project and am trying to go through as many of the listed papers as
> possible.
> > Are there any more 'bonus' papers, or anything extra that would be
> helpful?
> > Moreover, I am thinking of working on my application at the earliest
> this week. Is that ok? I am going through the code base and as I find
> something to talk about/on, can I trouble you people with my questions?
> There might be a lot, some even stupid!
> > Thank you.
> > PS: This mail went too long! Sorry for the long read!
> > _______________________________________________
> > mlpack mailing list
> > firstname.lastname@example.org
> > http://knife.lugatgt.org/cgi-bin/mailman/listinfo/mlpack