Hello Arjun,

thanks for the feedback on this one. Agreed, this could end up as an additional
feature; I think we could keep that in mind for the interface if we decide to go
for it.
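
For reference, a rough sketch of how that option could surface in the interface
is below; the VAE class and the semi-supervised switch are hypothetical
placeholders, not existing mlpack API:

  #include <mlpack/core.hpp>

  // Hypothetical interface sketch; none of these names exist in mlpack yet.
  template<typename Encoder, typename Decoder>
  class VAE
  {
   public:
    // 'semiSupervised' would switch on the label-aware objective discussed
    // in the thread; it defaults to the plain unsupervised VAE.
    VAE(Encoder encoder, Decoder decoder, bool semiSupervised = false);

    // Unsupervised training on unlabeled data.
    void Train(const arma::mat& data);

    // Optional overload for the semi-supervised case; unlabeled columns could
    // be marked with a sentinel value in 'labels'.
    void Train(const arma::mat& data, const arma::Row<size_t>& labels);
  };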

Thanks,
Marcus

> On 14. Mar 2018, at 03:03, arjun k <[email protected]> wrote:
> 
> Hi,
> 
> The paper looks interesting; the idea of introducing RL to relieve the lack of
> data is good. But I found that it makes some assumptions about the data, namely
> that the latent representation can be divided into disentangled and
> non-interpretable variables. In practice, such assumptions often do not
> generalize well to different data. Otherwise, overall the model looks promising
> and would be interesting to implement. Maybe we could add this as a feature to
> the main VAE framework (as an alternative for use in semi-supervised learning
> scenarios), since the VAE by itself is unsupervised. Let me know what you
> think. Thank you,
> 
> Arjun
> 
> On Tue, Mar 13, 2018 at 7:29 PM, Marcus Edel <[email protected]> wrote:
> Hello Arjun,
> 
>> Thank you, Marcus, for the quick reply. That clarifies the doubts I had. I am
>> interested in both projects, reinforcement learning and variational
>> autoencoders, with almost equal importance to both. So is there any way that I
>> can be involved in both projects? Maybe focus on one and have some involvement
>> in the other? In that case, how would I write a proposal to this effect
>> (as two separate proposals, or mentioning my interest in both under one
>> proposal)?
> 
> I think we could combine both ideas; something like
> https://arxiv.org/abs/1709.05047 could work. Let me know if that is an
> option you are interested in.
> 
> Thanks,
> Marcus
> 
>> On 12. Mar 2018, at 23:45, arjun k <[email protected]> wrote:
>> 
>> Hi,
>> 
>> Thank you, Marcus, for the quick reply. That clarifies the doubts I had. I am
>> interested in both projects, reinforcement learning and variational
>> autoencoders, with almost equal importance to both. So is there any way that I
>> can be involved in both projects? Maybe focus on one and have some involvement
>> in the other? In that case, how would I write a proposal to this effect
>> (as two separate proposals, or mentioning my interest in both under one
>> proposal)?
>> 
>> On Mon, Mar 12, 2018 at 9:41 AM, Marcus Edel <[email protected]> wrote:
>> Hello Arjun,
>> 
>> welcome and thanks for getting in touch.
>> 
>>> I am Arjun, currently pursuing my Master's in Computer Science at the
>>> University of Massachusetts, Amherst. I came across the variational
>>> autoencoders and reinforcement learning projects, and they look very
>>> interesting. I hope I am not too late.
>> 
>> The application phase opens today, so you are not too late.
>> 
>>> I am more interested in the reinforcement learning project, as it involves
>>> some research in a field that I am working in, and I would like to get
>>> involved. As I understand it, coding up an algorithm and implementing it in a
>>> single game would not be much of an issue. How many algorithms are proposed
>>> to be benchmarked against each other? Is there any new idea being tested, or
>>> is the research component the benchmarking alone?
>> 
>> Keep in mind each method has to be tested and documented, which takes time, so
>> my recommendation is to focus on one or two (depending on the method). The
>> research component is two-fold: you could compare different algorithms, or
>> improve/extend the method you are working on, e.g. by integrating another
>> search strategy. But this isn't a must; the focus is to extend the existing
>> framework.
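>> 
>> To make the "integrate another search strategy" idea concrete, here is a
>> rough, self-contained sketch of the pluggable-policy pattern; the class names
>> below are hypothetical placeholders, not existing mlpack API:
>> 
>>   #include <cstddef>
>>   #include <iostream>
>> 
>>   // Hypothetical sketch: the agent is templated on its exploration policy,
>>   // so comparing or extending search strategies only means swapping the
>>   // policy type. None of these names are existing mlpack API.
>>   struct EpsilonGreedy
>>   {
>>     double epsilon;
>>     const char* Name() const { return "epsilon-greedy"; }
>>   };
>> 
>>   struct SoftmaxExploration
>>   {
>>     double temperature;
>>     const char* Name() const { return "softmax"; }
>>   };
>> 
>>   template<typename PolicyType>
>>   class QAgent
>>   {
>>    public:
>>     explicit QAgent(PolicyType policy) : policy(policy) { }
>> 
>>     void Train(const std::size_t episodes)
>>     {
>>       // Real training loop omitted; this only shows where the policy plugs in.
>>       std::cout << "training " << episodes << " episodes with "
>>                 << policy.Name() << std::endl;
>>     }
>> 
>>    private:
>>     PolicyType policy;
>>   };
>> 
>>   int main()
>>   {
>>     QAgent<EpsilonGreedy> baseline(EpsilonGreedy{0.1});
>>     QAgent<SoftmaxExploration> variant(SoftmaxExploration{1.0});
>>     baseline.Train(100);
>>     variant.Train(100);
>>   }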
>> 
>>> Regarding the variational autoencoders, I am quite familiar with generative
>>> modeling, having worked on some research projects myself
>>> (https://arxiv.org/abs/1802.07401). Since a variational autoencoder is
>>> essentially just a training procedure, how abstracted do you intend the
>>> implementation to be? Should the framework allow the user to customize the
>>> underlying neural network and add additional features, or is it highly
>>> abstracted, with no control over the underlying architecture, and only able
>>> to use the VAE as a black box?
>> 
>> Ideally, a user can modify the model structure based on the existing
>> infrastructure; providing a black box is something that naturally follows from
>> the first idea, and it could be realized in the form of a specific model,
>> something like:
>> https://github.com/mlpack/models/tree/master/Kaggle/DigitRecognizer
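>> 
>> To illustrate the two levels, a rough sketch is below; the VAE wrapper in the
>> last lines is hypothetical (nothing like it exists yet), and the
>> encoder/decoder part assumes the current FFN layer API stays roughly as it is:
>> 
>>   #include <mlpack/core.hpp>
>>   #include <mlpack/methods/ann/ffn.hpp>
>>   #include <mlpack/methods/ann/layer/layer.hpp>
>> 
>>   using namespace mlpack::ann;
>> 
>>   int main()
>>   {
>>     // Level 1: the user builds the encoder/decoder from the existing layers.
>>     FFN<> encoder;
>>     encoder.Add<Linear<>>(784, 400);
>>     encoder.Add<ReLULayer<>>();
>>     encoder.Add<Linear<>>(400, 2 * 20); // mean and log-variance, 20-d latent
>> 
>>     FFN<> decoder;
>>     decoder.Add<Linear<>>(20, 400);
>>     decoder.Add<ReLULayer<>>();
>>     decoder.Add<Linear<>>(400, 784);
>> 
>>     // Level 2 (hypothetical): a black-box-style wrapper that wires the two
>>     // networks together, comparable to the prebuilt DigitRecognizer model
>>     // linked above.
>>     // VAE<FFN<>, FFN<>> vae(std::move(encoder), std::move(decoder));
>>     // vae.Train(data);
>>   }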
>> 
>> I hope this was helpful; let me know if I should clarify anything.
>> 
>> Thanks,
>> Marcus
>> 
>>> On 11. Mar 2018, at 22:23, arjun k <[email protected]> wrote:
>>> 
>>> Hi,
>>> 
>>> I am Arjun, currently pursuing my Master's in Computer Science at the
>>> University of Massachusetts, Amherst. I came across the variational
>>> autoencoders and reinforcement learning projects, and they look very
>>> interesting. I hope I am not too late.
>>> 
>>> I am more interested in the reinforcement learning project, as it involves
>>> some research in a field that I am working in, and I would like to get
>>> involved. As I understand it, coding up an algorithm and implementing it in a
>>> single game would not be much of an issue. How many algorithms are proposed
>>> to be benchmarked against each other? Is there any new idea being tested, or
>>> is the research component the benchmarking alone?
>>> 
>>> Regarding the variational autoencoders, I am quite familiar with generative
>>> modeling, having worked on some research projects myself
>>> (https://arxiv.org/abs/1802.07401). Since a variational autoencoder is
>>> essentially just a training procedure, how abstracted do you intend the
>>> implementation to be? Should the framework allow the user to customize the
>>> underlying neural network and add additional features, or is it highly
>>> abstracted, with no control over the underlying architecture, and only able
>>> to use the VAE as a black box?
>>> 
>>> Thank you,
>>> Arjun Karuvally,
>>> College of Information and Computer Science,
>>> University of Massachusetts, Amherst.
>> 
> 
> 
