Hello Yihan,

> * Enhance CMA-ES: I have begun to check the references listed, and I have a
> question related to the current mlpack. Is there currently an original CMA-ES
> algorithm in mlpack? If there is none, I can begin from the original
> implementation.

All mlpack optimizers, including CMA-ES, live in a separate repository, ensmallen:
https://github.com/mlpack/ensmallen and
https://github.com/mlpack/ensmallen/tree/master/include/ensmallen_bits/cmaes.
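
In case a concrete starting point helps, below is a rough sketch of how that optimizer is
typically driven. The toy objective, the separable-function interface
(Evaluate(coordinates, begin, batchSize), NumFunctions(), Shuffle()) and the constructor
arguments are assumptions on my part, so please double-check them against cmaes.hpp
before relying on them:

#include <ensmallen.hpp>
#include <cmath>

// Toy separable objective, f(x) = sum_i (x_i - i)^2, split into
// NumFunctions() additive pieces the way ensmallen's separable-function
// interface expects.
class ToyFunction
{
 public:
  size_t NumFunctions() const { return 10; }
  void Shuffle() { /* nothing to shuffle for this toy objective */ }

  double Evaluate(const arma::mat& x, const size_t begin,
                  const size_t batchSize)
  {
    double objective = 0.0;
    for (size_t i = begin; i < begin + batchSize; ++i)
      objective += std::pow(x(i) - (double) i, 2);
    return objective;
  }
};

int main()
{
  ToyFunction f;
  arma::mat coordinates(10, 1, arma::fill::randu);

  // Assumed constructor arguments: (lambda, lowerBound, upperBound,
  // batchSize, maxIterations, tolerance); lambda = 0 picks a default
  // population size.  Check cmaes.hpp for the exact signature.
  ens::CMAES<> cmaes(0, -5, 5, 4, 5000, 1e-8);
  cmaes.Optimize(f, coordinates);

  coordinates.print("CMA-ES solution:");
}

Running something like this against the current implementation should also make it
easier to see where the enhancements from the references would plug in.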

> * Implement the Transformer in mlpack: I think what we need to do is first
> implement an attention layer and then the transformer itself. For testing, we
> can compare the results with those obtained from PyTorch or similar.

Agreed; mlpack doesn't currently implement an attention layer, so that would be the first step.
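
To give a feel for what the core of such a layer has to compute, here is a minimal,
framework-free sketch of scaled dot-product attention using Armadillo (one column per
token); the function name and conventions are purely illustrative, not mlpack API:

#include <armadillo>
#include <cmath>

// query: (d_k x n_q), key: (d_k x n_k), value: (d_v x n_k); one column per
// token.  Returns the (d_v x n_q) matrix of attended outputs, i.e.
// V * softmax(K^T Q / sqrt(d_k)), with the softmax taken over the keys.
arma::mat ScaledDotProductAttention(const arma::mat& query,
                                    const arma::mat& key,
                                    const arma::mat& value)
{
  const double scale = 1.0 / std::sqrt((double) query.n_rows);
  arma::mat scores = scale * (key.t() * query);   // (n_k x n_q) scores.

  // Column-wise softmax over the keys, stabilized by the column maximum.
  const arma::rowvec colMax = arma::max(scores, 0);
  scores.each_row() -= colMax;
  arma::mat weights = arma::exp(scores);
  const arma::rowvec colSum = arma::sum(weights, 0);
  weights.each_row() /= colSum;

  return value * weights;  // Weighted sum of the value vectors.
}

For testing, the same Q/K/V matrices could then be fed through PyTorch's attention and
the outputs compared element-wise, as you suggested.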

Let me know if I should clarify anything.

Thanks,
Marcus

> On 8. Mar 2020, at 07:54, Yihan Wang <[email protected]> wrote:
> 
> Hi all,
> 
> I am Yihan Wang, a final-year student from Tsinghua University, with more 
> than a year's research experience in machine learning algorithms. I am 
> interested in participating in this year's GSoC. In particular, I am 
> interested in these two topics.
> 
> * Enhance CMA-ES
> I have begun to check the references listed, and I have a question related to 
> the current mlpack. Is there currently an original CMA-ES algorithm in 
> mlpack? If there is none, I can begin from the original implementation.
> 
> * Implement the Transformer in mlpack
> I think what we need to do is first implement an attention layer and then the 
> transformer itself. For testing, we can compare the results with those 
> obtained from PyTorch or similar.
> 
> Are there any suggestions related to these two ideas?
> 
> Best,
> Yihan

_______________________________________________
mlpack mailing list
[email protected]
http://knife.lugatgt.org/cgi-bin/mailman/listinfo/mlpack
