Hi all,

I am Yihan Wang, a final-year student at Tsinghua University with more than a year of research experience in machine learning algorithms. I am interested in participating in this year's GSoC, and in particular in the following two topics.
* Enhance CMA-ES

I have begun to check the references listed, and I have a question about the current state of mlpack: does mlpack already contain an implementation of the original CMA-ES algorithm? If not, I could start from the original algorithm (see the usage sketch after this list).

* Implement the Transformer in mlpack

I think the first step would be to implement an attention layer, and then build the transformer itself on top of it (see the attention sketch after this list). For testing, we could compare the results against those obtained from PyTorch or a similar framework.

Is there any suggestion related to these two ideas?
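To make the first idea concrete, here is a rough sketch of how I would expect to drive a CMA-ES optimizer through ensmallen (mlpack's optimization library). Please treat it as an assumption on my part: I am guessing at the separable-function interface (NumFunctions() / Shuffle() / Evaluate(coordinates, begin, batchSize)) and at a default-constructible ens::CMAES<>, so the exact names and constructor parameters may differ from the current code.

#include <ensmallen.hpp>

// Toy objective f(x) = sum(x^2), written against what I believe is
// ensmallen's separable-function interface (an assumption on my part).
class SphereFunction
{
 public:
  // Treat the whole objective as a single "function".
  size_t NumFunctions() const { return 1; }

  // Nothing to shuffle for a single deterministic objective.
  void Shuffle() { }

  // Evaluate the objective at the given coordinates.
  double Evaluate(const arma::mat& x,
                  const size_t /* begin */,
                  const size_t /* batchSize */) const
  {
    return arma::accu(x % x);
  }
};

int main()
{
  SphereFunction f;
  arma::mat coordinates = arma::randu<arma::mat>(10, 1);

  ens::CMAES<> cmaes;  // Default parameters; lambda, step size, etc. would be tuned.
  cmaes.Optimize(f, coordinates);

  coordinates.print("optimum (should be near zero):");
}

If something like this already works, then I suppose "enhance" refers to the variants discussed in the references rather than the baseline algorithm.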
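For the second idea, the core of an attention layer is scaled dot-product attention, softmax(Q K^T / sqrt(d_k)) V. Below is a minimal sketch written directly with Armadillo (which mlpack builds on); the function name and the row-per-token layout are my own choices for illustration, not an existing mlpack API, and a real layer would of course need to fit mlpack's ANN layer interface with a backward pass.

#include <armadillo>
#include <cmath>

// Scaled dot-product attention: softmax(Q * K^T / sqrt(d_k)) * V.
// Each row of Q, K, V is one token; d_k is the number of columns of K.
arma::mat ScaledDotProductAttention(const arma::mat& Q,
                                    const arma::mat& K,
                                    const arma::mat& V)
{
  // Similarity of every query to every key, scaled by sqrt(d_k).
  arma::mat scores = Q * K.t() / std::sqrt((double) K.n_cols);

  // Numerically stable row-wise softmax: shift by each row's maximum.
  arma::mat shifted = scores.each_col() - arma::max(scores, 1);
  arma::mat expScores = arma::exp(shifted);
  arma::mat weights = expScores.each_col() / arma::sum(expScores, 1);

  // Output is a weighted combination of the value rows.
  return weights * V;
}

int main()
{
  // Toy example: 4 tokens with embedding dimension 8.
  arma::mat Q(4, 8, arma::fill::randn);
  arma::mat K(4, 8, arma::fill::randn);
  arma::mat V(4, 8, arma::fill::randn);

  ScaledDotProductAttention(Q, K, V).print("attention output:");
}

Feeding the same Q, K, and V through the equivalent PyTorch computation and comparing the outputs would be a natural first unit test, before moving on to multi-head attention and the full encoder/decoder stack.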
Best,
Yihan
