It seems like a great idea to me. I would love to collaborate on this feature. Also, could you check the file link? It does not seem to open the document.
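For anyone following along, here is a minimal numpy sketch of the maxout activation from the Goodfellow et al. (2013) paper linked below, just to illustrate the idea (the array shapes and the `maxout` helper are my own illustration, not an existing scikit-learn API): each output unit computes k affine "pieces" and takes their elementwise maximum.

```python
import numpy as np

def maxout(x, W, b):
    """Maxout activation: per output unit, take the max over k affine pieces.

    x: (n_samples, n_in) inputs
    W: (k, n_in, n_out) weights, one (n_in, n_out) matrix per piece
    b: (k, n_out) biases, one row per piece
    """
    # z has shape (k, n_samples, n_out): one affine map per piece
    z = np.einsum('kio,ni->kno', W, x) + b[:, None, :]
    # elementwise maximum over the k pieces
    return z.max(axis=0)

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))
W = rng.normal(size=(4, 3, 2))   # k=4 pieces, 3 inputs, 2 outputs
b = rng.normal(size=(4, 2))
out = maxout(x, W, b)            # shape (5, 2)
```

Note that with k=2 and one piece fixed to the zero map, maxout reduces to ReLU, which is one way to see why a few maxout layers can be expressive.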
Regards,
Siddharth Gupta
Ph: 9871012292
LinkedIn <https://www.linkedin.com/in/sidgupta234/> | GitHub <https://github.com/sidgupta234> | Codechef <https://www.codechef.com/users/sidgupta234> | Twitter <https://twitter.com/SidGupta234> | Facebook <https://www.facebook.com/profile.php?id=1483695876>

On Sun, Sep 25, 2016 at 3:14 PM, 深谷亮祐 <nannyakan...@gmail.com> wrote:
> Hi everyone,
>
> My name is Ryosuke Fukatani.
> I'm joining the scikit-learn community and am really excited to work with you all.
>
> Today I propose a new feature, "maxout activation", for MLP.
> Maxout activation has achieved high classification performance
> (http://www.jmlr.org/proceedings/papers/v28/goodfellow13.pdf).
>
> It also seems that only a few layers are needed when using maxout,
> so I think it is a suitable feature for the lightweight MLP in scikit-learn.
>
> If it's OK, I'd like to work on it.
>
> Best regards
>
> _______________________________________________
> scikit-learn mailing list
> scikit-learn@python.org
> https://mail.python.org/mailman/listinfo/scikit-learn