2013/2/5 David Lambert <caliband...@gmail.com>:
> Hi,
>
> I'm new to the list so please forgive my trespasses...
>
> I've nearly completed an implementation of the Extreme Learning Machine (very 
> fast SLFN with randomly generated hidden units and no iterative tuning) based 
> on the .14 release for my own use.  I'm not sure what state it needs to be in 
> before I try to integrate it into the package and submit a pull request, and 
> what level of interest there is, if any.
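For readers unfamiliar with the term: the quoted description (random, untrained hidden units plus a single least-squares fit of the output weights) can be sketched in a few lines of NumPy. This is a minimal illustration of the idea, not David's implementation; the activation (tanh), the number of hidden units, and the toy data are all arbitrary choices here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem: learn y = sin(x) on [0, 2*pi].
X = rng.uniform(0, 2 * np.pi, size=(200, 1))
y = np.sin(X).ravel()

n_hidden = 50  # number of random hidden units (a free choice)

# 1. Random hidden layer: weights and biases are drawn once and never
#    updated -- this is what makes ELM training non-iterative.
W = rng.normal(size=(X.shape[1], n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)  # hidden-layer activations

# 2. Output weights: one least-squares solve via the pseudoinverse.
beta = np.linalg.pinv(H) @ y

# Predict on a fresh grid of points.
X_test = np.linspace(0, 2 * np.pi, 100).reshape(-1, 1)
y_pred = np.tanh(X_test @ W + b) @ beta
```

The entire "training" step is the single `pinv` solve, which is why ELM is advertised as very fast compared to backprop-trained networks.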

Hi David,

Am I right to assume that the main reference for ELM is
http://www.ntu.edu.sg/home/egbhuang/ ?

I had never heard of that term, but it seems to share many of the
ideas of random kitchen sinks and kernel approximations:

http://berkeley.intel-research.net/arahimi/random-features/

http://scikit-learn.org/dev/modules/kernel_approximation.html
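To make the connection concrete, here is a quick sketch of the kernel-approximation route with scikit-learn's `RBFSampler` (random Fourier features): project the data through a fixed random nonlinear map, then fit a plain linear model on top, which is structurally very close to what ELM does. The toy dataset and hyperparameters below are illustrative assumptions.

```python
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Toy two-class problem: label points by whether they fall inside
# or outside the unit circle (not linearly separable in input space).
X = rng.normal(size=(500, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)

# Random Fourier features approximating an RBF kernel; like ELM's
# hidden layer, the random map is fixed and never trained.
rbf = RBFSampler(gamma=1.0, n_components=300, random_state=0)
X_features = rbf.fit_transform(X)

# A linear classifier on the mapped features mimics a kernel machine.
clf = SGDClassifier(max_iter=1000, random_state=0)
clf.fit(X_features, y)
```

The main difference is the random map itself (Fourier features drawn to approximate a specific kernel, versus ELM's arbitrary random hidden units) and the linear solver used on top.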

Do you have any reference that compares the two approaches on non-toy datasets?

Also, there is ongoing work to implement a multi-layer perceptron:

https://github.com/scikit-learn/scikit-learn/pull/1653

I suspect we won't integrate any new work on randomized variants of
MLPs before that PR finishes implementing traditional MLP training.

--
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel

_______________________________________________
Scikit-learn-general mailing list
Scikit-learn-general@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
