Hi,
I'm new to the list so please forgive my trespasses...
I've nearly completed an implementation of the Extreme Learning Machine (very
fast SLFN with randomly generated hidden units and no iterative tuning) based
on the 0.14 release for my own use. I'm not sure what state it needs to be in …
Hi David,
What is an SLFN?
Do you have any pointer to a reference paper?
Best,
Gilles
2013/2/5 Olivier Grisel olivier.gri...@ensta.org:
Am I right to assume that the main reference for ELM is
http://www.ntu.edu.sg/home/egbhuang/ ?
I've tried ELMs once. Apart from the hyped-up name, they're neural
nets with random input-to-hidden weights (a very old idea) and a
least-squares fit of the hidden-to-output weights …
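[Editor's note: for concreteness, a minimal sketch of the scheme Lars
describes. This is not David's patch; the class name, n_hidden default,
and tanh activation are illustrative assumptions, and the least-squares
readout shown is the regression form (classification would one-hot
encode y first).]

import numpy as np

class SimpleELM:
    # Hypothetical minimal ELM: a random, untrained input-to-hidden
    # projection, with only the output weights fit by least squares.
    def __init__(self, n_hidden=100, random_state=None):
        self.n_hidden = n_hidden
        self.random_state = random_state

    def fit(self, X, y):
        rng = np.random.RandomState(self.random_state)
        # Input-to-hidden weights are drawn once at random, never tuned.
        self.W_ = rng.randn(X.shape[1], self.n_hidden)
        self.b_ = rng.randn(self.n_hidden)
        H = np.tanh(np.dot(X, self.W_) + self.b_)
        # The only "training": least-squares fit of hidden-to-output weights.
        self.beta_ = np.linalg.lstsq(H, y, rcond=None)[0]
        return self

    def predict(self, X):
        H = np.tanh(np.dot(X, self.W_) + self.b_)
        return np.dot(H, self.beta_)

Since there is no iterative tuning, fit cost is dominated by one matrix
product and one least-squares solve, which is where the "very fast"
claim comes from.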
Olivier Grisel wrote:
Am I right to assume that the main reference for ELM is
http://www.ntu.edu.sg/home/egbhuang/?
Absolutely.
Lars Buitinck wrote:
I've tried ELMs once. Apart from the hyped-up name, they're neural
nets with random input-to-hidden weights (a very old idea) and
least-squares …
2013/2/5 Olivier Grisel olivier.gri...@ensta.org:
Actually after reading @larsmans implementation I think we could
indeed start to investigate independently in pure python code and
later think whether it's worth to make it more interoperable with the
cython code of the MLP pull request.
Yes, it's very simple to implement. The main thing that needs to be
solved is the API. Actually, your remark about stacking something else
than least-squares on top of a random hidden layer made me think:
would it be a good idea to …
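[Editor's note: to make the API question concrete, one hypothetical
shape it could take is the random hidden layer as an ordinary
scikit-learn transformer, so that anything, not just least-squares,
can be stacked on top via a Pipeline. RandomHiddenLayer and its
parameters are made up for illustration, not taken from the PR.]

import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

class RandomHiddenLayer(BaseEstimator, TransformerMixin):
    # Hypothetical transformer: the fixed random projection of an ELM,
    # exposed through the standard fit/transform interface.
    def __init__(self, n_hidden=100, random_state=None):
        self.n_hidden = n_hidden
        self.random_state = random_state

    def fit(self, X, y=None):
        rng = np.random.RandomState(self.random_state)
        self.W_ = rng.randn(X.shape[1], self.n_hidden)
        self.b_ = rng.randn(self.n_hidden)
        return self

    def transform(self, X):
        return np.tanh(np.dot(X, self.W_) + self.b_)

# Any supervised estimator can then play the output-layer role:
clf = make_pipeline(RandomHiddenLayer(n_hidden=200, random_state=0),
                    LogisticRegression())
# clf.fit(X, y); clf.predict(X_new)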
On 2/5/2013 10:24 AM, Olivier Grisel wrote:
Actually after reading @larsmans implementation I think we could
indeed start to investigate independently in pure python code and
later think whether it's worth to make it more interoperable with the
cython code of the MLP pull request.
All right.
2013/2/5 Gael Varoquaux gael.varoqu...@normalesup.org:
Do we need such genericity? Are there real practical gains to such
modularity? My gut feeling would be to move forward with something
simple, and make it performant statistically and computationally for
the common cases.
Perhaps not, but …