If you care to work on it, you should work on it. Implementations exist or don't exist because someone created them, or because nobody was interested in creating them.
I have never heard of 'extreme learning' and found this summary: http://www.slideshare.net/formatc666/extreme-learning-machinetheory-and-applications

If it's accurate, this is just describing a single-hidden-layer model trained with backpropagation. I don't see what's new; the part about learning the beta weights is simple linear algebra (a rough sketch of that step is at the bottom of this mail). If it's just a single-hidden-layer model, it's not necessarily better than SVMs, no.

On Tue, Apr 30, 2013 at 11:05 AM, Louis Hénault <[email protected]> wrote:
> Hi everybody,
>
> Many people are trying to integrate SVMs into Mahout. I can understand why,
> since SVMs are really efficient in a "small data" context.
> But, as you may know, SVMs have:
> - a slow learning speed
> - poor learning scalability
>
> In contrast, ELMs give results that are usually at least as good as SVMs'
> and are something like 1000x faster.
> So, why not work on this topic?
>
> (Sorry if someone already talked about it; I'm new to this mailing list and
> did not find anything after some research.)
>
> Regards
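
To illustrate the "simple linear algebra" bit: here is a rough numpy sketch of how I read the slides. The names (elm_train, elm_predict) are just mine, not anything from Mahout or the slides: fix random input-to-hidden weights, then solve for the output weights beta with a least-squares fit (pseudo-inverse).

import numpy as np

def elm_train(X, T, n_hidden=100, seed=0):
    # X: (n_samples, n_features), T: targets, e.g. one-hot (n_samples, n_classes)
    rng = np.random.default_rng(seed)
    # Random, fixed input weights and biases for the single hidden layer.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)            # hidden-layer activations
    # "Learning beta" is just a linear least-squares solve via the pseudo-inverse.
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

So there is no iterative training of the hidden layer at all; the only "learning" is that one matrix solve, which is why it is fast but also why it is basically a random-feature linear model.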
