This really looks like a random projection followed by something like
regularized regression.
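For concreteness, here is a minimal numpy sketch of that reading of ELM;
everything here (the function names, n_hidden, reg, the tanh nonlinearity)
is my own illustration, not any particular implementation:

    import numpy as np

    def elm_fit(X, y, n_hidden=200, reg=1e-2, seed=0):
        """Random projection + ridge regression, ELM-style."""
        rng = np.random.default_rng(seed)
        W = rng.normal(size=(X.shape[1], n_hidden))  # fixed random input weights
        b = rng.normal(size=n_hidden)                # fixed random biases
        H = np.tanh(X @ W + b)                       # random nonlinear features
        # Regularized least squares on the hidden activations:
        # beta = (H'H + reg*I)^-1 H'y
        beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta

The only trained parameters are the output weights beta, so "training" is a
single regularized linear solve; the hidden layer is never learned.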

It is not news that many applications of neural nets don't need multiple
layers, especially in large systems.  Likewise, it isn't news that random
projection preserves approximate metrics and thus allows learning.
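As a toy check of that metric-preservation point (again just my own
illustration, not a proof): project Gaussian points down with a scaled
random matrix and compare pairwise distances before and after.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 1000))              # 50 points in 1000 dims
    k = 200
    R = rng.normal(size=(1000, k)) / np.sqrt(k)  # scaled random projection
    Y = X @ R                                    # project down to k dims

    def pairwise(A):
        return np.linalg.norm(A[:, None] - A[None, :], axis=-1)

    i, j = np.triu_indices(50, 1)
    ratio = pairwise(Y)[i, j] / pairwise(X)[i, j]
    print(ratio.min(), ratio.max())  # ratios stay close to 1 (JL-style distortion)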

I don't see anything new here at all.  Could be I am missing the point.



On Tue, Apr 30, 2013 at 3:05 AM, Louis Hénault <[email protected]> wrote:

> Hi everybody,
>
> Many people are trying to integrate SVMs into Mahout. I can understand
> why, since SVMs are really effective in a "small data" context.
> But, as you may know, SVMs have:
> - slow learning speed
> - poor learning scalability
>
> In contrast, ELMs give results that are usually at least as good as SVMs'
> and are something like 1000x faster.
> So, why not try working on this topic?
>
> (Sorry if someone has already talked about this; I'm new to this mailing
> list and did not find anything after some searching.)
>
> Regards
>
