Btw, for people who are interested in the subject, I just found
this paper:
http://arxiv.org/pdf/1109.4603
It approximates the RBF kernel using Taylor expansions and seems to
work better than the Rahimi and Recht method on sparse datasets.
My datasets are not sparse, so I didn't really look into it.
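From skimming it, the trick seems to be a random Maclaurin expansion
of a dot-product kernel; roughly like the sketch below (the function
name and the restriction to the exponential kernel exp(gamma * <x, y>)
are mine, not the paper's):

    import numpy as np
    from math import factorial

    def random_maclaurin_features(X, n_components=500, gamma=1.0, seed=0):
        # Random Maclaurin feature map for the dot-product kernel
        # k(x, y) = exp(gamma * <x, y>), whose Maclaurin coefficients
        # a_n = gamma**n / n! are all nonnegative.
        rng = np.random.RandomState(seed)
        n_samples, n_features = X.shape
        Z = np.empty((n_samples, n_components))
        for i in range(n_components):
            # Pick the expansion order with P[N = n] = 2 ** -(n + 1).
            N = rng.geometric(0.5) - 1
            a_N = gamma ** N / factorial(N)
            # A product of N Rademacher projections gives an unbiased
            # estimate of <x, y> ** N.
            proj = np.ones(n_samples)
            for _ in range(N):
                w = rng.choice([-1.0, 1.0], size=n_features)
                proj *= X.dot(w)
            # Reweight by 1 / P[N = n] to keep the estimator unbiased.
            Z[:, i] = np.sqrt(a_N * 2.0 ** (N + 1)) * proj
        return Z / np.sqrt(n_components)

Z(x).dot(Z(y)) then approximates exp(gamma * <x, y>) in expectation,
and the RBF kernel reduces to that form after factoring out the
exp(-gamma * ||x||^2) terms.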
2011/11/14 Andreas Müller:
>
> If you're still interested in MNIST results:
> using gamma=0.03 and C=1 I get 0.9845 with SVC in 12 minutes
> (20GB kernel cache, don't know how much was used),
> with the same parameters on LinearSVC and 5000 sampled features
> I get 0.9783 in ~3 minutes.
> Going up […]
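For reference, the comparison would look something like the sketch
below (I'm assuming an RBFSampler-style transformer for the random
features, and random toy data stands in for MNIST):

    import numpy as np
    from sklearn.kernel_approximation import RBFSampler
    from sklearn.svm import SVC, LinearSVC

    # Toy stand-in for the MNIST pixels, scaled to [0, 1].
    rng = np.random.RandomState(0)
    X, y = rng.rand(300, 64), rng.randint(0, 10, 300)

    # Exact kernelized SVM with the parameters quoted above.
    svc = SVC(kernel='rbf', gamma=0.03, C=1).fit(X, y)

    # Approximate feature map plus linear SVM: 5000 random features.
    rbf_map = RBFSampler(gamma=0.03, n_components=5000, random_state=0)
    X_feat = rbf_map.fit_transform(X)
    linear_svc = LinearSVC(C=1).fit(X_feat, y)

The linear variant trades a bit of accuracy for a large speedup, and
rbf_map.transform() is applied to the test data the same way.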
2011/11/11 Gael Varoquaux:
> On Fri, Nov 11, 2011 at 04:11:46PM +0100, Andreas Müller wrote:
>> > If you find that it does work/is useful on real problems, yes!
>> I just started working on it. Atm I can get 3% error on MNIST using
>> sklearn's SGD.
>
> Does sound good.
Doesn't a grid-searched gaussian kernel SVC do better on MNIST?
On Fri, Nov 11, 2011 at 04:11:46PM +0100, Andreas Müller wrote:
> > If you find that it does work/is useful on real problems, yes!
> I just started working on it. Atm I can get 3% error on MNIST using
> sklearn's SGD.
Does sound good.
I find that one of the values of the scikit, and in particular […]
On Fri, Nov 11, 2011 at 04:00:54PM +0100, Andreas Müller wrote:
> I just implemented the paper "Random Features for Large-Scale Kernel
> Machines".
The Rahimi and Recht one :). It's been on my desktop, waiting for
implementation for something like a year.
> I was wondering whether this would be interesting.
Hey everybody.
I just implemented the paper "Random Features for Large-Scale Kernel
Machines".
It proposes to use Monte Carlo approximations of the kernel mapping to
make explicit kernel maps possible.
This is awesome, since explicit kernel maps make it possible to use SGD
classifiers easily.
I was wondering whether this would be interesting.
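For the curious, the core of the method fits in a few lines of numpy.
Here is a minimal sketch of the random Fourier map for the RBF kernel
(simplified; the names are mine):

    import numpy as np

    def rbf_random_features(X, n_components=500, gamma=1.0, seed=0):
        # Monte Carlo approximation of the RBF kernel feature map:
        # exp(-gamma * ||x - y||^2) ~= z(x).dot(z(y)) with
        # z(x) = sqrt(2 / D) * cos(x.dot(W) + b).
        rng = np.random.RandomState(seed)
        # Frequencies are drawn from the Fourier transform of the
        # kernel (a Gaussian with variance 2 * gamma), phases uniform.
        W = rng.normal(scale=np.sqrt(2 * gamma),
                       size=(X.shape[1], n_components))
        b = rng.uniform(0, 2 * np.pi, size=n_components)
        return np.sqrt(2.0 / n_components) * np.cos(X.dot(W) + b)

    # Sanity check: the approximate kernel matrix approaches the exact
    # one as n_components grows (the error shrinks like 1 / sqrt(D)).
    X = np.random.RandomState(42).rand(5, 3)
    Z = rbf_random_features(X, n_components=20000, gamma=0.5)
    exact = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1))
    print(np.abs(Z.dot(Z.T) - exact).max())

The transformed data can then be fed directly to SGDClassifier or
LinearSVC.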