Hi all,
nice to hear about another OpenOpt application.
> For small non linear problems having an exact SVM/SVR solver
> (not approximated) is very useful IMHO.
I'm not sure what this means: "For small non linear problems having
an exact SVM/SVR solver (not approximated) is very useful IMHO."
On Fri, Sep 28, 2012 at 3:48 PM, Mathieu Blondel wrote:
> # If you do subgradient descent, you can use non-smooth losses. In the
> paper I mentioned, the author is using Newton's method, which is why he's
> using differentiable losses.
>
Exactly. In fact, ralg supports non-smooth functions [1] via
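A bare-bones illustration of the subgradient point above: with the non-smooth hinge loss you can still make progress without Newton by following a subgradient. Plain NumPy sketch, not the ralg-based code referred to here; the step-size schedule and regularization strength are arbitrary choices.

import numpy as np

def hinge_svm_subgradient(X, y, lam=0.01, n_iter=200):
    """Linear SVM via subgradient descent. X: (n, d), y in {-1, +1}."""
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, n_iter + 1):
        margins = y * np.dot(X, w)
        viol = margins < 1.0           # points where the hinge is active
        # A valid subgradient at the kink is 0, so only margin violators
        # contribute -y_i * x_i to the loss term.
        grad = lam * w - np.dot(y[viol], X[viol]) / n
        w -= grad / (lam * t)          # 1/(lam*t) schedule, Pegasos style
    return w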
On Fri, Sep 28, 2012 at 10:36 PM, Paolo Losi wrote:
>
> My openopt experimentation was motivated exactly by that paper.
>
Interesting! I hadn't read your source code so I was assuming you were
solving a QP :)
# If you do subgradient descent, you can use non-smooth losses. In the
paper I mentioned, the author is using Newton's method, which is why he's
using differentiable losses.
Hi Mathieu,
On Fri, Sep 28, 2012 at 3:16 PM, Mathieu Blondel wrote:
> If you can afford to store the entire kernel matrix in memory, "training
> support vector machines in the primal" [*] seems like the way to go for me.
My openopt experimentation was motivated exactly by that paper.
The reaso
On Fri, Sep 28, 2012 at 11:37 AM, federico vaggi
wrote:
> I would be very interested.
Here is the gist, Federico:
https://gist.github.com/3799831
Paolo
2012/9/26 Andreas Mueller:
>
> Can you give some insights into why this check is necessary and in
> what kind of situations LibSVM fails to converge? I guess it uses
> the duality gap for convergence. Is it the case that this is not
> a good measure sometimes?
I guess this user on stackoverflow w
If you can afford to store the entire kernel matrix in memory, "training
support vector machines in the primal" [*] seems like the way to go to me.
It's really easy to implement in Python + NumPy (OpenOpt cannot be added to
scikit-learn). It's restricted to the squared hinge loss (what Lin et al.
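For reference, here is one minimal reading of the primal Newton idea from that paper, with a precomputed kernel matrix K and the squared hinge loss. This is a sketch only (the paper also handles the intercept and a line search, both omitted here), and lam simply names the ridge parameter.

import numpy as np

def primal_kernel_svm(K, y, lam=1.0, max_iter=20):
    """K: (n, n) kernel matrix, y in {-1, +1}.
    Returns beta, with decision function f(x) = sum_i beta_i k(x, x_i)."""
    n = K.shape[0]
    sv = np.ones(n, dtype=bool)    # start by treating every point as active
    beta = np.zeros(n)
    for _ in range(max_iter):
        beta = np.zeros(n)
        # With the squared hinge loss, the Newton step reduces to a linear
        # system over the current set of margin violators.
        beta[sv] = np.linalg.solve(K[np.ix_(sv, sv)] + lam * np.eye(sv.sum()),
                                   y[sv])
        new_sv = y * np.dot(K, beta) < 1.0
        if np.array_equal(new_sv, sv):
            break
        sv = new_sv
    return beta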
On Fri, Sep 28, 2012 at 2:54 PM, Andreas Mueller
wrote:
> On 28.09.2012 14:50, Paolo Losi wrote:
>
> Hi Olivier,
>
> On Fri, Sep 28, 2012 at 2:28 PM, Olivier Grisel
> wrote:
>
>> What about the memory usage? Do you need to precompute the kernel
>> matrix in advance or do you use some LRU cache for columns as in libsvm?
On Fri, Sep 28, 2012 at 2:32 PM, Andreas Mueller
wrote:
> Dear All.
> Please put on sunglasses before opening the openopt webpage.
>
:-)
> Also: I think the way forward with SVMs is using low rank approximations of
> the kernel matrix.
>
> For "small" datasets, SMO or the version in LASVM seem to
On 28.09.2012 14:50, Paolo Losi wrote:
Hi Olivier,
On Fri, Sep 28, 2012 at 2:28 PM, Olivier Grisel wrote:
What about the memory usage? Do you need to precompute the kernel
matrix in advance or do you use some LRU cache for columns as in
libsvm?
Hi Olivier,
On Fri, Sep 28, 2012 at 2:28 PM, Olivier Grisel wrote:
> What about the memory usage? Do you need to precompute the kernel
> matrix in advance or do you use some LRU cache for columns as in
> libsvm?
>
Unlike libsvm, I do precompute the kernel matrix.
Is it the same scalability w.r.t. n_samples as libsvm?
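To put numbers on that trade-off: a precomputed Gram matrix costs n_samples^2 * 8 bytes in float64, which is exactly what confines this approach to small datasets. A back-of-the-envelope sketch (the sizes and gamma below are made up):

import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

n_samples, n_features = 20000, 50
print("full Gram matrix: %.1f GB" % (n_samples ** 2 * 8 / 1e9))  # ~3.2 GB

X = np.random.randn(2000, n_features)
K = rbf_kernel(X, gamma=0.1)   # (2000, 2000), ~32 MB: this still fits easily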
On 28.09.2012 11:50, Olivier Grisel wrote:
> I have no good answer for this question:
>
> http://stackoverflow.com/questions/12636842/shift-invariant-sparse-coding-in-scikit-learn
>
> Anybody knows whether it would be complicated to implement this? Maybe
> by deriving existing classes of sklearn?
Dear All.
Please put on sunglasses before opening the openopt webpage.
Also: I think the way forward with SVMs is using low rank approximations
of the kernel matrix.
For "small" datasets, SMO or the version in LASVM seem to work very well
imho.
Cheers,
Andy
On 28.09.2012 10:53, Paolo Losi wrote:
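The low-rank route Andy mentions can be sketched with a Nystroem approximation of the kernel feeding a linear SVM. This assumes scikit-learn's Nystroem transformer is available in your version, and the rank and gamma below are arbitrary:

from sklearn.datasets import make_classification
from sklearn.kernel_approximation import Nystroem
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

# Rank-300 approximation of the RBF kernel, then a linear SVM on the
# approximate feature map.
clf = make_pipeline(
    Nystroem(kernel="rbf", gamma=0.1, n_components=300, random_state=0),
    LinearSVC(C=1.0),
)
clf.fit(X, y)
print("training accuracy: %.3f" % clf.score(X, y))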
What about the memory usage? Do you need to precompute the kernel
matrix in advance or do you use some LRU cache for columns as in
libsvm?
Is it the same scalability w.r.t. n_samples as libsvm?
--
Olivier
Hi Christian.
Are you thinking about 1d or 2d convolutions?
I am not so familiar with 1d signal processing but there has
been some work on convolutional sparse coding for image patches.
This is not really planned for sklearn, afaik, though.
In computer vision, I think there was no big difference in
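One cheap way to get 1d shift invariance out of the existing sklearn sparse coding tools, for whoever answers the stackoverflow question: enlarge the dictionary with every circular shift of each atom and encode against that. This is only a sketch of the idea, not a planned feature; the OMP sparsity level is arbitrary.

import numpy as np
from sklearn.decomposition import sparse_encode

def shift_invariant_encode(X, atoms, n_nonzero_coefs=5):
    """X: (n_signals, signal_len); atoms: (n_atoms, signal_len)."""
    signal_len = atoms.shape[1]
    # Replicate every atom at every circular shift.
    shifted = np.vstack([np.roll(a, s) for a in atoms for s in range(signal_len)])
    codes = sparse_encode(X, shifted, algorithm="omp",
                          n_nonzero_coefs=n_nonzero_coefs)
    return codes, shifted   # codes: (n_signals, n_atoms * signal_len)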
I have no good answer for this question:
http://stackoverflow.com/questions/12636842/shift-invariant-sparse-coding-in-scikit-learn
Anybody knows whether it would be complicated to implement this? Maybe
by deriving existing classes of sklearn?
Any pointers in the literature on good implementation
I would be very interested. OpenOpt looks very good, it just has patchy
documentation, so some well-commented examples would be welcome. Perhaps
offer to share the documentation with OpenOpt's developer?
Federico
On Fri, Sep 28, 2012 at 10:53 AM, Paolo Losi wrote:
> Hi all,
>
> I'm following
Hi all,
I'm following the thread about libsvm...
I just wanted to share some impressive results I got by solving SVMs with
OpenOpt [1].
My main use case was to try different loss functions for regression (libsvm
only provides the epsilon-insensitive loss).
In a couple of hours I succeeded in implementing
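To illustrate the "swap the regression loss" idea without implying anything about the actual OpenOpt code (which is in the gist linked above), here is the same construction with plain scipy: fit f(x) = sum_i beta_i k(x, x_i) under a Huber loss plus a ridge penalty, on a precomputed kernel K. The loss, delta and lam are placeholder choices.

import numpy as np
from scipy.optimize import minimize

def kernel_regression_custom_loss(K, y, lam=1.0, delta=1.0):
    def objective(beta):
        r = np.dot(K, beta) - y
        # Huber loss: quadratic near zero, linear in the tails.
        loss = np.where(np.abs(r) <= delta,
                        0.5 * r ** 2,
                        delta * (np.abs(r) - 0.5 * delta))
        return loss.sum() + lam * np.dot(beta, np.dot(K, beta))

    res = minimize(objective, np.zeros(K.shape[0]), method="L-BFGS-B")
    return res.x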
Hello,
there is a nice collection of sparse coding and dictionary algorithms
implemented in scikit-learn. However, it seems there are no
shift-invariant implementations. Are there plans to include any
shift-invariant implementations or is there a way to apply the
implemented algorithms in a shift-invariant way?