Joseph Turian writes:
>
> >> Can anyone compare Theano and openopt for automatic differentiation?
> >
> > I guess I could, but it would hardly be objective, being done by its developer.
>
> I don't mind a biased perspective; please offer it and give your
> feedback on what you'd emphasize.
I don'
>> Can anyone compare Theano and openopt for automatic differentiation?
>
> I guess I could, but it would hardly be objective, being done by its developer.
I don't mind a biased perspective; please offer it and give your
feedback on what you'd emphasize.
---
Joseph Turian writes:
>
> Can anyone compare Theano and openopt for automatic differentiation?
I guess I could, but it would hardly be objective, being done by its developer.
Sebastian Walter has compared some Python AD tools
(http://forum.openopt.org/viewtopic.php?id=316); maybe you could ask him.
There's also the excellent uncertainties package
(http://pypi.python.org/pypi/uncertainties/) for implicit automatic
differentiation.
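For anyone curious, here is a minimal sketch of that implicit style with the
uncertainties package (the numbers are arbitrary, and the std_dev argument is
only there because the package is built around error propagation):

from uncertainties import ufloat, umath

x = ufloat(2.0, 0.1)        # value 2.0; the 0.1 std dev is just a placeholder
y = umath.sin(x) * x ** 2   # ordinary-looking arithmetic

# First derivatives are carried along with the value:
print(y.derivatives[x])     # dy/dx = 2*x*sin(x) + x**2*cos(x) at x = 2.0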
Can anyone compare Theano and openopt for automatic differentiation?
On Fri, Sep 28, 2012 at 2:36 PM, Dmitrey wrote:
> Hi all,
> nice to hear about another OpenOpt application.
>
>> For small non-linear problems, having an exact SVM/SVR solver
>> (not approximated) is very useful IMHO.
>
> I'm not sure what this means: "For small non-linear problems, having
> an exact SVM/SVR solver (not approximated) is very useful IMHO."
Paolo Losi writes:
>
> I'm looking forward to the new enhancement. Have you got any links about it?
>
> Paolo
He informed me about the enhancement by phone. Currently I have found only one
link on the internet,
http://www.aticmd.md/wp-content/uploads/2012/03/A_42_Jurbenco_2_.pdf, but:
* At first you
Hi Dmitrey,
On Fri, Sep 28, 2012 at 8:36 PM, Dmitrey wrote:
>
> > For small non-linear problems, having an exact SVM/SVR solver
> > (not approximated) is very useful IMHO.
>
> I'm not sure what this means: "For small non-linear problems, having
> an exact SVM/SVR solver (not approximated) is very useful IMHO."
Hi all,
nice to hear about another OpenOpt application.
> For small non-linear problems, having an exact SVM/SVR solver
> (not approximated) is very useful IMHO.
I'm not sure what this means: "For small non-linear problems, having
an exact SVM/SVR solver (not approximated) is very useful IMHO."
On Fri, Sep 28, 2012 at 3:48 PM, Mathieu Blondel wrote:
> # If you do subgradient descent, you can use non-smooth losses. In the
> paper I mentioned, the author is using Newton's method, which is why he's
> using differentiable losses.
>
Exactly. In fact, ralg supports non-smooth functions [1] via
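To make the subgradient point concrete, here is a generic sketch (not ralg
itself, which is a far more sophisticated r-algorithm implementation; the 1/t
step size is just the textbook choice) of subgradient descent on the
non-smooth hinge loss of a linear SVM:

import numpy as np

def svm_subgradient_descent(X, y, lam=0.01, n_iter=1000, eta0=1.0):
    """X: (n, d) samples, y: (n,) labels in {-1, +1}."""
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, n_iter + 1):
        active = y * (X @ w) < 1.0   # samples sitting on the kink of the hinge
        # A subgradient of lam/2*||w||^2 + (1/n)*sum_i max(0, 1 - y_i*<w, x_i>):
        g = lam * w - (y[active, None] * X[active]).sum(axis=0) / n
        w -= (eta0 / t) * g          # diminishing steps, standard for subgradient methods
    return w

No gradient exists at the kink, but any subgradient works; that is the whole
trick that lets you keep the non-smooth loss.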
On Fri, Sep 28, 2012 at 10:36 PM, Paolo Losi wrote:
>
> My openopt experimentation was motivated exactly by that paper.
>
Interesting! I hadn't read your source code so I was assuming you were
solving a QP :)
# If you do subgradient descent, you can use non-smooth losses. In the
paper I mentioned, the author is using Newton's method, which is why he's
using differentiable losses.
Hi Mathieu,
On Fri, Sep 28, 2012 at 3:16 PM, Mathieu Blondel wrote:
> If you can afford to store the entire kernel matrix in memory, "training
> support vector machines in the primal" [*] seems like the way to go for me.
My openopt experimentation was motivated exactly by that paper.
The reaso
On Fri, Sep 28, 2012 at 11:37 AM, federico vaggi wrote:
> I would be very interested.
Here is the gist, Federico:
https://gist.github.com/3799831
Paolo
If you can afford to store the entire kernel matrix in memory, "training
support vector machines in the primal" [*] seems like the way to go for me.
It's really easy to implement in Python + Numpy (OpenOPT cannot be added to
scikit-learn). It's restricted to the squared hinge loss (what Lin et al.
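For reference, a rough sketch of the primal idea with a precomputed kernel
matrix and the squared hinge loss (the paper uses Newton steps; plain gradient
descent with a crude step size is used here only to keep the sketch short):

import numpy as np

def primal_kernel_svm(K, y, lam=0.01, n_iter=500):
    """K: (n, n) precomputed kernel matrix, y: (n,) labels in {-1, +1}."""
    n = K.shape[0]
    beta = np.zeros(n)                        # decision values are f = K @ beta
    nrm = np.linalg.norm(K, 2)                # largest eigenvalue, since K is PSD
    lr = 1.0 / (lam * nrm + 2.0 * nrm ** 2)   # crude Lipschitz-based step size
    for _ in range(n_iter):
        slack = np.maximum(0.0, 1.0 - y * (K @ beta))
        # gradient of lam/2 * b'Kb + sum_i max(0, 1 - y_i*f_i)^2 w.r.t. beta:
        grad = K @ (lam * beta - 2.0 * y * slack)
        beta -= lr * grad
    return beta                               # predict a new x via sum_j beta_j * k(x, x_j)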
On Fri, Sep 28, 2012 at 2:54 PM, Andreas Mueller wrote:
> On 28.09.2012 14:50, Paolo Losi wrote:
>
> Hi Olivier,
>
> On Fri, Sep 28, 2012 at 2:28 PM, Olivier Grisel wrote:
>
>> What about the memory usage? Do you need to precompute the kernel
>> matrix in advance or do you use some LRU cache for columns as in libsvm?
On Fri, Sep 28, 2012 at 2:32 PM, Andreas Mueller wrote:
> Dear All.
> Please put on sunglasses before opening the openopt webpage.
>
:-)
> Also: I think the way forward with SVMs is using low rank approximations
> of the kernel matrix.
>
> For "small" datasets, SMO or the version in LASVM seem to
On 28.09.2012 14:50, Paolo Losi wrote:
Hi Olivier,
On Fri, Sep 28, 2012 at 2:28 PM, Olivier Grisel wrote:
What about the memory usage? Do you need to precompute the kernel
matrix in advance or do you use some LRU cache for columns as in
libsvm?
Hi Olivier,
On Fri, Sep 28, 2012 at 2:28 PM, Olivier Grisel wrote:
> What about the memory usage? Do you need to precompute the kernel
> matrix in advance or do you use some LRU cache for columns as in
> libsvm?
>
Unlike libsvm, I do precompute the whole kernel matrix.
> Is it the same scalability w.r.t. n_samples as libsvm?
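For context on the precompute-vs-cache trade-off: a dense float64 kernel
matrix grows quadratically with n_samples, which is exactly what libsvm's
column cache avoids. A quick back-of-the-envelope check (my numbers, not from
the thread):

n_samples = 20000
bytes_needed = n_samples ** 2 * 8     # one float64 per kernel entry
print(bytes_needed / 2 ** 30)         # ~3.0 GiB for only 20k samples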
Dear All.
Please put on sunglasses before opening the openopt webpage.
Also: I think the way forward with SVMs is using low rank approximations
of the kernel matrix.
For "small" datasets, SMO or the version in LASVM seem to work very well
imho.
Cheers,
Andy
On 28.09.2012 10:53, Paolo Losi wrote:
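To illustrate the low-rank idea Andy mentions, here is a small Nyström sketch
(one standard low-rank scheme; the function names and the RBF kernel below are
illustrative, not anything from OpenOpt or scikit-learn):

import numpy as np

def nystrom_factor(X, kernel, m=100, seed=0):
    """Return G of shape (n, m) with K ~= G @ G.T, using m landmark samples."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    C = kernel(X, X[idx])            # (n, m) block of kernel columns
    W = C[idx]                       # (m, m) landmark-vs-landmark block
    s, U = np.linalg.eigh(W)         # W = U diag(s) U'
    s = np.clip(s, 1e-12, None)      # guard against tiny/negative eigenvalues
    return C @ (U / np.sqrt(s))      # C @ W^(-1/2)

def rbf(A, B, gamma=0.1):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

Training a linear method on G then costs roughly O(n * m^2) instead of
working with the full (n, n) matrix.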
What about the memory usage? Do you need to precompute the kernel
matrix in advance or do you use some LRU cache for columns as in
libsvm?
Is it the same scalability w.r.t. n_samples as libsvm?
--
Olivier
I would be very interested. OpenOpt looks very good; it just has patchy
documentation, so some well-commented examples would be welcome. Perhaps you
could offer to share the documentation with OpenOpt's developer?
Federico
On Fri, Sep 28, 2012 at 10:53 AM, Paolo Losi wrote:
> Hi all,
>
> I'm following
Hi all,
I'm following the thread about libsvm...
I just wanted to share some impressive results I got by solving SVM with
OpenOpt [1].
My main use case was to try different loss functions for regression (libsvm
only provides the epsilon-insensitive loss).
In a couple of hours I succeeded in implementing
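To make that flexibility concrete, here are a few standard regression losses
as plain Python functions (textbook definitions, not Paolo's code): with a
general NLP solver, the loss is just a function you swap out.

import numpy as np

def epsilon_insensitive(r, eps=0.1):   # the one loss libsvm's SVR offers
    return np.maximum(0.0, np.abs(r) - eps)

def squared(r):                        # plain least squares
    return 0.5 * r ** 2

def huber(r, delta=1.0):               # differentiable and robust to outliers
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta))

# r = y_true - y_pred; any of these can serve as the solver's objective term.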