Pity that it is in C++.  Otherwise looks like they have made significant
progress since I last looked at their stuff.

It is also somewhat complementary to what we have.  In our SGD logistic
regression code, for instance, I have pushed forward with automatic
hyper-parameter tuning, on-line performance metrics, and confidence-based
learning, while they have pushed toward Pegasos-style updates and more
perceptron algorithms.
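For reference, the Pegasos update mentioned here is the stochastic
sub-gradient SVM step with a decaying 1/(lambda * t) learning rate and an
optional projection onto a ball of radius 1/sqrt(lambda).  This is a minimal
sketch of that update in Python, not the actual sofia-ml or Mahout code:

```python
import numpy as np

def pegasos_step(w, x, y, lam, t):
    """One Pegasos-style SGD update for a linear SVM.

    w: weight vector, x: feature vector, y: label in {-1, +1},
    lam: regularization strength, t: 1-based step count.
    """
    eta = 1.0 / (lam * t)           # decaying learning rate
    w = (1.0 - eta * lam) * w       # shrink weights (L2 regularization)
    if y * np.dot(w, x) < 1.0:      # hinge-loss margin violated
        w = w + eta * y * x         # move toward the example
    # optional projection onto the ball of radius 1/sqrt(lam)
    radius = 1.0 / np.sqrt(lam)
    norm = np.linalg.norm(w)
    if norm > radius:
        w = w * (radius / norm)
    return w
```

The shrink-then-correct structure is what distinguishes Pegasos from a plain
perceptron update, which only adds on mistakes and never decays the weights.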

Their focus on parsing speed is definitely right on the mark.  In off-list
work, I have built some fast parsing code, and there is a huge benefit to be
had, though it is somewhat moderated when you are running dozens to hundreds
of classifiers against the same input, as with the on-line hyper-parameter
tuning.
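To illustrate why the parsing benefit gets moderated: when many models share
one input, parsing is paid once while the per-model update cost multiplies.
A toy sketch, with an illustrative stand-in `parse` step feeding a pool of
SGD logistic-regression learners (all names hypothetical, not from either
codebase):

```python
import numpy as np

def parse(raw):
    # stand-in for the expensive parsing/featurization step,
    # done once per input regardless of how many learners consume it
    return np.asarray(raw, dtype=float)

def train_pool_on_stream(examples, learn_rates, dim):
    """Feed one parsed feature vector to a pool of logistic learners,
    one per candidate learning rate, as in on-line hyper-parameter tuning.

    examples: iterable of (raw_input, label) with label in {0, 1}
    """
    weights = [np.zeros(dim) for _ in learn_rates]
    for raw, y in examples:
        x = parse(raw)                                # parsed once, shared
        for w, eta in zip(weights, learn_rates):
            p = 1.0 / (1.0 + np.exp(-np.dot(w, x)))   # logistic prediction
            w += eta * (y - p) * x                    # in-place SGD update
    return weights
```

With hundreds of learners in the pool, the inner loop dominates and a faster
parser moves total throughput much less than it would for a single model.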

2010/8/28 Gérard Dupont <[email protected]>

> http://code.google.com/p/sofia-ml/
>
> On Sat, Aug 28, 2010 at 08:49, Robin Anil <[email protected]> wrote:
>
> > Wrong List. Bah humbug!
> >
> >
> > On Sat, Aug 28, 2010 at 12:18 PM, Robin Anil <[email protected]>
> wrote:
> >
> > > http://code.google.com/p/sofia-ml/
> > >
> > > Ted, Zhao. Both of you might want to take a look at this
> > >
> > > Robin
> > >
> >
>
>
>
> --
> Gérard Dupont
> Information Processing Control and Cognition (IPCC) - EADS DS
> http://weblab.ow2.org
>
> Document & Learning team - LITIS Laboratory
>
