Gokhan

On Thu, Nov 28, 2013 at 3:18 AM, Ted Dunning <ted.dunn...@gmail.com> wrote:

> On Wed, Nov 27, 2013 at 7:07 AM, Vishal Santoshi <
> vishal.santo...@gmail.com>
>
> >
> >
> > Are we to assume that SGD is still a work in progress and
> implementations (
> > Cross Fold, Online, Adaptive) are too flawed to be realistically used?
> >
>
> They are too raw to be accepted uncritically, for sure.  They have been
> used successfully in production.
>
>
> > The evolutionary algorithm seems to be the core of
> > OnlineLogisticRegression,
> > which in turn builds up to Adaptive/Cross Fold.
> >
> > >>b) for truly on-line learning where no repeated passes through the
> data..
> >
> > What would it take to get to an implementation? How can anyone help?
> >
>
> Would you like to help on this?  The amount of work required to get a
> distributed asynchronous learner up is moderate, but definitely not huge.
>

Ted, are you describing a generic distributed learner for all kinds of online
algorithms? Possibly ZooKeeper-coordinated, with #predict and
#getFeedbackAndUpdateTheModel methods?
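
Something like the sketch below is what I have in mind; it is only a minimal
illustration, and the interface and method names are placeholders of my own,
not existing Mahout classes:

    import org.apache.mahout.math.Vector;

    // Purely illustrative sketch of the two-method learner described above;
    // the names and signatures are placeholders, not Mahout APIs.
    public interface DistributedOnlineLearner {

      // Score an incoming example against the current local copy of the model.
      double predict(Vector features);

      // Fold the observed label back into the local model; in the distributed
      // case this local update would later be reconciled with peers, e.g.
      // through ZooKeeper-coordinated shared state.
      void getFeedbackAndUpdateTheModel(Vector features, int actual);
    }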

>
> I think that OnlineLogisticRegression is basically sound, but should get a
> better learning rate update equation.  That would largely make the
> Adaptive* stuff unnecessary, especially if OLR could be used in the
> distributed asynchronous learner.
>
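
(As an aside on the learning rate point: one commonly cited candidate for a
per-term rate is an AdaGrad-style schedule. The sketch below is only an
illustration of that kind of update, not what OnlineLogisticRegression does
today; the class and method names are my own.)

    // Illustrative AdaGrad-style per-feature learning rate; offered only as
    // an example of a "better update equation", not the scheme OLR uses.
    public final class PerFeatureRate {
      private final double eta0;          // base learning rate
      private final double[] sumSqGrad;   // accumulated squared gradient per feature

      public PerFeatureRate(int numFeatures, double eta0) {
        this.eta0 = eta0;
        this.sumSqGrad = new double[numFeatures];
      }

      // Record the latest gradient for feature i and return the step size to use.
      public double rateFor(int i, double gradient) {
        sumSqGrad[i] += gradient * gradient;
        return eta0 / (1.0 + Math.sqrt(sumSqGrad[i]));
      }
    }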
