Jake, I would appreciate your comments on this, especially in light of any duplication.
David, if you have any time, your comments are always very welcome as well.

On Wed, Dec 23, 2009 at 12:50 PM, Ted Dunning (JIRA) <j...@apache.org> wrote:

> [ https://issues.apache.org/jira/browse/MAHOUT-228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
>
> Ted Dunning updated MAHOUT-228:
> -------------------------------
>
>     Fix Version/s: 0.3
>            Status: Patch Available  (was: Open)
>
> Here is an early implementation. The learning has been implemented, but
> not tested. Most other aspects are reasonably well tested.
>
> > Need sequential logistic regression implementation using SGD techniques
> > -----------------------------------------------------------------------
> >
> >                 Key: MAHOUT-228
> >                 URL: https://issues.apache.org/jira/browse/MAHOUT-228
> >             Project: Mahout
> >          Issue Type: New Feature
> >          Components: Classification
> >            Reporter: Ted Dunning
> >             Fix For: 0.3
> >
> > Stochastic gradient descent (SGD) is often fast enough for highly
> > scalable learning (see Vowpal Wabbit, http://hunch.net/~vw/).
> > I often need a logistic regression in Java as well, so that is a
> > reasonable place to start.
>
> --
> This message is automatically generated by JIRA.
> -
> You can reply to this email to add a comment to the issue online.

--
Ted Dunning, CTO
DeepDyve
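For anyone following along who has not looked at the patch, here is a minimal sketch of the kind of per-example SGD update the issue describes for binary logistic regression. This is illustrative only, not the MAHOUT-228 code: the class name, the dense double[] features, and the fixed learning rate are all assumptions for the example, whereas the actual patch deals with sparse vectors, regularization, and learning-rate annealing.

// Illustrative sketch only -- not the MAHOUT-228 patch.
// Shows the basic per-example SGD update for binary logistic regression
// on dense features with a fixed (hypothetical) learning rate.
public class SgdLogisticSketch {
  private final double[] beta;       // model coefficients, one per feature
  private final double learningRate; // step size

  public SgdLogisticSketch(int numFeatures, double learningRate) {
    this.beta = new double[numFeatures];
    this.learningRate = learningRate;
  }

  /** Predicted probability that the label is 1 for this feature vector. */
  public double classify(double[] x) {
    double dot = 0;
    for (int i = 0; i < beta.length; i++) {
      dot += beta[i] * x[i];
    }
    return 1.0 / (1.0 + Math.exp(-dot));
  }

  /** One stochastic gradient step on a single (x, label) example. */
  public void train(double[] x, int label) {
    // Gradient of the log-likelihood for logistic regression is
    // (label - p(x)) * x, so each example nudges beta toward reducing
    // its own prediction error.
    double error = label - classify(x);
    for (int i = 0; i < beta.length; i++) {
      beta[i] += learningRate * error * x[i];
    }
  }
}

Training is then a single sequential pass (or a few passes) over the examples, calling train(x, label) once per example, which is what makes the sequential/online formulation attractive compared to batch solvers.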