The output for the first many lines is all zeroes because the evolutionary 
hyperparameter tuning code buffers a few thousand examples before passing them 
to the actual learning algorithm. 
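
To make the buffering behavior concrete, here is a minimal sketch (not the actual Mahout code; the buffer size of 2500 and the `train` function are illustrative assumptions) of why every metric reported before the buffer fills is still at its initial value of zero:

```python
# Illustrative sketch of buffered training: metrics stay at zero
# until the buffer fills and real training begins.

BUFFER_SIZE = 2500  # illustrative; the source only says "a few thousand"

buffer = []
percent_correct = 0.0  # reported metric; zero until training starts

def train(example):
    """Accept one example; return the current percent-correct metric."""
    global percent_correct
    buffer.append(example)
    if len(buffer) < BUFFER_SIZE:
        return percent_correct  # still 0.0 -- nothing has been learned yet
    # Buffer full: this is where the examples would be handed to the
    # real learning algorithm; we just set a placeholder value here.
    percent_correct = 50.0
    return percent_correct
```

So a log line printed for each call to `train` would read 0.0 for the first few thousand examples, exactly as described above.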

Once that buffer fills, the output consists of lines that indicate the progress 
of the learning algorithm. Of most interest are the number of training 
examples, which goes up to 10,000; the percent correct, which is in the range 
0 to 100; and the log likelihood, which is negative, in the range from -3 to 0, 
where zero is good. 
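
A quick sketch of why the log likelihood lands in that negative range (the `log_likelihood` helper here is my own illustration, not part of the tool's output): for a probabilistic classifier, the per-example log likelihood is the log of the probability the model assigned to the observed label, and since probabilities never exceed 1, the log is never positive.

```python
import math

def log_likelihood(p_correct):
    """Log likelihood of one example, given the probability the model
    assigned to the label that was actually observed."""
    return math.log(p_correct)

print(log_likelihood(1.0))   # perfect prediction -> 0.0, the best possible
print(log_likelihood(0.5))   # a coin flip -> about -0.69
print(log_likelihood(0.05))  # confident and wrong -> about -3.0
```

So an average log likelihood near 0 means confident, correct predictions, while values near -3 mean the model is often assigning very low probability to the true labels.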

After the training, there is a dump of important variables in the model and 
then some feature counts. These outputs should be cleaned up and made more 
understandable, but my list is getting long enough that I can't promise when 
I will make this better. 

Sent from my iPhone

On Oct 10, 2010, at 4:39 PM, Joe Kumar <[email protected]> wrote:

> I couldn't understand how to interpret the output and am trying to see where
> I could get more info on the basics of SGD. Any help regarding this would be
> great.