Hello!

For fine-tuning, is there a way to use eval_listfile to monitor the model's 
performance on evaluation fonts? I tried passing this parameter to 
lstmtraining, but the log shows no messages related to eval_listfile being 
used. 

If eval_listfile has no effect on training, is there a more systematic way 
people have been using to prevent overfitting, other than trying different 
models against the evaluation fonts with different target_error_rate or 
max_iterations values, or comparing the various checkpoint files saved 
during training? 
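To frame the question: the only workaround I can think of is to evaluate each saved checkpoint separately with lstmeval, roughly like this (the paths are the same placeholders as below, and I'm assuming the checkpoint files follow the usual model_output naming):

```shell
# Run lstmeval on every checkpoint saved so far and compare error rates
# (sketch only; paths are placeholders).
for ckpt in /path/to/output*checkpoint; do
  echo "=== $ckpt ==="
  training/lstmeval \
    --model "$ckpt" \
    --traineddata /path/to/original/traineddata \
    --eval_listfile /path/to/eval_list/of/filenames.txt
done
```

That works, but it's a manual post-hoc loop rather than monitoring during training, which is what I was hoping eval_listfile would give me.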

I'm following the instructions under "Fine Tuning for Impact" and using the 
following command (except for the last line):

training/lstmtraining --model_output /path/to/output \
  --continue_from /path/to/existing/model \
  --traineddata /path/to/original/traineddata \
  --target_error_rate 0.01 \
  --train_listfile /path/to/list/of/filenames.txt \
  --eval_listfile /path/to/eval_list/of/filenames.txt 


Thanks,

Joan
