Thanks Shree!

On Saturday, March 9, 2019 at 9:53:52 PM UTC-8, [email protected] wrote:
>
> In the ocrd-train Makefile, here is the code for finetuning
>
>
> ifdef START_MODEL
> $(LAST_CHECKPOINT): unicharset lists $(PROTO_MODEL)
> mkdir -p data/checkpoints
> lstmtraining \
>   --traineddata $(PROTO_MODEL) \
>   --old_traineddata $(TESSDATA)/$(START_MODEL).traineddata \
>   --continue_from data/$(START_MODEL)/$(START_MODEL).lstm \
>   --net_spec "[1,36,0,1 Ct3,3,16 Mp3,3 Lfys48 Lfx96 Lrx96 Lfx256 O1c`head -n1 data/unicharset`]" \
>   --model_output data/checkpoints/$(MODEL_NAME) \
>   --learning_rate 20e-4 \
>   --train_listfile data/list.train \
>   --eval_listfile data/list.eval \
>   --max_iterations 10000
>
>
> Why do we need the following line? I thought --net_spec was only used when 
> training from scratch.
> --net_spec "[1,36,0,1 Ct3,3,16 Mp3,3 Lfys48 Lfx96 Lrx96 Lfx256 O1c`head -n1 data/unicharset`]" \
>
>
> Should the learning rate be set lower for fine-tuning? The Makefile uses 
> the same 20e-4 for training from scratch, so it seems the rate for 
> fine-tuning should be significantly lower.
> --learning_rate 20e-4 \
>
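For comparison, the fine-tuning example in the Tesseract 4 training wiki passes --continue_from without any --net_spec (the network architecture is taken from the model being continued, so a --net_spec given alongside --continue_from is ignored), and it does not raise the learning rate. A minimal sketch of such an invocation follows; the paths, the 1e-4 rate, and the iteration count are illustrative assumptions, not values taken from the ocrd-train Makefile:

```shell
# Hypothetical fine-tuning call modeled on the Tesseract 4 training docs.
# Note: no --net_spec here, since --continue_from supplies the architecture;
# the learning rate and max_iterations below are placeholder choices.
lstmtraining \
  --traineddata data/foo/foo.traineddata \
  --old_traineddata tessdata/eng.traineddata \
  --continue_from data/eng/eng.lstm \
  --model_output data/checkpoints/foo \
  --learning_rate 1e-4 \
  --train_listfile data/list.train \
  --eval_listfile data/list.eval \
  --max_iterations 400
```

If this is right, the --net_spec line in the Makefile's fine-tuning branch would be harmless but redundant.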

-- 
You received this message because you are subscribed to the Google Groups 
"tesseract-ocr" group.
Visit this group at https://groups.google.com/group/tesseract-ocr.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/tesseract-ocr/68a226f2-3eeb-49f0-a83d-4f5921a90036%40googlegroups.com.
