Hello all,

I am training Tesseract to recognize text in images taken with a cell phone 
camera. I plan to create a new "language" and two new fonts for this 
training. In theory this should be simple, but in practice I get lower 
accuracy with my new .traineddata than with the standard eng.traineddata. 
The more images I use for training, the lower the accuracy gets.

The text in the images varies in boldness and noise. I've tried 
normalizing the images with ImageMagick (300 DPI density, converted to 
black and white).
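To illustrate what the black-and-white conversion step does, here is a hypothetical pure-Python sketch of binarization using Otsu's method (which automatic thresholding tools approximate); it is a concept demo on a flat list of grayscale values, not the ImageMagick implementation itself:

```python
# Hypothetical sketch of the binarization step that a
# "convert -colorspace Gray -threshold" style pipeline performs.
# Works on a flat list of 0-255 grayscale pixel values.

def otsu_threshold(pixels):
    """Return the 0-255 cutoff that maximizes between-class variance."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(256))
    sum_bg = 0.0
    weight_bg = 0
    best_thresh, best_var = 0, -1.0
    for t in range(256):
        weight_bg += hist[t]
        if weight_bg == 0:
            continue
        weight_fg = total - weight_bg
        if weight_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / weight_bg
        mean_fg = (sum_all - sum_bg) / weight_fg
        var_between = weight_bg * weight_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_thresh = var_between, t
    return best_thresh

def binarize(pixels, thresh):
    """Map every pixel to pure black (0) or pure white (255)."""
    return [0 if p <= thresh else 255 for p in pixels]
```

A fixed global threshold is what makes image no. 2 come out bolder than the others: bold strokes and noise both survive the cutoff, so an adaptive (per-region) threshold may give the training images more uniform stroke weight.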

<https://lh4.googleusercontent.com/-XacxrUljrKE/U8jApPMb5rI/AAAAAAAAAIY/zAGOKMT_T7s/s1600/ktp.general.exp81.jpg>
 
<https://lh4.googleusercontent.com/-AxytQem9yW0/U8jAf5jwU6I/AAAAAAAAAII/AJA_AKsUVvI/s1600/ktp.general.exp01.jpg>
 
<https://lh4.googleusercontent.com/-XCff6pZhEuk/U8jAihIh8-I/AAAAAAAAAIQ/5WQEYdnS0Ls/s1600/ktp.general.exp31.jpg>
Notice that the middle image (no. 2) has bolder letters than the others. 
The white area was cleared out because of noise.

Here's what I've done:
1. Added a word-dawg file containing the common words in the images.
2. Added a unicharambigs file covering common mistakes such as VV for W.
3. Selected good letter samples; noisy letters were excluded from the 
training.
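Steps 1 and 2 amount to a dictionary plus substitution rules. Their combined effect can be sketched as a post-correction pass (a hypothetical stand-alone illustration, not Tesseract's internal dawg/ambiguity machinery; the rule list and word set below are made up):

```python
# Hypothetical sketch: apply unicharambigs-style substitutions, keeping a
# candidate only when it turns a non-dictionary token into a known word.

AMBIGS = [("VV", "W"), ("0", "O"), ("1", "I")]  # example rules like "VV -> W"

def correct_token(token, dictionary):
    """Return the token, or an ambiguity-corrected form found in the dictionary."""
    if token in dictionary:
        return token
    for wrong, right in AMBIGS:
        candidate = token.replace(wrong, right)
        if candidate != token and candidate in dictionary:
            return candidate
    return token

words = {"WARNING", "INDONESIA"}
print(correct_token("VVARNING", words))  # "VV" -> "W" gives a dictionary word
```

The point of the sketch is that a rule only helps when the corrected form is actually in the word list, which is why the dawg and the unicharambigs file should be built from the same corpus of expected words.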

Please suggest what more I should do to get higher accuracy. Thanks in 
advance.

Regards, 

Victoria

-- 
You received this message because you are subscribed to the Google Groups 
"tesseract-ocr" group.