I'm currently using Tesseract to recognize characters from photos of water meters. I've used a variety of methods to extract a clean, black-and-white line of digits from each photo, and I've trained a new character set composed of images of the dials. The biggest issue isn't training Tesseract to recognize the numbers, but rather consistently producing/extracting clear digits from a photo. (If you're interested, my current recognition success rate is over 90%.)
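The original post doesn't say which preprocessing methods were used, but a common first step for getting a clean black-and-white strip of digits is global binarisation with Otsu's method. Below is an illustrative, stdlib-only sketch of that step (the function names and the dark-digits-on-light-background assumption are mine, not from the post); in practice you'd run this on the grayscale pixel values of the cropped meter region before handing the image to Tesseract.

```python
def otsu_threshold(pixels):
    """Pick the grey level (0-255) that best splits the image into
    background and foreground by maximising between-class variance
    (Otsu's method)."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(256))
    sum_bg = 0.0   # running sum of grey levels in the background class
    w_bg = 0       # running pixel count of the background class
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        # Between-class variance; the threshold that maximises it
        # best separates the two intensity populations.
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binarise(pixels):
    """Map dark pixels (the dial digits, assumed darker than the
    background) to black (0) and everything else to white (255)."""
    t = otsu_threshold(pixels)
    return [0 if p <= t else 255 for p in pixels]
```

Real meter photos also need glare removal, deskewing, and cropping to the dial row before a global threshold like this works reliably, which is presumably where most of the remaining <10% of failures come from.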
-- You received this message because you are subscribed to the Google Groups "tesseract-ocr" group.

