I'm trying to extract text from scanned paper forms containing logos, lines, 
boxes, and text. From what I've read, I expected Tesseract to segment the 
page and classify each element. I tried TessBaseAPI::SetPageSegMode() with 
PSM_AUTO, PSM_AUTO_OSD, and a few others, followed by 
TessBaseAPI::AnalyseLayout(), but all I get is a single PT_FLOWING_IMAGE 
block representing the whole page.
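For reference, here is a minimal sketch of the call sequence I'm using, per the Tesseract C++ API (assumes a Tesseract 3.05+ / 4.x install with Leptonica; "form.png" is a placeholder for my scanned form):

```cpp
// Sketch: run layout analysis only and dump the block types found.
#include <tesseract/baseapi.h>
#include <leptonica/allheaders.h>
#include <cstdio>

int main() {
    tesseract::TessBaseAPI api;
    if (api.Init(nullptr, "eng")) return 1;   // default tessdata path

    Pix* image = pixRead("form.png");         // placeholder input image
    api.SetImage(image);
    api.SetPageSegMode(tesseract::PSM_AUTO);  // also tried PSM_AUTO_OSD

    // AnalyseLayout() segments without recognizing; iterate the blocks.
    tesseract::PageIterator* it = api.AnalyseLayout();
    if (it != nullptr) {
        do {
            tesseract::PolyBlockType type = it->BlockType();
            int left, top, right, bottom;
            it->BoundingBox(tesseract::RIL_BLOCK,
                            &left, &top, &right, &bottom);
            printf("block type %d at (%d,%d)-(%d,%d)\n",
                   type, left, top, right, bottom);
        } while (it->Next(tesseract::RIL_BLOCK));
        delete it;
    }

    pixDestroy(&image);
    api.End();
    return 0;
}
```

With the logo present, this loop prints exactly one block of type PT_FLOWING_IMAGE covering the whole page.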

However, if I REMOVE the logo from the form, I then get PT_FLOWING_TEXT, 
PT_HORZ_LINE, and PT_VERT_LINE blocks, and TessBaseAPI::Recognize() does a 
fairly good job recognizing the text, even though the text is not in 
contiguous blocks and is interspersed among the lines.

I have seen examples online of Tesseract segmenting a page and separately 
identifying blocks of text and graphics, but I cannot remember where.

So, I am looking for information and advice on how to have Tesseract 
accurately segment a form that includes images and accurately recognize 
text interspersed among lines and boxes.

Thank you.

-- 
You received this message because you are subscribed to the Google Groups 
"tesseract-ocr" group.
