Dear community, I want to run OCR on a relatively large image dataset (more than 1 million images). Is there a way to reduce the computational time, and therefore the cost?
My setup is 280 dpi A4 pages, and the OCR engine mode will be LSTM. I cannot really find a similar topic, and I think the options may be limited. Here are the points I have gathered from GitHub and this group:

1) Build Tesseract with --disable-openmp
2) Build Tesseract with --disable-static and CXXFLAGS="-Wall -g -O2"

I plan to run it in Docker on a cloud provider (AWS, Azure, or Google Cloud), most likely on a dual-core instance, but if 4 cores would help then I will definitely look into that.

I have not tried the C++ API yet because I am not sure about the performance gain. I suspect that the initialisation and shutdown from calling Tesseract separately for each image may slow down the whole process. Do you all think implementing the pipeline in C++ would help in my case?

Thank you in advance.

-- You received this message because you are subscribed to the Google Groups "tesseract-ocr" group. To view this discussion on the web visit https://groups.google.com/d/msgid/tesseract-ocr/2ab69c70-f171-4700-bbac-023f505713e9%40googlegroups.com.
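For what it's worth, the per-image initialisation cost I am worried about could be avoided with the C++ API by constructing one `tesseract::TessBaseAPI` and reusing it for every image. A minimal sketch of what I have in mind (the image paths and the output handling are placeholders, and this assumes the standard Tesseract and Leptonica development headers):

```cpp
#include <tesseract/baseapi.h>
#include <leptonica/allheaders.h>
#include <cstdio>
#include <string>
#include <vector>

int main() {
    tesseract::TessBaseAPI api;
    // Initialise once per worker: English data, LSTM engine only.
    // This is the expensive step we want to stop repeating per image.
    if (api.Init(nullptr, "eng", tesseract::OEM_LSTM_ONLY) != 0) {
        std::fprintf(stderr, "Could not initialise Tesseract.\n");
        return 1;
    }

    // Placeholder list; in practice this would come from the dataset.
    std::vector<std::string> images = {"page1.png", "page2.png"};
    for (const auto& path : images) {
        Pix* image = pixRead(path.c_str());
        if (!image) continue;
        api.SetImage(image);
        api.SetSourceResolution(280);  // matches the 280 dpi scans
        char* text = api.GetUTF8Text();
        // ... write `text` to wherever the results should go ...
        delete[] text;
        pixDestroy(&image);
    }

    api.End();  // release the engine once, at the end
    return 0;
}
```

Amortising one `Init()` over the whole batch (or over one shard per core) is where I would expect most of the gain over spawning the `tesseract` binary once per image.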

