Hi all,

I've been playing with integrating Tesseract with Ghostscript for the past 
couple of weeks. I have it working nicely, and I've started passing back 
some of the tweaks I've made along the way as pull requests on GitHub 
- thanks to stweil for bearing with me!

The biggest of these is an implementation of matrixDotVector for NEON 
equipped ARMs (intsimdmatrixneon.cpp). This makes a massive difference to 
the speed on ARM devices (such as my Raspberry Pi), but, depending on what 
language data I feed it, profiles still show 30-55% of the runtime in 
this function.
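
For anyone curious, the kernel is essentially a widening multiply-accumulate 
over the 8-bit weights. Roughly this shape - just a sketch to illustrate the 
idea, not a copy of intsimdmatrixneon.cpp, and the helper name is made up:

  #include <arm_neon.h>
  #include <stdint.h>

  // Sketch: multiply 8 signed 8-bit weights by 8 signed 8-bit inputs and
  // fold the products into four 32-bit accumulators.
  static inline int32x4_t mac8(int32x4_t acc, const int8_t *w, const int8_t *u) {
    int16x8_t prod = vmull_s8(vld1_s8(w), vld1_s8(u));  // 8 x (int8*int8) -> int16
    return vpadalq_s16(acc, prod);                      // pairwise add into int32 sums
  }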

It'd be nice to do the whole thing in NEON, but NEON doesn't support 
doubles, so we have to drop back to "standard" scalar operations to add the 
biases and apply the scales. If we were using floats, we'd be golden.
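
To make that concrete: if the accumulated sums, biases and scales were all 
32-bit floats, the final step could stay in NEON registers too. Something 
like the sketch below (written under that assumption, not existing code; the 
function name is invented):

  #include <arm_neon.h>

  // Sketch: finish four outputs entirely in NEON, assuming float biases/scales.
  static inline void finish4(int32x4_t acc, const float *bias,
                             const float *scale, float *out) {
    float32x4_t f = vcvtq_f32_s32(acc);   // int32 dot-product sums -> float32
    f = vaddq_f32(f, vld1q_f32(bias));    // add the biases
    f = vmulq_f32(f, vld1q_f32(scale));   // apply the scales
    vst1q_f32(out, f);                    // store four results
  }

With doubles, that conversion and the bias/scale step have to fall back to 
scalar code, which is the part I'd like to avoid.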

It's possible that the calling code could be tweaked to use floats instead 
of doubles. So, before I dive into this, I thought I'd ask here. 
Presumably, there is a good reason why the existing code uses doubles 
rather than floats?

Am I doomed to degrade the quality of the results by moving to floats?

Thanks in advance for any help/insight people can offer.
