Correct, I scan each page multiple times.
The idea is to align the scans and then combine them in a way that exploits
the added redundancy to reduce noise and speckles.
I am pretty open about the combining algorithm: average, median, something
else; actually I do not care *that* much as long as the results are better.
My current (pretty incomplete) understanding suggests that the median might
give more accurate results, but I'm willing to experiment here.
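For what it's worth, here is a minimal sketch of what I have in mind for the
combining step, assuming the scans are already registered (e.g. with hugin's
align_image_stack) and loaded as equally-shaped grayscale arrays. The function
name and the array-based interface are my own invention, just for illustration:

```python
import numpy as np

def combine_scans(scans, method="median"):
    """Combine aligned scans of the same page, pixel by pixel.

    scans: list of equally-shaped uint8 arrays (one per scan pass).
    """
    # Stack along a new axis so axis 0 indexes the scan passes.
    stack = np.stack([s.astype(np.float32) for s in scans])
    if method == "median":
        # Median is robust: a speckle present in only one pass is ignored.
        combined = np.median(stack, axis=0)
    else:
        # Mean reduces random noise but lets outlier speckles bleed in.
        combined = np.mean(stack, axis=0)
    return np.clip(combined, 0, 255).astype(np.uint8)
```

With three passes, a dust speckle that appears in only one of them vanishes
under the median but still shifts the mean, which is why I lean towards the
median despite being open to alternatives.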
The background is that I'm scanning my books, for going paperless. Well,
paper-frugal, some books will stay :-)
The scanning will be destructive. I want/need to shed the weight and volume
of all that paper.
I cannot go crazy with storage; the NAS size is somewhat limited. 300 dpi
TIFF scans, recompressed with the right PNG settings, will fit. 600 dpi will
fit only with lossy compression, and I suspect it would not carry any more
information than the 300 dpi image, so I didn't pursue that option.
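A back-of-envelope check of why 600 dpi is so much heavier (the 6x9 inch page
size and single-byte grayscale are my assumptions, just to make the arithmetic
concrete):

```python
def raw_bytes(dpi, width_in=6, height_in=9, bytes_per_px=1):
    """Uncompressed size of one grayscale page scan at a given dpi.

    Page dimensions and 8-bit grayscale are hypothetical example values.
    """
    return int(width_in * dpi * height_in * dpi * bytes_per_px)

# Doubling the dpi quadruples the pixel count, so 600 dpi means roughly
# 4x the data before compression -- which is what pushes it past the NAS.
```

So even before compression settings enter the picture, 600 dpi starts from a
4x handicap.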
I also want to keep enough information that if the future comes with
improved OCR software, I can take the scans and redo the OCR.
Since lossy compression might throw away exactly those bits of information
that an improved OCR would exploit, I am somewhat inclined towards lossless
compressors. The good thing is that if I scan at 300 dpi and run PNG
compression over the scans, I'll be able to fit everything on the NAS.
(Sorry for being vague before; if I start with the full specs, nobody reads
that wall of text and I get zero answers. No idea how to do that better.)