Thank you, Andreas!

On Fri, Apr 9, 2021 at 2:24 AM Andreas Lehmkuehler <[email protected]> wrote:
>
> Hi,
>
> On 08.04.21 at 23:33, Tim Allison wrote:
> > Hi All,
> > Two questions on 3.x... do we get the benefits of the on-demand
> > parser "for free" in 3.x? In short, do we have to load/parse the
> > document in a special way to get the (hopefully) lower memory profile?
> The on-demand feature is the only way to parse, so it is the default.
>
> One can use different ways to read the input file, which will have an impact
> on the memory consumption: buffered file, input stream and memory-mapped file.
>
> > Second question: will the on-demand parser be more memory efficient
> > for page splitting?
> The current implementation is the first stage of the on-demand parser. It
> reads only those parts which are needed, but it doesn't drop parts which
> aren't needed anymore. That said, if your interest is limited to parts of a
> pdf (one page out of 100) or to a single aspect of a pdf (text extraction
> instead of rendering), it should have a lower memory footprint; but if you
> have to parse the whole document, you won't save any memory resources (maybe
> there are some minor improvements due to the many refactorings, but I don't
> expect a huge impact).
> Back to your question, it depends on your goal. Splitting a pdf into single
> pages most likely won't save any resources; saving a handful of pages of a
> huge pdf should have a lower footprint.
>
> There is still a lot of room for improvements :-)
>
> Andreas
>
> > Thank you!
> >
> > Cheers,
> >
> > Tim
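[Editor's note: as an illustration of the "parts of a pdf" case Andreas describes (extracting one page of a large document with the 3.x on-demand parser), here is a minimal sketch. It assumes the PDFBox 3.x Loader entry point and the standard PDFTextStripper page-range options; the input file name is just a placeholder.]

    import java.io.File;
    import java.io.IOException;

    import org.apache.pdfbox.Loader;
    import org.apache.pdfbox.pdmodel.PDDocument;
    import org.apache.pdfbox.text.PDFTextStripper;

    public class SinglePageTextExtraction {
        public static void main(String[] args) throws IOException {
            // Placeholder path; any local PDF works here.
            File input = new File("sample.pdf");

            // In 3.x the document is parsed on demand, so limiting the work
            // to a single page should keep the memory footprint low.
            try (PDDocument document = Loader.loadPDF(input)) {
                PDFTextStripper stripper = new PDFTextStripper();
                stripper.setStartPage(1); // page numbers are 1-based
                stripper.setEndPage(1);   // extract text from the first page only
                System.out.println(stripper.getText(document));
            }
        }
    }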
--------------------------------------------------------------------- To unsubscribe, e-mail: [email protected] For additional commands, e-mail: [email protected]
