> 'll leave the rest to the user.
>
> Paulo
>
> > -Original Message-
> > From: [EMAIL PROTECTED]
> > [mailto:[EMAIL PROTECTED] On
> > Behalf Of Massimiliano Ziccardi
> > Sent: Thursday, June 19, 2008 4:01 PM
> > To: [email protected]
> > Subject: Re: [iText-questions] Problem parsing huge PDF files
Hi Paulo.

I solved the problem.
I've looked through the iText sources and I've found the cause.
The PDF file I have to parse is extracted from an Oracle BLOB, so the
RandomAccessFileOrArray received an InputStream as input.
Looking through the RandomAccessFileOrArray class, I've discovered that
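For context, this is roughly what an InputStream-backed reader has to do before any random access is possible: drain the entire stream into one byte array on the heap. A minimal stdlib sketch (the `drain` helper is hypothetical, written here only to illustrate the assumed iText buffering behavior; it is not iText code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamBuffering {
    // Drain the whole stream into a single byte[]; for an 800 MB PDF
    // this alone requires 800 MB of heap before parsing even starts.
    public static byte[] drain(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] fake = new byte[1 << 20]; // stand-in for the PDF bytes from the BLOB
        byte[] copy = drain(new ByteArrayInputStream(fake));
        // The entire "file" now lives on the heap a second time.
        System.out.println(copy.length == fake.length); // prints "true"
    }
}
```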
The problem is that you have a lot of objects, and even though iText only
stores the xref, that is still a few million elements to keep in memory. It
could be possible to keep this in a file instead of in memory, but that's
not how things work now.
Paulo
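Paulo's point can be made concrete with back-of-envelope arithmetic. The object count and per-entry size below are assumptions for illustration, not measured values from iText:

```java
public class XrefEstimate {
    public static void main(String[] args) {
        long objects = 4_000_000L;  // assumed: ~10 PDF objects per page for 400,000 pages
        long bytesPerEntry = 8;     // assumed: one long file offset per object
        long xrefBytes = objects * bytesPerEntry;
        // Memory for the raw offsets alone, before any per-object overhead:
        System.out.println(xrefBytes / (1024 * 1024) + " MB"); // prints "30 MB"
    }
}
```

Thirty-odd megabytes just for raw offsets is manageable, but any per-entry object or collection overhead multiplies that quickly, which is why a few million xref elements strain a default-sized JVM heap.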
Paulo Soares wrote:
> Big PDFs will always require more memory, no miracles here.
And maybe Adobe Reader has access to more memory on your machine
than the amount of memory you allow your JVM to use.
> PdfReader(RandomAccessFileOrArray,byte[])
Even for small files, there's already a significant
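The constructor quoted above can be invoked along these lines. This is a sketch assuming the iText 2.x API; it needs iText on the classpath and a real PDF path (`huge.pdf` is a placeholder), so it is illustrative rather than runnable here:

```java
import com.lowagie.text.pdf.PdfReader;
import com.lowagie.text.pdf.RandomAccessFileOrArray;

public class OpenHugePdf {
    public static void main(String[] args) throws Exception {
        // Back the reader with a file instead of an in-memory byte[],
        // so page data is read from disk on demand and only the xref
        // table is held on the heap.
        RandomAccessFileOrArray raf = new RandomAccessFileOrArray("huge.pdf");
        PdfReader reader = new PdfReader(raf, null); // null = no password
        System.out.println(reader.getNumberOfPages());
        reader.close();
    }
}
```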
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On
> Behalf Of Massimiliano Ziccardi
> Sent: Thursday, June 19, 2008 11:51 AM
> To: [email protected]
> Subject: [iText-questions] Problem parsing huge PDF files
>
Hi all.

I have a big problem. I need to parse very large files (>800 MB) with lots of
pages (>400,000 pages).
When I open such a file with Adobe Acrobat Reader, I have no problem: the
file is opened in less than 1 second, and the reader is able to show me the
page count after a short while.
If I try to instantiate a