Sure ... I put together a directory with my source code (test only), the
source file, and even a DOS batch file with the heap set to 500k. I have
uploaded it with this message.
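For reference, the batch file is just a one-line wrapper around java with an explicit heap cap; a sketch along these lines (the jar and main-class names here are placeholders, not the actual ones in the zip):

```shell
@echo off
rem Placeholder launcher: substitute the real jar and main class from the zip.
rem -Xmx caps the maximum heap; lower it to reproduce the OutOfMemoryError sooner.
java -Xmx500m -cp itext.jar;. TestConcat
```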

Paulo Soares wrote:
> 
> The tests I made didn't indicate any leaks. As I said before, I
> concatenated 1000 files of 500k each using a heap space of 64M. Can
> you send me a couple of your files?
> 
> Paulo 
> 
>> -----Original Message-----
>> From: [EMAIL PROTECTED] 
>> [mailto:[EMAIL PROTECTED] On 
>> Behalf Of dhyton
>> Sent: Wednesday, October 10, 2007 1:40 PM
>> To: itext-questions@lists.sourceforge.net
>> Subject: Re: [iText-questions] Out of memory when
>> concatenating -- I have read the previous threads
>> 
>> 
>> Thank you for the suggestion, but I still have the same memory
>> problems. See the new code below. Is it possible to get you to run the
>> code through a profiler, so you can see the growth of the heap?
>> 
>> I really need to resolve this ... is it possible to pay for support?
>> 
>> public static void main(String[] args) throws IOException, DocumentException
>> {
>>     //PresentationEngine presentationEngine = new PresentationEngine();
>>     //presentationEngine.testMuliThread();
>> 
>>     FileOutputStream out = new FileOutputStream("c:/reporttemp/outfile.pdf");
>>     ArrayList<String> list = new ArrayList<String>();
>>     for(int i = 0; i < 100; i++)
>>         list.add("00db01d80b340d87a0");
>>     File jobDir = new File("c:/newTemp");
>>     Rectangle pageSize = PageSize.LETTER.rotate();
>>     Document allHHDocument = new Document(pageSize, 0, 0, 0, 0);
>>     PdfCopy allHHCopy = new PdfCopy(allHHDocument, out);
>>     allHHDocument.open();
>>     int count = 1;
>>     for(String householdID : list)
>>     {
>>         System.out.println(count++);
>>         PdfReader r = new PdfReader(jobDir.getAbsolutePath() + "/" + householdID + ".pdf");
>>         int numberOfPages = r.getNumberOfPages();
>>         for(int j = 1; j <= numberOfPages; j++)
>>             allHHCopy.addPage(allHHCopy.getImportedPage(r, j));
>>         allHHCopy.freeReader(r);
>>     }
>>     allHHDocument.close();
>>     allHHCopy.close();
>>     out.close();
>>     System.exit(1);
>> }
>> 
>> 
>> 
>> Paulo Soares wrote:
>> > 
>> > I've setup this small program:
>> > 
>> > Document doc = new Document();
>> > PdfCopy cp = new PdfCopy(doc, new FileOutputStream("big.pdf"));
>> > doc.open();
>> > for (int i = 0; i < 1000; ++i) {
>> >     System.out.println(i);
>> >     PdfReader r = new PdfReader("Apache_Axis_Live_SampleChapter.pdf");
>> >     for (int k = 1; k <= r.getNumberOfPages(); ++k) {
>> >         cp.addPage(cp.getImportedPage(r, k));
>> >     }
>> >     cp.freeReader(r);
>> > }
>> > doc.close();
>> > 
>> > The file Apache_Axis_Live_SampleChapter.pdf has 19 pages and is 500k.
>> > The jvm has 64M of heap. If I comment out cp.freeReader(r) it only
>> > writes 144 times before throwing an out of memory exception; otherwise
>> > the resulting pdf has 19000 pages and 500M.
>> > 
>> > Paulo  
>> > 
>> >> -----Original Message-----
>> >> From: [EMAIL PROTECTED] 
>> >> [mailto:[EMAIL PROTECTED] On 
>> >> Behalf Of Paulo Soares
>> >> Sent: Monday, October 08, 2007 6:25 PM
>> >> To: Post all your questions about iText here
>> >> Subject: Re: [iText-questions] Out of memory when
>> >> concatenating -- I have read the previous threads
>> >> 
>> >> Instead of using a FileInputStream, use just the file name in the
>> >> PdfReader constructor. It shouldn't make a difference, but who knows?
>> >> 
>> >> Paulo 
>> >> 
>> >> > -----Original Message-----
>> >> > From: [EMAIL PROTECTED] 
>> >> > [mailto:[EMAIL PROTECTED] On 
>> >> > Behalf Of dhyton
>> >> > Sent: Monday, October 08, 2007 3:55 PM
>> >> > To: itext-questions@lists.sourceforge.net
>> >> > Subject: Re: [iText-questions] Out of memory when 
>> >> > concatenating -- I have read the previous threads
>> >> > 
>> >> > 
>> >> > I realize that some memory must be maintained for references to
>> >> > page addresses and so on. Still, in my case my pdf is about 72k
>> >> > (3 pages) and the memory is growing by about 80k per PDF iteration.
>> >> > To me that would indicate the entire pdf is being held in memory.
>> >> > I have tested both by checking the free memory at runtime and by
>> >> > using JProfiler.
>> >> > 
>> >> > I know the api and forum indicate this is not so, but my testing
>> >> > would indicate otherwise, and I have to resolve this issue one way
>> >> > or another because it is critical to our application.
>> >> > 
>> >> > 
>> >> > 
>> >> > Paulo Soares wrote:
>> >> > > 
>> >> > > You'll have to throw more memory at the jvm. More pages require
>> >> > > more memory, even if all that is kept in memory is references to
>> >> > > the page addresses.
>> >> > > 
>> >> > > Paulo
>> >> > > 
>> >> > > ----- Original Message ----- 
>> >> > > From: "dhyton" <[EMAIL PROTECTED]>
>> >> > > To: <itext-questions@lists.sourceforge.net>
>> >> > > Sent: Friday, October 05, 2007 6:36 PM
>> >> > > Subject: [iText-questions] Out of memory when concatenating 
>> >> > -- I have read 
>> >> > > the previous threads
>> >> > > 
>> >> > > 
>> >> > >>
>> >> > >> I am having a problem with running out of memory when
>> >> > >> concatenating files. I did search the issues and have found no
>> >> > >> solution. Below is my code. The output stream that is passed
>> >> > >> into the method for testing is a FileOutputStream. I am
>> >> > >> currently using itext 1.3, but I tested and had the same issue
>> >> > >> with the latest release. You can see I use a PdfCopy and do not
>> >> > >> keep any references to the reader around.
>> >> > >>
>> >> > >> The heap seems to grow very quickly.
>> >> > >>
>> >> > >> Can you please help?
>> >> > >> Thanks
>> >> > >> David
>> >> > >>
>> >> > >> public static void assembleOnLargePDF(Rectangle pageSize,
>> >> > >>         OutputStream sos, File jobDir, boolean paper,
>> >> > >>         boolean preview, List<String> householdIDs)
>> >> > >>         throws DocumentException, IOException
>> >> > >> {
>> >> > >>     Runtime runtime = Runtime.getRuntime();
>> >> > >>     System.gc();
>> >> > >>     System.out.println("Starting Process " + runtime.freeMemory()
>> >> > >>             + " of " + runtime.maxMemory());
>> >> > >>     Document allHHDocument = new Document(pageSize, 0, 0, 0, 0);
>> >> > >>     PdfCopy allHHCopy = new PdfCopy(allHHDocument, sos);
>> >> > >>     allHHDocument.open();
>> >> > >>     int count = 0;
>> >> > >>     for(String householdID : householdIDs)
>> >> > >>     {
>> >> > >>         System.gc();
>> >> > >>         System.out.println("Starting HH " + count + " "
>> >> > >>                 + runtime.freeMemory());
>> >> > >>         FileInputStream fileInputStream;
>> >> > >>         try
>> >> > >>         {
>> >> > >>             File file = new File(jobDir, householdID + ".pdf");
>> >> > >>             if(file.exists()) fileInputStream = new FileInputStream(file);
>> >> > >>             else continue;
>> >> > >>         }
>> >> > >>         catch(FileNotFoundException e)
>> >> > >>         {
>> >> > >>             continue;
>> >> > >>         }
>> >> > >>         PdfReader r = null;
>> >> > >>         try
>> >> > >>         {
>> >> > >>             r = new PdfReader(fileInputStream);
>> >> > >>             int numberOfPages = r.getNumberOfPages();
>> >> > >>             for(int j = 1; j <= numberOfPages; j++)
>> >> > >>             {
>> >> > >>                 allHHCopy.addPage(allHHCopy.getImportedPage(r, j));
>> >> > >>             }
>> >> > >>             allHHCopy.freeReader(r);
>> >> > >>             sos.flush();
>> >> > >>         }
>> >> > >>         finally
>> >> > >>         {
>> >> > >>             if(r != null)
>> >> > >>                 r.close();
>> >> > >>             fileInputStream.close();
>> >> > >>         }
>> >> > >>         System.gc();
>> >> > >>         System.out.println("Ending HH " + (count++) + " "
>> >> > >>                 + runtime.freeMemory());
>> >> > >>     }
>> >> > >>     allHHDocument.close();
>> >> > >>     allHHCopy.close();
>> >> > >>     System.gc();
>> >> > >>     System.out.println("Ending Process " + runtime.freeMemory());
>> >> > >> }
> 
> 
> 
> Disclaimer:
> 
> This message is destined exclusively to the intended receiver. It may
> contain confidential or legally protected information. The incorrect
> transmission of this message does not mean the loss of its
> confidentiality. If this message is received by mistake, please send it
> back to the sender and delete it from your system immediately. It is
> forbidden to any person who is not the intended receiver to use,
> distribute or copy any part of this message.
> 
> 
> 
> 
> -------------------------------------------------------------------------
> This SF.net email is sponsored by: Splunk Inc.
> Still grepping through log files to find problems?  Stop.
> Now Search log events and configuration files using AJAX and a browser.
> Download your FREE copy of Splunk now >> http://get.splunk.com/
> _______________________________________________
> iText-questions mailing list
> iText-questions@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/itext-questions
> Buy the iText book: http://itext.ugent.be/itext-in-action/
> 
> 
http://www.nabble.com/file/p13138874/Test%2BItext.zip Test+Itext.zip 
-- 
View this message in context: 
http://www.nabble.com/Out-of-memory-when-concatenating----I-have-read-the-previous-threads-tf4576701.html#a13138874
Sent from the iText - General mailing list archive at Nabble.com.


