Re: pdf transformation and xml file sizes

2002-04-18 Thread J.Pietschmann

caleb racey wrote:
> What factors limit the size of the XML files you can transform to PDF?

There are several. However, you seem to have quite
a different problem:

> javax.xml.transform.TransformerException: java.io.IOException:
> Connection reset by peer

You appear to be reading something over HTTP during the
transformation, and the remote side hangs up. There
could be a gazillion reasons for this.

Check whether your source XML contains a DOCTYPE
declaration pointing to a remote DTD (via an http://...
URL); remove it if found. Check whether you read
your XML source over a network connection. Check
whether any document() calls use an http: URL.
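
For instance (a minimal JAXP sketch; the class name and test document are
made up for illustration), you can stop the parser from ever opening that
HTTP connection by installing an EntityResolver that answers every
external-entity request with an empty stream:

```java
import java.io.StringReader;

import javax.xml.parsers.SAXParserFactory;

import org.xml.sax.InputSource;
import org.xml.sax.XMLReader;

public class NoRemoteDtd {

    // Parses the given XML and returns normally even though the DOCTYPE
    // points at a remote DTD, because every external entity is resolved
    // to an empty stream instead of being fetched over the network.
    public static void parseWithoutFetchingDtd(String xml) throws Exception {
        XMLReader reader = SAXParserFactory.newInstance()
                .newSAXParser().getXMLReader();
        reader.setEntityResolver((publicId, systemId) ->
                new InputSource(new StringReader("")));
        reader.parse(new InputSource(new StringReader(xml)));
    }

    public static void main(String[] args) throws Exception {
        // DOCTYPE with a remote SYSTEM id, like the one suspected above.
        String xml =
            "<!DOCTYPE doc SYSTEM \"http://example.invalid/doc.dtd\"><doc/>";
        parseWithoutFetchingDtd(xml);
        System.out.println("parsed without touching the network");
    }
}
```

Without the resolver, most SAX parsers fetch the external DTD even when
they are not validating, which is exactly the kind of hidden HTTP request
that can die with "Connection reset by peer".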

J. Pietschmann


-
Please check that your question has not already been answered in the
FAQ before posting. 

To unsubscribe, e-mail: <[EMAIL PROTECTED]>
For additional commands, e-mail: <[EMAIL PROTECTED]>




Re: pdf transformation and xml file sizes

2002-04-16 Thread Carolien

Erwin wrote:

> On Tue, 16 Apr 2002, Lajos Moczar wrote:
>
> > In my experience, it is memory that is the key factor. Running with
> > 512MB allocated to the JVM, I can produce 53 pages of PDF, but no more.
> > I would have thought that SAX-based processing would allow you to
> > process as much as you want, but obviously there is something with PDF
> > documents that eliminates this advantage.
> >
> > Regards,
> >
> > Lajos
> > galatea.com
> >
> I am able to produce a PDF of +/- 250 pages with 256MB of heap, but
> with no forward references. In my experience it is in fact FOP that
> uses so much memory and forms the barrier.
>
> IIUC FOP has a Java object for almost everything on a page (if you have a
> paragraph on a page that contains some inline elements, this is
> encapsulated in a hierarchy of Java objects). Once a page is completely
> created and every reference is resolved, it is converted into its PDF
> equivalent, and the Java objects can be destroyed.
>
> If you have a lot of forward references (e.g. for page numbers in a
> table of contents), it can take up a lot of memory before everything
> is resolved. Therefore, I am not using a table of contents at the moment.
>
> I am not sure if this is completely clear, but this is my experience.

This is also my experience. I'm working on the same project as Erwin and
have also enlarged my heap size; it is now set to a 256MB minimum. I'm
still able to generate a table of contents for the document, although it
takes some time before the whole document (263 pages) is generated.
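
As a side note (a trivial sketch; the class name is made up), you can
confirm from inside the process how much heap the -Xms/-Xmx flags
actually gave the JVM:

```java
public class HeapCheck {

    // Maximum heap the JVM is willing to grow to, in megabytes;
    // this reflects -Xmx (or the platform default if no flag is given).
    public static long maxHeapMb() {
        return Runtime.getRuntime().maxMemory() / (1024 * 1024);
    }

    public static void main(String[] args) {
        // e.g.  java -Xms256m -Xmx256m HeapCheck
        System.out.println("max heap: " + maxHeapMb() + " MB");
    }
}
```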

Greetings,

Carolien
--
http://carolien.ulyssis.org
http://htmlwedstrijd.ulyssis.org







Re: pdf transformation and xml file sizes

2002-04-16 Thread Erwin

On Tue, 16 Apr 2002, Lajos Moczar wrote:

> In my experience, it is memory that is the key factor. Running with
> 512MB allocated to the JVM, I can produce 53 pages of PDF, but no more.
> I would have thought that SAX-based processing would allow you to
> process as much as you want, but obviously there is something with PDF
> documents that eliminates this advantage.
>
> Regards,
>
> Lajos
> galatea.com
>
I am able to produce a PDF of +/- 250 pages with 256MB of heap, but
with no forward references. In my experience it is in fact FOP that
uses so much memory and forms the barrier.


IIUC FOP has a Java object for almost everything on a page (if you have a
paragraph on a page that contains some inline elements, this is
encapsulated in a hierarchy of Java objects). Once a page is completely
created and every reference is resolved, it is converted into its PDF
equivalent, and the Java objects can be destroyed.

If you have a lot of forward references (e.g. for page numbers in a
table of contents), it can take up a lot of memory before everything
is resolved. Therefore, I am not using a table of contents at the moment.
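
Such a forward reference looks like this in XSL-FO (a hypothetical TOC
entry; the id "chapter-3" is made up). FOP cannot finish the TOC page
until the block carrying that id has been laid out, so everything in
between stays in memory:

```xml
<!-- TOC entry: the page number is unknown until "chapter-3" is laid
     out, so the page holding this citation cannot be finalized yet. -->
<fo:block text-align-last="justify">
  Chapter 3
  <fo:leader leader-pattern="dots"/>
  <fo:page-number-citation ref-id="chapter-3"/>
</fo:block>

<!-- Much later in the document: the target that resolves the citation. -->
<fo:block id="chapter-3">Chapter 3</fo:block>
```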

I am not sure if this is completely clear, but this is my experience.







Re: pdf transformation and xml file sizes

2002-04-16 Thread Lajos Moczar

In my experience, it is memory that is the key factor. Running with 
512MB allocated to the JVM, I can produce 53 pages of PDF, but no more. 
I would have thought that SAX-based processing would allow you to 
process as much as you want, but obviously there is something with PDF 
documents that eliminates this advantage.

Regards,

Lajos
galatea.com


caleb racey wrote:

> What factors limit the size of the XML files you can transform to PDF?
> 
> I'm testing out one of my pipelines that generates a simple PDF from an
> XML file. With a small (2 KB) XML file it all works fine, but as I paste
> more (valid) XML into the file it stops working (at about 23 KB). The
> PDF plugin in Internet Explorer 6.0 says "the file doesn't start with
> %PDF" and Cocoon throws the error:
> 
> FATAL_E (2002-04-16) 09:47.15:055 [core.xslt-processor] (/cocoon/demo/short.pdf)
> HttpProcessor[8080][4]/TraxErrorHandler: Error in TraxTransformer:
> javax.xml.transform.TransformerException: java.io.IOException:
> Connection reset by peer
> javax.xml.transform.TransformerException: java.io.IOException:
> Connection reset by peer
>         at org.apache.xalan.templates.ElemLiteralResult.execute(ElemLiteralResult.java:725)
>         at org.apache.xalan.transformer.TransformerImpl.executeChildTemplates(TransformerImpl.java:2243)
>         at org.apache.xalan.templates.ElemLiteralResult.execute(ElemLiteralResult.java:710)
> 
> 
> see attached file for full error log. 
> 
> My server environment: Red Hat Linux 7.2, Cocoon 2.0.2, JDK 1.3.1_02.
> 
> The XML and the XSL transformations are fine, as they work on smaller
> files.
> 
> Anyone know what is going on? Is this the IE/Acrobat problem that I have
> seen mentioned briefly on the lists?
> 
> Cheers Cal







pdf transformation and xml file sizes

2002-04-16 Thread caleb racey


What factors limit the size of the XML files you can transform to PDF?

I'm testing out one of my pipelines that generates a simple PDF from an
XML file. With a small (2 KB) XML file it all works fine, but as I paste
more (valid) XML into the file it stops working (at about 23 KB). The
PDF plugin in Internet Explorer 6.0 says "the file doesn't start with
%PDF" and Cocoon throws the error:

FATAL_E (2002-04-16) 09:47.15:055 [core.xslt-processor] (/cocoon/demo/short.pdf)
HttpProcessor[8080][4]/TraxErrorHandler: Error in TraxTransformer:
javax.xml.transform.TransformerException: java.io.IOException:
Connection reset by peer
javax.xml.transform.TransformerException: java.io.IOException:
Connection reset by peer
        at org.apache.xalan.templates.ElemLiteralResult.execute(ElemLiteralResult.java:725)
        at org.apache.xalan.transformer.TransformerImpl.executeChildTemplates(TransformerImpl.java:2243)
        at org.apache.xalan.templates.ElemLiteralResult.execute(ElemLiteralResult.java:710)


see attached file for full error log. 

My server environment: Red Hat Linux 7.2, Cocoon 2.0.2, JDK 1.3.1_02.

The XML and the XSL transformations are fine, as they work on smaller
files.

Anyone know what is going on? Is this the IE/Acrobat problem that I have
seen mentioned briefly on the lists?
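
A quick way to see what actually came back (a hedged sketch; the class
name is made up): a real PDF begins with the bytes "%PDF-", so dumping
the first few bytes of the saved response tells you whether you got a
PDF at all or, say, an HTML error page:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class PdfSniff {

    // Returns the first five bytes of the stream as ASCII;
    // for a well-formed PDF this is exactly "%PDF-".
    public static String head(InputStream in) throws IOException {
        byte[] buf = new byte[5];
        int n = 0;
        while (n < buf.length) {
            int r = in.read(buf, n, buf.length - n);
            if (r < 0) break;
            n += r;
        }
        return new String(buf, 0, n, "US-ASCII");
    }

    public static void main(String[] args) throws IOException {
        try (InputStream in = new FileInputStream(args[0])) {
            String magic = head(in);
            System.out.println("%PDF-".equals(magic)
                    ? "looks like a PDF"
                    : "not a PDF, starts with: " + magic);
        }
    }
}
```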

Cheers Cal



[Attachment: error.log]
