Just out of curiosity - why are you bloating the size of the PDF by using
ASCIIHexDecode on these streams?

Leonard


On 11/24/08 9:58 PM, "Jose E. Marchesi" <[EMAIL PROTECTED]> wrote:



Hi David.

   I am writing the crypt filter, and a doubt has arisen related to
   gnupdf's design. According to the PDF Reference, encryption is defined
   (mostly) by a global dictionary. This dictionary holds a list of the
   crypt filters that will be used in the rest of the document.

   I suppose the crypt filter module will receive parameters from its
   filter dictionary. However, that is not enough; the filter dictionary
   is even optional. We must also consider the global data, I think.
   Nevertheless, these parameters will be known by the upper layers.

There is not a 1-1 relationship between a PDF filter and a stm
filter, although that is the common case. Regardless of the origin of
the data, we should identify the stm filters needed to implement the
processing of the higher-level PDF filters.

An example is the JBIG2 decoder. To describe several "images"
(XObjects) a PDF file can use the following streams:

A stream containing global segments to use with all the images:

10 0 obj
<< /Filter /ASCIIHexDecode >>
stream
...
endstream
endobj

and several streams defining the images:

20 0 obj
<< /Filter [/ASCIIHexDecode /JBIG2Decode]
   /DecodeParms [null << /JBIG2Globals 10 0 R >>] >>
stream
...
endstream
endobj

30 0 obj
<< /Filter [/ASCIIHexDecode /JBIG2Decode]
   /DecodeParms [null << /JBIG2Globals 10 0 R >>] >>
stream
...
endstream
endobj

We would use only two stm filters to get the decoded JBIG2 data: one
stm filter per image, using as a parameter for the filters the
contents of the "10 0 R" PDF stream.
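To make the first stage of that [/ASCIIHexDecode /JBIG2Decode] chain concrete, here is a minimal sketch in C of an ASCIIHexDecode stage, written as a standalone function rather than against the real gnupdf stm filter API (whose exact interface is not shown in this thread). Per the PDF spec it skips whitespace, stops at '>', and pads a trailing odd digit with an implicit 0:

```c
#include <ctype.h>
#include <stddef.h>

/* Decode an ASCIIHexDecode stream body from IN into OUT (capacity
   OUTSZ).  Returns the number of decoded bytes, or -1 on a bad
   character or overflow.  In a real filter chain this output would be
   fed to the next stage (e.g. the JBIG2 decoder, parameterized with
   the globals from the "10 0 R" stream). */
static int
ascii_hex_decode (const char *in, unsigned char *out, size_t outsz)
{
  size_t n = 0;
  int hi = -1;          /* pending high nibble, or -1 if none */

  for (; *in != '\0' && *in != '>'; in++)
    {
      int v;

      if (isspace ((unsigned char) *in))
        continue;       /* whitespace is ignored between digits */
      if (*in >= '0' && *in <= '9') v = *in - '0';
      else if (*in >= 'A' && *in <= 'F') v = *in - 'A' + 10;
      else if (*in >= 'a' && *in <= 'f') v = *in - 'a' + 10;
      else return -1;   /* not a hex digit: invalid stream */

      if (hi < 0)
        hi = v;
      else
        {
          if (n == outsz) return -1;
          out[n++] = (unsigned char) ((hi << 4) | v);
          hi = -1;
        }
    }

  /* Odd number of digits: the spec says the last digit is followed
     by an implicit 0. */
  if (hi >= 0)
    {
      if (n == outsz) return -1;
      out[n++] = (unsigned char) (hi << 4);
    }

  return (int) n;
}
```

A JBIG2Decode stage would then be a second stm filter consuming this output, with the decoded globals stream passed in as its parameter.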

--
Leonard Rosenthol
PDF Standards Architect
Adobe Systems Incorporated

