No, there is no official solution for this.
Here's what could be done (in addition to just forking the project, or using
reflection):
- do a setUnicode() on the TextPosition elements in the stripper
- create the encoding and replace the fonts before extracting. For that
you'd have to find out how the encoding is stored (probably in
"differences").
If it doesn't work, you may have to disable the cache or use your own.
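The first suggestion (set the unicode on each TextPosition inside the stripper) can be sketched as an override pattern. The classes below are deliberate stand-ins, not the real PDFBox types; in real code you would subclass org.apache.pdfbox.text.PDFTextStripper, override its processTextPosition(TextPosition) hook, and fix the unicode value there (assuming a setter is available in your PDFBox version).

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in types only -- NOT the real PDFBox classes. This just sketches
// the pattern: intercept each text position in the stripper and replace
// its unicode value before it is emitted.
public class StripperOverrideSketch {

    // Minimal stand-in for org.apache.pdfbox.text.TextPosition.
    static class FakeTextPosition {
        private String unicode;
        FakeTextPosition(String unicode) { this.unicode = unicode; }
        String getUnicode() { return unicode; }
        void setUnicode(String unicode) { this.unicode = unicode; } // assumed setter
    }

    // Minimal stand-in for the stripper's per-glyph hook.
    static class FakeStripper {
        final List<String> output = new ArrayList<>();
        void processTextPosition(FakeTextPosition text) {
            output.add(text.getUnicode());
        }
    }

    // The subclass fixes the missing unicode first, e.g. from the
    // glyph-checksum map described in the question, then delegates.
    static class FixingStripper extends FakeStripper {
        @Override
        void processTextPosition(FakeTextPosition text) {
            if (text.getUnicode() == null) {
                text.setUnicode("A"); // in real code: look up via glyph checksum
            }
            super.processTextPosition(text);
        }
    }

    public static void main(String[] args) {
        FixingStripper stripper = new FixingStripper();
        stripper.processTextPosition(new FakeTextPosition(null));
        System.out.println(stripper.output.get(0)); // prints A
    }
}
```

The point of the pattern is that the correction happens per glyph occurrence, so no font or cache has to be modified at all.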
Tilman
On 10.04.2025 23:48, NH Rao wrote:
Greetings,
Some of the PDF files we process do not have unicode information defined
for their Type 3 fonts. I am in the process of migrating ancient code (based
on version 1.8) to the latest version. Since the characters are limited to
ASCII, we dumped a checksum of each glyph together with its character into a
map. By processing enough files, we managed to collect checksums for all the
characters we care about. At runtime, we take the font glyph, compute its
checksum, and set the equivalent unicode using code that looks similar to the
following:
font.getFontEncoding().addCharacterEncoding(letterChar, charName);
font.getToUnicodeCMap().addMapping(new byte[] { (byte) i }, letter);
With these changes, the rest of the text stripper code works as expected, as
it is able to find the required information.
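The checksum approach described above can be illustrated in isolation. This is a hypothetical sketch using java.util.zip.CRC32; the byte arrays stand in for Type 3 glyph content streams, which in the real code would come from the PDF:

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import java.util.zip.CRC32;

// Illustration of the checksum-to-character lookup: glyph content bytes
// are hashed, and the hash recovers the character. The glyph bytes here
// are stand-ins for real Type 3 glyph streams.
public class GlyphChecksumMap {
    static long checksum(byte[] glyphBytes) {
        CRC32 crc = new CRC32();
        crc.update(glyphBytes);
        return crc.getValue();
    }

    public static void main(String[] args) {
        // Pretend these are the content streams of two glyphs.
        byte[] glyphA = "glyph-drawing-ops-for-A".getBytes(StandardCharsets.US_ASCII);
        byte[] glyphB = "glyph-drawing-ops-for-B".getBytes(StandardCharsets.US_ASCII);

        // Built offline from files whose text is known.
        Map<Long, String> checksumToChar = new HashMap<>();
        checksumToChar.put(checksum(glyphA), "A");
        checksumToChar.put(checksum(glyphB), "B");

        // At extraction time, look up the character for a glyph.
        System.out.println(checksumToChar.get(checksum(glyphA))); // prints A
        System.out.println(checksumToChar.get(checksum(glyphB))); // prints B
    }
}
```

The lookup itself is version-independent; only the step that injects the recovered character into PDFBox changed between 1.8 and the current API.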
We're trying to migrate to the latest released version of PDFBox. I believe
some of these methods are now package protected,
e.g. org.apache.pdfbox.pdmodel.font.encoding.Encoding.add(int, String).
Also, the comment on that method seems to discourage our workaround.
I am not able to figure out which method I need to call for the unicode
mapping in the second line of the code example above.
What would be a solution for this? Mapping glyphs to characters does work
for us, even though we created the map manually.
Regards,
Niranjan
---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@pdfbox.apache.org
For additional commands, e-mail: users-h...@pdfbox.apache.org