[ 
https://issues.apache.org/jira/browse/PDFBOX-3088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14992029#comment-14992029
 ] 

Tilman Hausherr commented on PDFBOX-3088:
-----------------------------------------

Works nicely and gives an amazing speed improvement on my system. However... we are 
already caching glyph paths in TTFGlyph2D.getPathForGID()?! More debugging showed 
that the repeated calls occurred for the elements of composite glyphs, but for other 
glyphs as well. I'll create a task for this so that we don't forget.
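
To make the observation concrete, here is a rough, self-contained sketch (invented class 
names, not the actual FontBox code) of why a GID-to-path cache at the TTFGlyph2D level does 
not remove the repeated table reads: a composite glyph resolves its component GIDs through 
the glyph table itself, below that cache.

{code:java}
import java.util.HashMap;
import java.util.Map;

class GlyphTableSketch {
    // Hypothetical layout: GID 10 is a composite built from components 4 and 7.
    private static final Map<Integer, int[]> COMPOSITES = Map.of(10, new int[] { 4, 7 });

    // The real GlyphTable.getGlyph() synchronizes because the font data stream is
    // shared; that lock is where the contention shows up under concurrent rendering.
    synchronized Object getGlyph(int gid) {
        System.out.println("reading glyph " + gid + " from the table");
        for (int componentGid : COMPOSITES.getOrDefault(gid, new int[0])) {
            // Component GIDs are resolved through the table itself, below the
            // TTFGlyph2D path cache, so they are read again on every request.
            getGlyph(componentGid);
        }
        return new Object(); // stand-in for the parsed glyph / outline
    }
}

class TTFGlyph2DSketch {
    private final GlyphTableSketch glyf = new GlyphTableSketch();
    private final Map<Integer, Object> pathCache = new HashMap<>();

    // The GID -> path cache only helps a repeated request for the same top-level
    // GID; the component reads inside the table never pass through it.
    Object getPathForGID(int gid) {
        return pathCache.computeIfAbsent(gid, glyf::getGlyph);
    }

    public static void main(String[] args) {
        TTFGlyph2DSketch g2d = new TTFGlyph2DSketch();
        g2d.getPathForGID(10); // reads GIDs 10, 4 and 7 from the table
        g2d.getPathForGID(10); // served from the path cache, no table read
        g2d.getPathForGID(4);  // reads GID 4 from the table again
    }
}
{code}

So a cache that sits inside the glyph table, as proposed in this issue, also covers the 
component reads that the path cache cannot see.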

> Cache glyph table to optimize concurrent access
> -----------------------------------------------
>
>                 Key: PDFBOX-3088
>                 URL: https://issues.apache.org/jira/browse/PDFBOX-3088
>             Project: PDFBox
>          Issue Type: Improvement
>          Components: FontBox
>    Affects Versions: 2.0.0
>            Reporter: ccouturi
>            Assignee: Tilman Hausherr
>            Priority: Minor
>              Labels: Optimization
>             Fix For: 2.0.0
>
>         Attachments: 0001-PDFBOX-3088-cache-glyph-table.patch, 
> Benchmark.java, BenchmarkPDFBox3088.java, test_medium.pdf
>
>
> If several threads convert several PDFs to PNG (each thread accessing only a single 
> document at a time), there is contention on a lock in GlyphTable. jstack shows that 
> all threads are blocked on the synchronized block in the getGlyph method. The lock is 
> necessary, that's fine, but it degrades performance.
> This patch caches glyphs that have already been read (a rough sketch of the idea 
> follows the table below).
> Combined with the patch from PDFBOX-3080, the following benchmark compares 1000 PDF 
> conversions with 1, 8, and 50 threads.
> || Simulation || PDFBox 2.0-SNAPSHOT || With this patch + PDFBOX-3080 ||
> | 1000 conversions / 1 thread | 120 s | 71 s |
> | 1000 conversions / 8 threads | 76 s | 28 s |
> | 1000 conversions / 50 threads | 81 s | 33 s |
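>
> The attached patch contains the actual change; the following is only a minimal, 
> self-contained sketch of the general idea, with invented class names standing in for 
> GlyphTable/GlyphData: keep parsed glyphs in a GID-indexed cache so the synchronized 
> parse is paid at most once per glyph, and lock-free lookups serve every later request.
>
> {code:java}
> import java.util.concurrent.ConcurrentHashMap;
> import java.util.concurrent.ConcurrentMap;
>
> // GlyphDataSketch stands in for FontBox's GlyphData; only the caching shape matters here.
> class GlyphDataSketch {
>     final int gid;
>     GlyphDataSketch(int gid) { this.gid = gid; }
> }
>
> class CachedGlyphTableSketch {
>     // Concurrent map reads do not block each other, so threads only meet the
>     // lock the first time a given GID has to be parsed.
>     private final ConcurrentMap<Integer, GlyphDataSketch> cache = new ConcurrentHashMap<>();
>
>     GlyphDataSketch getGlyph(int gid) {
>         GlyphDataSketch glyph = cache.get(gid);
>         if (glyph != null) {
>             return glyph; // fast path: no lock taken at all
>         }
>         synchronized (this) {
>             // Re-check inside the lock so two threads racing on the same GID
>             // do not both parse it.
>             glyph = cache.get(gid);
>             if (glyph == null) {
>                 glyph = parseGlyphFromFontStream(gid);
>                 cache.put(gid, glyph);
>             }
>             return glyph;
>         }
>     }
>
>     // Placeholder for seeking in the shared font stream and parsing the glyf
>     // entry, which is the reason the lock exists in the first place.
>     private GlyphDataSketch parseGlyphFromFontStream(int gid) {
>         return new GlyphDataSketch(gid);
>     }
> }
> {code}
>
> With this shape, threads only contend while a given glyph is parsed for the first 
> time, which is consistent with the timings in the table above.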


