[
https://issues.apache.org/jira/browse/PDFBOX-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Tilman Hausherr updated PDFBOX-5229:
------------------------------------
Description:
From Gunnar Brand:
{quote}There was a severe performance issue with really big masks when the image
has to be scaled to the mask size (e.g. 10000*10000 pixels). Bicubic scaling can
take 6-10 seconds. This patch switches to bilinear resizing in these cases,
although the threshold may still need fine-tuning.
There was also a double allocation for the final masked image, even though the
image can simply be reused, since applyMask() is always fed a newly created one.
Reference hogging and needless allocations have been removed.
Additionally, the alpha blending routines were very slow because they worked
pixel by pixel. There is now a staggered approach:
* direct byte masking, which is very fast even for big images (currently does
not work with padded buffers),
* exploiting the data buffer's sample model to merge the alpha component into
the ARGB image, letting the sample model do the bit masking,
* slow pixel expansion to reverse the premultiplied matte values (but using
fixed-point integer arithmetic).
The interpolation flag of the mask is now also used to decide whether the mask
should be interpolated.
{quote}
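
For illustration only, a minimal sketch of the size-based switch described in the quote above: when the scaled mask would exceed a pixel budget, fall back from bicubic to bilinear interpolation. The class, the method and the 4-megapixel threshold are made up for the example and are not the committed patch.

{code:java}
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

public final class MaskScalingSketch
{
    // Hypothetical threshold: above this many target pixels, bicubic becomes too slow.
    private static final long BICUBIC_PIXEL_LIMIT = 4_000_000L;

    /** Scales a grayscale mask to the given size, choosing the interpolation by target size. */
    public static BufferedImage scaleMask(BufferedImage mask, int width, int height)
    {
        long targetPixels = (long) width * height;
        Object hint = targetPixels > BICUBIC_PIXEL_LIMIT
                ? RenderingHints.VALUE_INTERPOLATION_BILINEAR   // fast enough for huge masks
                : RenderingHints.VALUE_INTERPOLATION_BICUBIC;   // better quality for small ones

        BufferedImage scaled = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
        Graphics2D g = scaled.createGraphics();
        try
        {
            g.setRenderingHint(RenderingHints.KEY_INTERPOLATION, hint);
            g.drawImage(mask, 0, 0, width, height, null);
        }
        finally
        {
            g.dispose();
        }
        return scaled;
    }
}
{code}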
Attached: page 1 of https://archive.org/details/AlfaWaffenkatalog1911
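
The second step of the staggered approach (merging the mask through the sample model) can be sketched as follows; again an illustration with made-up names, not the actual code. The raster of a TYPE_INT_ARGB image exposes the alpha channel as its last band, so writing the gray mask samples into that band lets the sample model do the bit shifting and masking into the packed pixels.

{code:java}
import java.awt.image.BufferedImage;
import java.awt.image.Raster;
import java.awt.image.WritableRaster;

public final class AlphaMergeSketch
{
    /** Copies the single band of a TYPE_BYTE_GRAY mask into the alpha band of a TYPE_INT_ARGB image. */
    public static void mergeMaskIntoAlpha(BufferedImage argbImage, BufferedImage grayMask)
    {
        WritableRaster dest = argbImage.getRaster();     // bands: R, G, B, A
        Raster src = grayMask.getRaster();               // band:  gray
        int width = argbImage.getWidth();
        int alphaBand = dest.getNumBands() - 1;          // alpha is the last band
        int[] row = new int[width];
        for (int y = 0; y < argbImage.getHeight(); y++)
        {
            src.getSamples(0, y, width, 1, 0, row);           // read one mask row
            dest.setSamples(0, y, width, 1, alphaBand, row);  // write it as alpha
        }
    }
}
{code}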
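And a sketch of the fixed-point idea behind the third step: reversing the premultiplication without a floating-point division per component. The reciprocal of alpha is computed once in fixed point (15 fraction bits keep the product within int range) and applied with a multiply and a shift. The helper is illustrative; the actual matte handling in the patch follows the PDF preblending rules and may differ in detail.

{code:java}
public final class UnpremultiplySketch
{
    /**
     * Reverses premultiplication of an 8-bit component against an 8-bit alpha:
     * result = component * 255 / alpha, clamped to 0..255, using fixed-point
     * integer arithmetic (15 fraction bits) instead of floating point.
     */
    public static int unpremultiply(int component, int alpha)
    {
        if (alpha == 0)
        {
            return 0;
        }
        int reciprocal = (255 << 15) / alpha;                    // 255/alpha with 15 fraction bits
        int value = (component * reciprocal + (1 << 14)) >> 15;  // multiply, round, drop fraction
        return Math.min(255, value);
    }
}
{code}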
> Optimize reading of masked images
> ---------------------------------
>
> Key: PDFBOX-5229
> URL: https://issues.apache.org/jira/browse/PDFBOX-5229
> Project: PDFBox
> Issue Type: Improvement
> Components: Rendering
> Affects Versions: 2.0.24
> Reporter: Tilman Hausherr
> Assignee: Tilman Hausherr
> Priority: Major
> Labels: optimization
> Fix For: 2.0.25, 3.0.0 PDFBox
>
> Attachments: Alfa-p1.pdf
>