[
https://issues.apache.org/jira/browse/ORC-595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17041815#comment-17041815
]
Panagiotis Garefalakis commented on ORC-595:
--------------------------------------------
Just adding some profiling information to the patch – it seems we can avoid
branch mispredictions in Decimal64 decoding by removing the inner loop there.
As the attached profiles show, branch mispredictions drop from 26% down to 0.
> Optimize Decimal64 scale calculation
> ------------------------------------
>
> Key: ORC-595
> URL: https://issues.apache.org/jira/browse/ORC-595
> Project: ORC
> Issue Type: Improvement
> Components: encoding
> Reporter: Panagiotis Garefalakis
> Assignee: Panagiotis Garefalakis
> Priority: Critical
> Attachments: DecimalBench-Clean-scale2.log,
> DecimalBench-ORC-595-scale2.log
>
> Time Spent: 10m
> Remaining Estimate: 0h
>
> Currently Decimal64 is using an inner loop to apply the correct scale to each
> Long value it reads
> [https://github.com/apache/orc/blob/master/java/core/src/java/org/apache/orc/impl/TreeReaderFactory.java#L1294]
> A more efficient way would be to apply the scale with a single array access,
> multiplying by 10 to the power of (scale - scratchScaleVector[r]).
> A further optimization would be to keep all powers of 10 (up to 10^18) in a
> static array, reused across the runtime, instead of recalculating them each time.
> cc: [~rameshkumar] [~gopalv]
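The idea above can be sketched roughly as follows. This is an illustrative standalone class, not the actual TreeReaderFactory code; the method and class names are hypothetical, but the transformation matches the description: replace the per-value scaling loop with one lookup into a static powers-of-10 table.

```java
// Hedged sketch of the proposed Decimal64 scaling optimization
// (illustrative names; not the real ORC implementation).
public class Decimal64ScaleSketch {

  // Powers of 10 up to 10^18, the largest power that fits in a signed long.
  private static final long[] POWERS_OF_TEN = new long[19];
  static {
    POWERS_OF_TEN[0] = 1L;
    for (int i = 1; i < POWERS_OF_TEN.length; i++) {
      POWERS_OF_TEN[i] = POWERS_OF_TEN[i - 1] * 10L;
    }
  }

  // Before: scale each value with a data-dependent loop, one branch per step.
  static long scaleWithLoop(long value, int scale, int valueScale) {
    for (int s = valueScale; s < scale; s++) {
      value *= 10L;
    }
    return value;
  }

  // After: a single table lookup and one multiply, no data-dependent loop,
  // so the branch predictor has nothing to mispredict.
  static long scaleWithLookup(long value, int scale, int valueScale) {
    return value * POWERS_OF_TEN[scale - valueScale];
  }

  public static void main(String[] args) {
    // Both paths produce the same result: 12345 scaled from scale 2 to 4.
    System.out.println(scaleWithLoop(12345L, 4, 2));    // 1234500
    System.out.println(scaleWithLookup(12345L, 4, 2));  // 1234500
  }
}
```

In the real reader the loop runs per element of the batch with a value-dependent trip count, which is exactly the branch-misprediction pattern the attached profiles capture; the lookup version does the same work with a fixed instruction sequence.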
--
This message was sent by Atlassian Jira
(v8.3.4#803005)