[
https://issues.apache.org/jira/browse/HBASE-26635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Yutong Xiao updated HBASE-26635:
--------------------------------
Description:
Currently, when we decode a byte array to a BigDecimal, we perform many
BigDecimal operations, which create a lot of short-lived, single-use BigDecimal
objects. Furthermore, these BigDecimal calculations are slow. We could speed
this function up by concatenating the digits into a String and then moving the
decimal point of a single BigDecimal. The JMH benchmark results are attached.
I also added a UT that verifies the encoding / decoding correctness of 200
random test samples.
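A minimal sketch of the idea (not the actual OrderedBytes patch): the digit
layout, the exp parameter, and the class/method names below are assumptions
made only to contrast repeated BigDecimal arithmetic with building the digit
String once and shifting the decimal point.

{code:java}
import java.math.BigDecimal;

public class DecodeNumericSketch {

  /** Baseline: one BigDecimal multiply/add per base-100 digit, which
   *  allocates many short-lived BigDecimal objects. */
  static BigDecimal decodeWithBigDecimalOps(int[] centimalDigits, int exp) {
    BigDecimal hundred = BigDecimal.valueOf(100);
    BigDecimal m = BigDecimal.ZERO;
    BigDecimal scale = BigDecimal.ONE;
    for (int d : centimalDigits) {
      scale = scale.divide(hundred);                  // 100^-i, exact in decimal
      m = m.add(BigDecimal.valueOf(d).multiply(scale));
    }
    return m.scaleByPowerOfTen(2 * exp);              // apply base-100 exponent
  }

  /** Proposed idea: build the digits as a String once, construct a single
   *  BigDecimal, and move the decimal point instead of multiplying. */
  static BigDecimal decodeWithStringAndPointMove(int[] centimalDigits, int exp) {
    StringBuilder sb = new StringBuilder(2 * centimalDigits.length);
    for (int d : centimalDigits) {
      if (d < 10) {
        sb.append('0');                               // two chars per base-100 digit
      }
      sb.append(d);
    }
    // Significand is a fraction in [0, 1): shift left by the number of
    // decimal digits, then right by the (base-100) exponent.
    return new BigDecimal(sb.toString())
        .movePointLeft(sb.length())
        .movePointRight(2 * exp);
  }

  public static void main(String[] args) {
    int[] digits = {12, 34, 5};                       // significand 0.123405 in base 100
    int exp = 2;                                      // value = 0.123405 * 100^2
    System.out.println(decodeWithBigDecimalOps(digits, exp));      // 1234.05
    System.out.println(decodeWithStringAndPointMove(digits, exp)); // 1234.05
  }
}
{code}

Both paths produce the same value; the second allocates one StringBuilder and
one BigDecimal instead of a BigDecimal per digit, which is where the JMH
numbers in the attachment come from.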
was:Currently, when we decode a byte array to a BigDecimal, we perform many
BigDecimal operations, which create a lot of short-lived, single-use BigDecimal
objects. Furthermore, these BigDecimal calculations are slow. We could speed
this function up by concatenating the digits into a String and then moving the
decimal point of a single BigDecimal. The JMH benchmark results are attached.
> Optimize decodeNumeric in OrderedBytes
> --------------------------------------
>
> Key: HBASE-26635
> URL: https://issues.apache.org/jira/browse/HBASE-26635
> Project: HBase
> Issue Type: Improvement
> Components: Performance
> Reporter: Yutong Xiao
> Assignee: Yutong Xiao
> Priority: Major
> Attachments: Benchmark-decoding.log
>
>
> Currently, when we decode a byte array to a BigDecimal, we perform many
> BigDecimal operations, which create a lot of short-lived, single-use
> BigDecimal objects. Furthermore, these BigDecimal calculations are slow. We
> could speed this function up by concatenating the digits into a String and
> then moving the decimal point of a single BigDecimal. The JMH benchmark
> results are attached.
> I also added a UT that verifies the encoding / decoding correctness of 200
> random test samples.