[ 
https://issues.apache.org/jira/browse/PHOENIX-2417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2417:
----------------------------------
    Attachment: PHOENIX-2417_encoder.diff

Here are encoder and decoder classes without Parquet dependencies. See the test 
for an example of usage. The remaining work is to adjust the GuidePostsInfo 
class to
- use the PrefixByteEncoder to produce the bytes of the guideposts instead of 
having a List<byte[]> member variable that stores the byte arrays for all 
guideposts explicitly.
- store the number of guideposts encoded in a separate member variable and 
replace calls to getGuidePosts().size() with this instead.
- store the max byte array length in a separate member variable to use at 
decode time.
- replace code in BaseResultIterators that iterates through the guideposts with 
usage of PrefixByteDecoder instead.
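To illustrate the idea behind the items above, here's a rough sketch of prefix byte encoding: each guidepost is written as the length of the prefix it shares with the previous key plus its remaining suffix, and decoding reuses a buffer sized by the max key length (the member variable mentioned above). The class and method names here are made up for the sketch; the actual PrefixByteEncoder/PrefixByteDecoder in the attached diff may differ.

```java
import java.io.*;
import java.util.*;

// Illustrative prefix encoder/decoder for sorted row keys.
// Each key is stored as (sharedPrefixLen, suffixLen, suffixBytes)
// relative to the previous key, so common leading bytes are written once.
public class PrefixEncoderSketch {
    public static byte[] encode(List<byte[]> sortedKeys) {
        try {
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(baos);
            byte[] prev = new byte[0];
            for (byte[] key : sortedKeys) {
                // Count leading bytes shared with the previous key.
                int shared = 0;
                int max = Math.min(prev.length, key.length);
                while (shared < max && prev[shared] == key[shared]) shared++;
                out.writeInt(shared);
                out.writeInt(key.length - shared);
                out.write(key, shared, key.length - shared);
                prev = key;
            }
            return baos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e); // ByteArrayOutputStream won't actually throw
        }
    }

    public static List<byte[]> decode(byte[] encoded, int count, int maxLength) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(encoded));
            List<byte[]> keys = new ArrayList<>();
            byte[] prev = new byte[0];
            // Single reusable buffer sized by the max key length stored at encode time.
            byte[] buf = new byte[maxLength];
            for (int i = 0; i < count; i++) {
                int shared = in.readInt();
                int suffixLen = in.readInt();
                System.arraycopy(prev, 0, buf, 0, shared);
                in.readFully(buf, shared, suffixLen);
                byte[] key = Arrays.copyOf(buf, shared + suffixLen);
                keys.add(key);
                prev = key;
            }
            return keys;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Since the keys are sorted, each key typically shares most of its bytes with its predecessor, so only short suffixes get stored; the trade-off is that decoding is sequential from the first guidepost.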

WDYT, [~samarthjain]?

> Compress memory used by row key byte[] of guideposts
> ----------------------------------------------------
>
>                 Key: PHOENIX-2417
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-2417
>             Project: Phoenix
>          Issue Type: Sub-task
>            Reporter: James Taylor
>            Assignee: Samarth Jain
>             Fix For: 4.7.0
>
>         Attachments: PHOENIX-2417_encoder.diff
>
>
> We've found that smaller guideposts are better in terms of minimizing any 
> increase in latency for point scans. However, this increases the amount of 
> memory significantly when caching the guideposts on the client. Guideposts are 
> equidistant row keys in the form of raw byte[] which are likely to have a 
> large percentage of their leading bytes in common (as they're stored in 
> sorted order). We should use a simple compression technique to mitigate this. 
> I noticed that Apache Parquet has a run length encoding - perhaps we can use 
> that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
