[
https://issues.apache.org/jira/browse/HBASE-11827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14118760#comment-14118760
]
Andrew Purtell commented on HBASE-11827:
----------------------------------------
bq. Is there no alternate way apart from triggering major compaction? Because
with several TB of data, rewriting all the data again doesn't seem to be a good
idea :)
One thing that comes to mind is a set of related changes:
1. Compaction policy is pluggable. Introduce a compaction policy plugin that
selects unencrypted HFiles while avoiding a rewrite of already encrypted files
(see the first sketch after this list).
2. Extend the compaction request API so that a compaction policy can be
specified per request.
3. Add a compaction driver to EncryptionUtil in hbase-client that uses the
above changes to submit compaction requests for unencrypted bulk-loaded data
(see the second sketch below).
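For illustration, a minimal sketch of the selection step such a policy plugin
could perform, not a committed implementation. It assumes the 0.98-era reader
API (HFile.createReader, HFileContext.getEncryptionContext,
Encryption.Context.NONE); the class and method names below are made up for the
example and should be checked against the target HBase version.
{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.io.crypto.Encryption;
import org.apache.hadoop.hbase.io.hfile.CacheConfig;
import org.apache.hadoop.hbase.io.hfile.HFile;

public class UnencryptedFileSelector {

  /**
   * Keeps only the files whose HFile context reports no encryption, i.e.
   * the ones the proposed policy would select for rewriting; already
   * encrypted files are skipped so they are not rewritten.
   */
  public static List<Path> selectUnencrypted(FileSystem fs, Configuration conf,
      Collection<Path> candidates) throws IOException {
    CacheConfig cacheConf = new CacheConfig(conf);
    List<Path> selected = new ArrayList<Path>();
    for (Path path : candidates) {
      HFile.Reader reader = HFile.createReader(fs, path, cacheConf, conf);
      try {
        // Files written without an encryption context report NONE.
        if (reader.getFileContext().getEncryptionContext()
            == Encryption.Context.NONE) {
          selected.add(path);
        }
      } finally {
        reader.close();
      }
    }
    return selected;
  }
}
{code}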
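And a minimal sketch of the driver in item 3. Until item 2 lands there is no
way to attach a policy to the request, so this falls back to a plain major
compaction of the whole table via HBaseAdmin; the class name is hypothetical.
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class EncryptedCompactionDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    try {
      // With the extended request API this could name the policy above and
      // touch only unencrypted files; today it rewrites the whole table.
      admin.majorCompact(args[0]);
    } finally {
      admin.close();
    }
  }
}
{code}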
> Encryption support for bulkloading data into table with encryption configured
> for hfile format 3
> ------------------------------------------------------------------------------------------------
>
> Key: HBASE-11827
> URL: https://issues.apache.org/jira/browse/HBASE-11827
> Project: HBase
> Issue Type: Improvement
> Components: mapreduce
> Affects Versions: 0.98.5
> Reporter: Kashif J S
> Assignee: Kashif J S
> Fix For: 2.0.0, 0.98.7
>
> Attachments: HBASE-11827-98-v1.patch, HBASE-11827-trunk-v1.patch
>
>
> The solution would be to add support to auto-detect encryption parameters,
> similar to other parameters like compression, data block encoding, etc., when
> encryption is enabled for HFile format 3.
> The current patch does the following:
> 1. Automatically detects the encryption type and key in HFileOutputFormat &
> HFileOutputFormat2.
> 2. Uses a Base64 encoder/decoder for URL passing of the encryption key, which
> is in byte format (a sketch of that round trip follows below).
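> As illustration, a minimal sketch of that Base64 round trip through the job
> configuration, assuming the key has already been wrapped to a byte[] (e.g.
> via EncryptionUtil.wrapKey); the class and property names here are
> hypothetical:
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.util.Base64;
>
> public class KeyTransport {
>   // Hypothetical property name, for illustration only.
>   static final String KEY_PROP = "hbase.example.crypto.wrapped.key";
>
>   static void storeKey(Configuration conf, byte[] wrappedKey) {
>     // Base64 makes the raw key bytes safe to carry as a config/URL string.
>     conf.set(KEY_PROP, Base64.encodeBytes(wrappedKey));
>   }
>
>   static byte[] loadKey(Configuration conf) {
>     // Decode back to the wrapped key bytes on the other side.
>     return Base64.decode(conf.get(KEY_PROP));
>   }
> }
> {code}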