[
https://issues.apache.org/jira/browse/HBASE-8949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13713495#comment-13713495
]
Anoop Sam John commented on HBASE-8949:
---------------------------------------
The change looks reasonable to me. My only concern: before this patch, the
block size used for all data imports was the value of
"hbase.mapreduce.hfileoutputformat.blocksize". If someone has changed this
conf value in the xml to control the block size during bulk load, from now
on that won't get applied, right?
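The concern above is a precedence question: which value wins when both the conf property and a per-family block size exist? A minimal plain-Java sketch of the intended ordering (no HBase types; `resolveBlockSize` and the map-based conf are hypothetical simplifications of what the patch actually does inside HFileOutputFormat):

```java
import java.util.HashMap;
import java.util.Map;

public class BlockSizeResolution {
    static final String CONF_KEY = "hbase.mapreduce.hfileoutputformat.blocksize";
    static final int DEFAULT_BLOCK_SIZE = 64 * 1024; // HFile default: 64 KB

    /**
     * Resolve the block size for one column family. After the patch, a
     * per-family value taken from the table descriptor wins over the global
     * conf value, which in turn wins over the 64 KB default.
     */
    static int resolveBlockSize(Map<String, String> conf, Integer familyBlockSize) {
        if (familyBlockSize != null) {
            return familyBlockSize;             // table descriptor wins
        }
        String confValue = conf.get(CONF_KEY);
        if (confValue != null) {
            return Integer.parseInt(confValue); // user-set conf value
        }
        return DEFAULT_BLOCK_SIZE;              // fall back to the default
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put(CONF_KEY, String.valueOf(128 * 1024));

        // Family with its own block size in the table descriptor: 16 KB wins.
        System.out.println(resolveBlockSize(conf, 16 * 1024));
        // Family without one: the conf value applies.
        System.out.println(resolveBlockSize(conf, null));
        // Neither set: default 64 KB.
        System.out.println(resolveBlockSize(new HashMap<>(), null));
    }
}
```

Under this ordering, Anoop's scenario (conf set in xml, descriptor also carrying a block size) would indeed take the descriptor's value, not the conf's.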
> hbase.mapreduce.hfileoutputformat.blocksize should configure with blocksize
> of a table
> --------------------------------------------------------------------------------------
>
> Key: HBASE-8949
> URL: https://issues.apache.org/jira/browse/HBASE-8949
> Project: HBase
> Issue Type: Bug
> Components: mapreduce
> Reporter: rajeshbabu
> Assignee: rajeshbabu
> Fix For: 0.98.0, 0.95.2, 0.94.11
>
> Attachments: HBASE-8949_94.patch, HBASE-8949_trunk.patch
>
>
> While initializing the mapreduce job we are not configuring
> hbase.mapreduce.hfileoutputformat.blocksize, so hfiles are always created
> with the 64KB (default) block size even when tables have a different block
> size. We need to configure it with the block size from the table descriptor.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira