Partitioner class not used in TableMapReduceUtil.initTableReduceJob()
---------------------------------------------------------------------
Key: HBASE-1287
URL: https://issues.apache.org/jira/browse/HBASE-1287
Project: Hadoop HBase
Issue Type: Bug
Components: mapred
Reporter: Lars George
Assignee: Lars George
Upon checking the available utility methods in TableMapReduceUtil, I came across
this code:
{code}
public static void initTableReduceJob(String table,
  Class<? extends TableReduce> reducer, JobConf job, Class partitioner)
throws IOException {
  job.setOutputFormat(TableOutputFormat.class);
  job.setReducerClass(reducer);
  job.set(TableOutputFormat.OUTPUT_TABLE, table);
  job.setOutputKeyClass(ImmutableBytesWritable.class);
  job.setOutputValueClass(BatchUpdate.class);
  if (partitioner != null) {
    job.setPartitionerClass(HRegionPartitioner.class);
    HTable outputTable = new HTable(new HBaseConfiguration(job), table);
    int regions = outputTable.getRegionsInfo().size();
    if (job.getNumReduceTasks() > regions) {
      job.setNumReduceTasks(outputTable.getRegionsInfo().size());
    }
  }
}
{code}
It seems, though, that it should be
{code}
  if (partitioner != null) {
    job.setPartitionerClass(partitioner);
{code}
so that the bundled HRegionPartitioner can be handed in to that call, or a custom
partitioner can be supplied instead; see the usage sketch below.
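For illustration, a caller could then pick the partitioner explicitly. This is only a
sketch of how such a call might look; "mytable", MyTableReduce and MyPartitioner are
hypothetical placeholders for the caller's own table name, reducer and Partitioner
implementation, not classes shipped with HBase:
{code}
JobConf job = new JobConf(new HBaseConfiguration());

// pass the region-aware partitioner that ships with HBase
TableMapReduceUtil.initTableReduceJob("mytable", MyTableReduce.class, job,
  HRegionPartitioner.class);

// or hand in a caller-defined partitioner instead
TableMapReduceUtil.initTableReduceJob("mytable", MyTableReduce.class, job,
  MyPartitioner.class);
{code}
Either way the method would honour the argument, whereas today the argument only acts
as a flag that switches HRegionPartitioner on.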