Hello,

We run M/R jobs that parse and process large, highly complex XML files
into Avro files, and then build external Hive tables on top of the parsed
Avro files. The Hive tables are partitioned by day, but the partitions are
still huge and joins do not perform well, so I would like to try bucketing
on the join key. How do I create the buckets on the existing HDFS files?
If at all possible, I would prefer to avoid creating a second set of
(bucketed) tables and loading the data from the non-bucketed tables into
them. Is it possible to do the bucketing in Java as part of the M/R jobs
while creating the Avro files?
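For context, here is a rough sketch of what I have in mind for the M/R
side. It assumes Hive's default bucketing assigns a row to bucket
(hash & Integer.MAX_VALUE) % numBuckets, with a string key hashed via
java.lang.String.hashCode(); please correct me if that is wrong. The class
name and the Text/Avro key-value types are just placeholders for
illustration:

import org.apache.avro.generic.GenericRecord;
import org.apache.avro.mapred.AvroValue;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Hypothetical partitioner intended to mimic Hive's default bucketing.
// Assumes the map output key is the join key as Text and the value is the
// parsed Avro record.
public class HiveBucketPartitioner
        extends Partitioner<Text, AvroValue<GenericRecord>> {

    @Override
    public int getPartition(Text joinKey, AvroValue<GenericRecord> record,
                            int numPartitions) {
        // Text.hashCode() differs from String.hashCode(), so convert first;
        // mask off the sign bit before taking the modulus, as Hive does.
        int hash = joinKey.toString().hashCode();
        return (hash & Integer.MAX_VALUE) % numPartitions;
    }
}

I would then set job.setPartitionerClass(HiveBucketPartitioner.class) and
job.setNumReduceTasks(numBuckets), so each reducer writes exactly one Avro
file per bucket, and declare the external table with
CLUSTERED BY (join_key) INTO numBuckets BUCKETS. My worry is that Hive
trusts the table metadata: if the file layout does not actually match the
declared bucketing, joins that exploit buckets could give wrong results.
Is this approach sound, or is there a better way?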

Any help or insight would be greatly appreciated.

Thank you very much for your time and help.

Sadu
