On Thu, Sep 20, 2012 at 10:36 AM, John Edstrom
<[email protected]> wrote:
> Apologies, this was sent before I had finished writing it :X
>
> The stack trace is below, but what we are attempting to do is load data
> into HBase via MapReduce. When we're doing the load, we write the HFiles
> using HFileOutputFormat and then bulk load them into HBase. At the start of
> the MapReduce, we don't have the region splits available to us, so all our
> data is getting written to a handful of regions. Therefore, we are
> attempting to manually split the regions before the MapReduce job so that
> the HFiles will be evenly written across many regions.
>
> Any help or guidance would be appreciated.
>

What version, John?

Could you use this method when you create the table:
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html#createTable(org.apache.hadoop.hbase.HTableDescriptor, byte[][])
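
A minimal sketch of what that could look like, assuming a 0.92/0.94-era client API (you haven't said which version you're on); the table name, column family, and split keys below are placeholders, and the split points should come from whatever you know about your row-key distribution:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class CreatePreSplitTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);

    // Hypothetical table and column family names.
    HTableDescriptor desc = new HTableDescriptor("mytable");
    desc.addFamily(new HColumnDescriptor("cf"));

    // Placeholder split keys; in practice derive these from the
    // row-key distribution of the data being loaded.
    byte[][] splits = new byte[][] {
      Bytes.toBytes("row-250000"),
      Bytes.toBytes("row-500000"),
      Bytes.toBytes("row-750000"),
    };

    // Creates the table with four regions instead of one, so the
    // bulk-loaded HFiles are spread across region servers.
    admin.createTable(desc, splits);
    admin.close();
  }
}

Once the table exists with those splits, HFileOutputFormat.configureIncrementalLoad(job, table) should pick up the current region boundaries when it configures the partitioner, so the reducers write HFiles that line up with the regions.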

St.Ack
