Please take a look at WALPlayer.java in HBase, where you can find an example
of how MultiTableOutputFormat is used.
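
The essential pattern is that the job's output format is set to
MultiTableOutputFormat and the mapper emits the destination table name as the
output key of each write. Below is a rough sketch of that pattern, not the
actual WALPlayer code (WALPlayer reads WAL files rather than a table, and the
class, table, and column family names here are made up for illustration):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.MultiTableOutputFormat;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.mapreduce.Job;

public class MultiTableSketch {

  static class TwoTableMapper extends TableMapper<ImmutableBytesWritable, Put> {
    // The output key names the destination table for each Put.
    private final ImmutableBytesWritable tableA =
        new ImmutableBytesWritable(Bytes.toBytes("tableA"));
    private final ImmutableBytesWritable tableB =
        new ImmutableBytesWritable(Bytes.toBytes("tableB"));

    @Override
    public void map(ImmutableBytesWritable row, Result value, Context context)
        throws IOException, InterruptedException {
      Put a = new Put(value.getRow());
      a.add(Bytes.toBytes("f1"), Bytes.toBytes("q"), Bytes.toBytes("v"));
      context.write(tableA, a); // routed to table "tableA"

      Put b = new Put(value.getRow());
      b.add(Bytes.toBytes("f2"), Bytes.toBytes("q"), Bytes.toBytes("v"));
      context.write(tableB, b); // routed to table "tableB"
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = new Job(conf, "multi-table-sketch");
    job.setJarByClass(MultiTableSketch.class);

    Scan scan = new Scan();
    scan.setCacheBlocks(false);

    // Wires up the *input* table and the mapper.
    TableMapReduceUtil.initTableMapperJob("sourceTable", scan,
        TwoTableMapper.class, ImmutableBytesWritable.class, Put.class, job);

    // Map-only job; MultiTableOutputFormat routes each Put by its key.
    // Note there is no initTableReducerJob call here: that call installs a
    // single-table TableOutputFormat and would override this output format.
    job.setOutputFormatClass(MultiTableOutputFormat.class);
    job.setNumReduceTasks(0);

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

The key passed to context.write(), not the job configuration, decides which
table each Put lands in, so both destination tables (and their column
families) must already exist.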

Cheers


On Tue, Aug 26, 2014 at 10:04 AM, yeshwanth kumar <[email protected]>
wrote:

> Hi Ted,
>
> How can we initialize the mapper if I comment out those lines?
>
>
>
> On Tue, Aug 26, 2014 at 10:08 PM, Ted Yu <[email protected]> wrote:
>
> > TableMapReduceUtil.initTableMapperJob(otherArgs[0], scan,
> >     EntitySearcherMapper.class, ImmutableBytesWritable.class, Put.class,
> >     job); // otherArgs[0] = "i1"
> >
> > You're initializing with table 'i1'.
> > Please remove the above call and try again.
> >
> > Cheers
> >
> >
> > On Tue, Aug 26, 2014 at 9:18 AM, yeshwanth kumar <[email protected]>
> > wrote:
> >
> > > Hi, I am running HBase 0.94.20 on Hadoop 2.2.0.
> > >
> > > I am using MultiTableOutputFormat to write processed output to two
> > > different tables in HBase.
> > >
> > > Here's the code snippet:
> > >
> > > private ImmutableBytesWritable tab_cr = new ImmutableBytesWritable(
> > >     Bytes.toBytes("i1"));
> > > private ImmutableBytesWritable tab_cvs = new ImmutableBytesWritable(
> > >     Bytes.toBytes("i2"));
> > >
> > > @Override
> > > public void map(ImmutableBytesWritable row, final Result value,
> > >     final Context context) throws IOException, InterruptedException {
> > >
> > >   -----------------------------------------
> > >   Put pcvs = new Put(entry.getKey().getBytes());
> > >   pcvs.add("cf".getBytes(), "type".getBytes(), column.getBytes());
> > >   Put put = new Put(value.getRow());
> > >   put.add("Entity".getBytes(), "json".getBytes(),
> > >       entry.getValue().getBytes());
> > >   context.write(tab_cr, put);   // table i1
> > >   context.write(tab_cvs, pcvs); // table i2
> > > }
> > >
> > > job.setJarByClass(EntitySearcherMR.class);
> > > job.setMapperClass(EntitySearcherMapper.class);
> > > job.setOutputFormatClass(MultiTableOutputFormat.class);
> > > Scan scan = new Scan();
> > > scan.setCacheBlocks(false);
> > > TableMapReduceUtil.initTableMapperJob(otherArgs[0], scan,
> > >     EntitySearcherMapper.class, ImmutableBytesWritable.class, Put.class,
> > >     job); // otherArgs[0] = "i1"
> > > TableMapReduceUtil.initTableReducerJob(otherArgs[0], null, job);
> > > job.setNumReduceTasks(0);
> > >
> > > The MapReduce job fails with a NoSuchColumnFamilyException for "cf" in
> > > table i1.
> > > I am writing data to two different column families, one in each table;
> > > "cf" belongs to table i2.
> > > Do the column families have to be present in both tables?
> > > Is there anything I am missing?
> > > Can someone point me in the right direction?
> > >
> > > Thanks,
> > > yeshwanth.
> > >
> >
>
