Hi J-D,

Thanks for the info.
I tried this but ended up with the following error. Any ideas?

Exception in thread "main" java.io.IOException: Please specify the peer cluster as hbase.zookeeper.quorum:zookeeper.znode.parent
        at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableReducerJob(TableMapReduceUtil.java:172)
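
Going by that message, it looks like the peer cluster only gets picked up when the initTableReducerJob overload that takes a quorumAddress argument is used, and the address has to be the ZooKeeper quorum followed by the parent znode. A minimal sketch, assuming the 0.90 overload and the default /hbase parent znode (the quorum value is just the one from the earlier mail, so treat it as a guess rather than a confirmed fix):

        // Sketch only: send reduce output to a peer cluster by passing its
        // ZooKeeper quorum plus parent znode as the quorumAddress argument.
        // "44.128.161.86:/hbase" is an assumed value, not a verified one.
        TableMapReduceUtil.initTableReducerJob(
            "MyClass",               // output table on the peer cluster
            Reducer1.class,          // reducer
            job,
            null,                    // partitioner (use the default)
            "44.128.161.86:/hbase",  // peer as quorum:znode.parent
            null,                    // serverClass (default)
            null);                   // serverImpl (default)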

Regards

Stuart

-----Original Message-----
From: [email protected] [mailto:[email protected]] On Behalf Of Jean-Daniel 
Cryans
Sent: 30 March 2011 17:34
To: [email protected]
Subject: Re: Changing Zookeeper address programmatically for reduces

That's basically what CopyTable does if I understand your need properly:

https://github.com/apache/hbase/blob/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java
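
Roughly, going by the usage string in that class, the invocation would look something like the line below (the table names, the quorum, and the /hbase parent znode are placeholders taken from your mail, not tested):

  $ bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
      --new.name=MyClass --peer.adr=44.128.161.86:/hbase MPI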

J-D

On Wed, Mar 30, 2011 at 8:34 AM, Stuart Scott <[email protected]> wrote:
> Hi,
>
> I have a map/reduce job that reads from one HBase table and writes to
> another.
>
> Does anyone know how to programmatically set the ZooKeeper address
> for a Reducer?
>
> I can create a job as below and set the IP address using .set...; that works
> fine for the Mapper, but the Reducer defaults to localhost. We have a
> couple of clusters and want to be able to connect to them at different
> times.
>
>        HBaseConfiguration conf = new HBaseConfiguration();
>        conf.set("hbase.master", "44.128.161.86:60000");
>        conf.set("hbase.zookeeper.quorum", "44.128.161.86");
>
>        ...
>
>        Job job = new Job(conf, "MyJob_Job");
>        job.setJarByClass(MyClass.class);
>
>        Scan scan = new Scan();
>        String columns = "A:B";
>        scan.addColumns(columns);
>        scan.setStartRow(Bytes.toBytes(GetPaddedLongKey(1)));
>        scan.setStopRow(Bytes.toBytes(GetPaddedLongKey(100+1)));
>
>        TableMapReduceUtil.initTableMapperJob("MPI", scan, Mapper2.class,
>                ImmutableBytesWritable.class, IntWritable.class, job);
>        TableMapReduceUtil.initTableReducerJob("MyClass", Reducer1.class, job);
>
>        job.waitForCompletion(true);
>
> The mapper runs without problems, then it fails:
>
> 11/03/30 16:19:50 INFO mapred.LocalJobRunner:
> 11/03/30 16:19:50 INFO zookeeper.ZooKeeperWrapper: Reconnecting to zookeeper
> 11/03/30 16:19:50 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:21810 sessionTimeout=60000 watcher=org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper@ae97c4
> 11/03/30 16:19:50 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:21810
>
> Closing socket connection and attempting reconnect
> java.net.ConnectException: Connection refused
>        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
>        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1078)
>
> 2011-03-30 16:30:14,637 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:21810
> 2011-03-30 16:30:14,638 WARN org.apache.zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
> java.net.ConnectException: Connection refused
>        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>
> Any help would be really appreciated.
>
> Stuart
