Gopal,

No, just follow the steps at http://hbase.apache.org/book.html#hadoop
to get it working. It is important to read that whole section (as the
section itself states).

To quote the last paragraph specifically: "Because HBase depends on
Hadoop, it bundles an instance of the Hadoop jar under its lib
directory. The bundled jar is ONLY for use in standalone mode. In
distributed mode, it is critical that the version of Hadoop that is
out on your cluster match what is under HBase. Replace the hadoop jar
found in the HBase lib directory with the hadoop jar you are running
on your cluster to avoid version mismatch issues. Make sure you
replace the jar in HBase everywhere on your cluster. Hadoop version
mismatch issues have various manifestations but often all looks like
its hung up."

On Tue, Mar 6, 2012 at 9:41 AM, Gopal <[email protected]> wrote:
> On 3/5/2012 11:07 PM, Harsh J wrote:
>>
>> 1. What exactly is incompatible with a 0.20.203 mix? What error made
>
> java.io.IOException: Call to master/192.168.1.76:56310 failed on local exception: java.io.EOFException
>        at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
>        at org.apache.hadoop.ipc.Client.call(Client.java:743)
>        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>        at $Proxy6.getProtocolVersion(Unknown Source)
>        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>        at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
>        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
>        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
>        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
>        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
>        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:175)
>        at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:363)
>        at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:81)
>        at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:342)
>        at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:279)
>        at org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMaster.run(HMasterCommandLine.java:193)
>        at java.lang.Thread.run(Thread.java:662)
>
> The moment I upgrade Hadoop to 0.20.203, I start getting the errors
> above, with everything else being the same.
>
> I can generate a test case on demand. Do you want me to give it a
> shot again?
>



-- 
Harsh J
