Dear List,

We're trying to use a central HDFS store that can be accessed from various other Hadoop distributions.

Do you think this is possible? We're having trouble, though it does not appear to be related to different RPC versions.
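
For context, this is essentially what our client does (hostname and port are placeholders; this is only a sketch of the access pattern, not our actual code):

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CentralHdfsAccess {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Point the client at the central NameNode (placeholder URI).
        FileSystem fs = FileSystem.get(
                URI.create("hdfs://namenode.example.com:8020/"), conf);
        // Any simple metadata operation is enough to trigger the RPC setup.
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
    }
}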

When trying to access a Cloudera CDH3 Update 2 (cdh3u2) HDFS from BigInsights 1.3, we get the following error:

Bad connection to FS. Command aborted. Exception: Call to localhost.localdomain/127.0.0.1:50070 failed on local exception: java.io.EOFException
java.io.IOException: Call to localhost.localdomain/127.0.0.1:50070 failed on local exception: java.io.EOFException
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:1142)
        at org.apache.hadoop.ipc.Client.call(Client.java:1110)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
        at $Proxy0.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:398)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:384)
        at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:111)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:213)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:180)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
        at com.ibm.biginsights.hadoop.patch.PatchedDistributedFileSystem.initialize(PatchedDistributedFileSystem.java:19)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1514)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:67)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:1548)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1530)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:228)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:111)
        at org.apache.hadoop.fs.FsShell.init(FsShell.java:82)
        at org.apache.hadoop.fs.FsShell.run(FsShell.java:1785)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
        at org.apache.hadoop.fs.FsShell.main(FsShell.java:1939)
Caused by: java.io.EOFException
        at java.io.DataInputStream.readInt(DataInputStream.java:375)
        at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:815)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:724)


We have, however, already replaced the client-side hadoop-common JARs with the Cloudera ones.
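
A quick way to verify which Hadoop build the client actually loads at runtime is a sketch like the following (run with the same classpath as the failing shell; VersionInfo is Hadoop's own version-reporting class):

import org.apache.hadoop.util.VersionInfo;

public class ClasspathVersionCheck {
    public static void main(String[] args) {
        // Prints the version compiled into whichever Hadoop jar is first on
        // the classpath, confirming whether the Cloudera jar is picked up.
        System.out.println("Hadoop version: " + VersionInfo.getVersion());
        System.out.println("Built from revision: " + VersionInfo.getRevision());
    }
}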

Please note also that we get an EOFException and not an RPC.VersionMismatch; FsShell handles and reports these two cases separately.

FsShell.java:

        try {
            init();
        } catch (RPC.VersionMismatch v) {
            System.err.println("Version Mismatch between client and server"
                    + "... command aborted.");
            return exitCode;
        } catch (IOException e) {
            System.err.println("Bad connection to FS. command aborted.");
            System.err.println("Bad connection to FS. Command aborted. Exception: "
                    + e.getLocalizedMessage());
            e.printStackTrace();
            return exitCode;
        }
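
For what it's worth, an EOFException out of DataInputStream.readInt is what you get when the other side closes the socket without writing anything, which suggests the server drops the connection rather than answering getProtocolVersion. A trivial, Hadoop-free illustration:

import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;

public class EofDemo {
    public static void main(String[] args) throws Exception {
        // An empty stream stands in for a connection the server closed
        // without sending a response.
        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(new byte[0]));
        try {
            in.readInt(); // same call as in Client$Connection.receiveResponse
        } catch (EOFException e) {
            System.out.println("EOFException, as seen in the trace above");
        }
    }
}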

Any ideas?
