I'm encountering a Hadoop client protocol mismatch when trying to read from HDFS
(CDH3u5) using the pre-built Spark from the downloads page (linked under
"For Hadoop 1 (HDP1, CDH3)"). I've also followed the instructions at
http://spark.apache.org/docs/latest/hadoop-third-party-distributions.html
(i.e. building the app against hadoop-client 0.20.2-cdh3u5; build snippet
below), but I continue to see the following error regardless of whether I
link the app with the CDH client:

14/07/25 09:53:43 INFO client.AppClient$ClientActor: Executor updated:
app-20140725095343-0016/1 is now RUNNING
14/07/25 09:53:43 WARN util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
14/07/25 09:53:43 WARN snappy.LoadSnappy: Snappy native library not loaded
Exception in thread "main" org.apache.hadoop.ipc.RPC$VersionMismatch:
Protocol org.apache.hadoop.hdfs.protocol.ClientProtocol version mismatch.
(client = 61, server = 63)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:401)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
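
In case it helps, the app's build pins the client roughly as follows (sbt
sketch; the Cloudera resolver URL and the Spark artifact/version line are from
memory, so treat them as placeholders rather than exact values):

  resolvers += "Cloudera Repository" at
    "https://repository.cloudera.com/artifactory/cloudera-repos/"

  libraryDependencies ++= Seq(
    // Spark itself is supplied by the prebuilt distribution on the cluster
    "org.apache.spark"  %% "spark-core"    % "1.0.1" % "provided",
    // CDH3u5 client, per the third-party distributions page linked above
    "org.apache.hadoop" %  "hadoop-client" % "0.20.2-cdh3u5"
  )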


While I could build Spark against the exact Hadoop distro version, I'd rather
work with the standard pre-built binaries and make any additional changes on
the application side if necessary. Any workarounds/recommendations?
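
(FWIW, my understanding from the build docs is that compiling Spark against
the distro would amount to something along the lines of

  mvn -Dhadoop.version=0.20.2-cdh3u5 -DskipTests clean package

which is a custom build I'd prefer not to have to maintain.)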

Thanks,
Bharath
