Hi guys,
I ran into the same exception (while trying the same example), and after
overriding the hadoop-client artifact in my pom.xml, I got another error
(below).
System config:
  Ubuntu 12.04
  IntelliJ IDEA 13
  Scala 2.10.3
  Maven:
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.10</artifactId>
      <version>1.0.0</version>
    </dependency>
Yeah, unfortunately Hadoop 2 requires these binaries on Windows. Hadoop 1 runs
just fine without them.
Matei
On Jun 3, 2014, at 10:33 AM, Sean Owen wrote:
> I'd try the internet / SO first -- these are actually generic
> Hadoop-related issues. Here I think you don't have HADOOP_HOME or
> similar set.
I'd try the internet / SO first -- these are actually generic
Hadoop-related issues. Here I think you don't have HADOOP_HOME or
similar set.
http://stackoverflow.com/questions/19620642/failed-to-locate-the-winutils-binary-in-the-hadoop-binary-path
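In case it helps, the usual code-level workaround on Windows is to grab
winutils.exe and point hadoop.home.dir at its parent folder before anything
initializes Hadoop classes. A minimal sketch, assuming winutils.exe has been
placed in C:\hadoop\bin (the path is just an example):

import org.apache.spark.{SparkConf, SparkContext}

object Main {
  def main(args: Array[String]): Unit = {
    // Example path only: hadoop.home.dir must point at the folder whose
    // bin\ subdirectory contains winutils.exe, and it must be set before
    // the first Hadoop/Spark class is initialized.
    System.setProperty("hadoop.home.dir", "C:\\hadoop")

    val conf = new SparkConf().setMaster("local").setAppName("WinutilsTest")
    val sc = new SparkContext(conf)
    sc.stop()
  }
}

Setting the HADOOP_HOME environment variable to the same folder works too.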
On Tue, Jun 3, 2014 at 5:54 PM, toivoa wrote:
Wow! What a quick reply!
adding

  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.4.0</version>
  </dependency>

solved the problem.
But now I get
14/06/03 19:52:50 ERROR Shell: Failed to locate the winutils binary in the
hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the
Hadoop binaries.
Hi
Set up a project under Eclipse using Maven:

  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.10</artifactId>
    <version>1.0.0</version>
  </dependency>
Simple example fails:
import org.apache.spark.{SparkConf, SparkContext}

def main(args: Array[String]): Unit = {
  val conf = new SparkConf()
    .setMaster("local")
    .setAppName("Simple")
  val sc = new SparkContext(conf)
}
"Found class org.apache.hadoop.mapreduce.TaskAttemptContext, but
interface was expected" is the classic error meaning "you compiled
against Hadoop 1, but are running against Hadoop 2"
I think you need to override the hadoop-client artifact that Spark
depends on to be a Hadoop 2.x version.
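Something like this in your pom.xml should do it (2.4.0 is just an example;
use whatever Hadoop 2.x version matches your cluster):

  <!-- Force a Hadoop 2.x client; spark-core otherwise pulls in Hadoop 1 -->
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.4.0</version>
  </dependency>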