I modified my pom.xml according to the Spark pom.xml. It is working right now.
Hadoop2 classes are no longer packaged into my jar. Thanks.
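[Editor's note] A quick way to verify that Hadoop classes are no longer bundled is to list the contents of the built jar (a sketch; the jar path is an assumption):

```shell
# List the application jar and search for bundled Hadoop classes;
# no grep output means Hadoop is no longer packaged into the jar.
jar tf target/myapp-1.0-SNAPSHOT.jar | grep 'org/apache/hadoop' || echo "no Hadoop classes bundled"
```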
From: eyc...@hotmail.com
To: so...@cloudera.com
CC: user@spark.apache.org
Subject: RE: spark 1.1.0 save data to hdfs failed
Date: Sat, 24 Jan 2015 07:30
id of dependency? An alternative way is to modify the code
in SparkHadoopMapReduceUtil.scala and put it into my own source code to
bypass the problem. Any comment on this? Thanks.
> >
> >
> From: eyc...@hotmail.com
> To: so...@cloudera.com
> CC: user@spark.apache.org
> Subject: RE: spark 1.1.0 save data to hdfs failed
> Date: Fri, 23 Jan 2015 11:17:36 -0800
>
>
> Thanks. I looked at the dependency tree. I did not see any dependent jar
> of hadoop-core fr
Date: Fri, 23 Jan 2015 17:01:48 +
Subject: RE: spark 1.1.0 save data to hdfs failed
From: so...@cloudera.com
To: eyc...@hotmail.com
Are you receiving my replies? I have suggested a resolution. Look at the
dependency tree next.
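[Editor's note] The dependency-tree check suggested here can be done with the Maven dependency plugin (a sketch; run from the module that builds the application jar):

```shell
# Show every path by which Hadoop artifacts enter the build;
# -Dverbose also lists versions omitted during conflict resolution.
mvn dependency:tree -Dincludes=org.apache.hadoop -Dverbose
```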
On Jan 23, 2015 2:43 PM, "ey-chih chow" wrote:
> From: so...@cloudera.com
> Date: Fri, 23 Jan 2015 14:01:45 +0000
> Subject: Re: spark 1.1.0 save data to hdfs failed
> To: eyc...@hotmail.com
> CC: user@spark.apache.org
>
> These are all definitely symptoms of mixing incompatible versions of libraries.
{ try { Class.forName(first) } catch { case e: ClassNotFoundException => Class.forName(second) } }
From: eyc...@hotmail.com
To: so...@cloudera.com
CC: user@spark.apache.org
Subject: RE: spark 1.1.0 save data to hdfs failed
Date: Fri, 23 Jan 2015 06:43:00 -0800
I looked i
Context] }
In other words, it is related to hadoop2, hadoop2-yarn, and hadoop1. Any
suggestion how to resolve it?
Thanks.
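[Editor's note] The fragment quoted above is the reflection fallback in SparkHadoopMapReduceUtil.scala: it tries one class name (the Hadoop 2 API) and falls back to another (the Hadoop 1 API), so whichever Hadoop jars happen to be on the classpath decide which branch wins. A self-contained sketch of the pattern, with stand-in class names:

```scala
// Sketch of the "first available class" pattern. The class names used
// in main are stand-ins; Spark passes the Hadoop 2 and Hadoop 1 names
// of the task-attempt context classes.
object FirstAvailable {
  def firstAvailableClass(first: String, second: String): Class[_] =
    try {
      Class.forName(first)
    } catch {
      case _: ClassNotFoundException => Class.forName(second)
    }

  def main(args: Array[String]): Unit = {
    // "no.such.Class" does not exist, so the fallback is used.
    val c = firstAvailableClass("no.such.Class", "java.lang.String")
    println(c.getName) // prints "java.lang.String"
  }
}
```

This is why mixing Hadoop 1 and Hadoop 2 jars in one application jar fails in confusing ways: the first lookup can succeed against a bundled Hadoop 2 class even when the cluster itself runs Hadoop 1.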
> From: so...@cloudera.com
> Date: Fri, 23 Jan 2015 14:01:45 +0000
> Subject: Re: spark 1.1.0 save data to hdfs failed
> To: eyc...@hotmail.com
> CC: user@spark.apache.org
These are all definitely symptoms of mixing incompatible versions of libraries.
I'm not suggesting you haven't excluded Spark / Hadoop, but, this is
not the only way Hadoop deps get into your app. See my suggestion
about investigating the dependency tree.
On Fri, Jan 23, 2015 at 1:53 PM, ey-chih chow wrote:
> Date: Fri, 23 Jan 2015 10:41:12 +
> Subject: Re: spark 1.1.0 save data to hdfs failed
> To: eyc...@hotmail.com
> CC: user@spark.apache.org
>
So, you should not depend on Hadoop artifacts unless you use them
directly. You should mark Hadoop and Spark deps as provided. Then the
cluster's version is used at runtime with spark-submit. That's the
usual way to do it, which works.
If you need to embed Spark in your app and are running it outside
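[Editor's note] In pom.xml terms, "mark Hadoop and Spark deps as provided" looks roughly like this (a sketch; the artifact versions are assumptions based on the versions mentioned in this thread):

```xml
<!-- provided scope: compile against these, but do not package them
     into the application jar; the cluster supplies them at runtime. -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.10</artifactId>
  <version>1.1.0</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>1.0.4</version>
  <scope>provided</scope>
</dependency>
```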
com.crowdstar.etl.ParseAndClean.main(ParseAndClean.scala)
... 6 more

From: eyc...@hotmail.com
To: so...@cloudera.com
CC: yuzhih...@gmail.com; user@spark.apache.org
Subject: RE: spark 1.1.0 save data to hdfs failed
Date: Thu, 22 Jan 2015 17:05:26 -0800
Thanks. But after I replaced the Maven dependency from
> From: so...@cloudera.com
> Date: Thu, 22 Jan 2015 22:34:22 +
> Subject: Re: spark 1.1.0 save data to hdfs failed
> To: eyc...@hotmail.com
> CC: yuzhih...@gmail.com; user@spark.apache.org
>
> It means your client app is using Hadoop 2.x and your HDFS is Hadoop 1.x.
>
> On Thu, Jan 22
> mismatch from 10.33.140.233:53776 got version 9 expected version 4
>
> What should I do to fix this?
>
> Thanks.
>
> Ey-Chih
>
>
> From: eyc...@hotmail.com
> To: yuzhih...@gmail.com
> CC: user@spark.apache.org
> Subject: RE: spark 1.1.0 save data to hdfs failed
To: yuzhih...@gmail.com
CC: user@spark.apache.org
Subject: RE: spark 1.1.0 save data to hdfs failed
Date: Wed, 21 Jan 2015 23:12:56 -0800
The hdfs release should be hadoop 1.0.4.
Ey-Chih Chow
Date: Wed, 21 Jan 2015 16:56:25 -0800
Subject: Re: spark 1.1.0 save data to hdfs failed
From: yuzhih...@gmail.com
To: eyc...@hotmail.com
CC: user@spark.apache.org
What hdfs release are you using?
Can you check the namenode log around the time of the error below to see
if there is some clue?
Cheers
On Wed, Jan 21, 2015 at 4:51 PM, ey-chih chow wrote:
> Hi,
>
> I used the following fragment of a scala program to save data to hdfs:
>
> contextAwareEvents
> .