Hi everyone,
When I compile Spark against Hadoop version 2.6.0-cdh5.13.1, my compile command is
./dev/make-distribution.sh --name 2.6.0-cdh5.13.1 --pip --tgz -Phive
-Phive-thriftserver -Pyarn -Dhadoop.version=2.6.0-cdh5.13.1
and I get an error like this:
[INFO] --- scala-maven-plugin:4.3.0:compile
After upgrading from Apache Spark 2.1.1 to 2.2.0, our integration tests fail with
an exception:
java.lang.IllegalAccessError: tried to access method
com.google.common.base.Stopwatch.<init>()V from class
org.apache.hadoop.mapred.FileInputFormat
See
https://spark.apache.org/docs/2.1.0/building-spark.html#specifying-the-hadoop-version
Hadoop 2.2.0 is only the default build version; other versions can still be
built. The package you downloaded is prebuilt for Hadoop 2.7, as stated on the
download page, so don't worry.
Yohann Jardin
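This IllegalAccessError is usually a Guava version conflict rather than a Spark bug: Hadoop's FileInputFormat was compiled against an older Guava in which Stopwatch's no-arg constructor was still public, while later Guava releases made it package-private. If you build against Spark with sbt, one common workaround (a sketch, not from this thread; the exact Guava version is an assumption you must match to your Hadoop distribution) is to force a single Guava version that both sides tolerate:

```scala
// build.sbt sketch (hypothetical version number): force one Guava version
// whose Stopwatch constructor is still accessible to older Hadoop classes.
dependencyOverrides += "com.google.guava" % "guava" % "14.0.1"
```

Alternatively, rebuilding Spark against the exact Hadoop version of the cluster, as suggested elsewhere in this thread, avoids the mismatch at the source.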
after an HDFS node restarted). I have been digging into this issue and have
started to suspect a version mismatch between the Hadoop server and client. I
decided to look at Spark 2.1.1's pom.xml; it states hadoop.version as
2.2.0. There seems to be some mismatch here, but I am not sure if that's the
Hi,
I am developing an application based on Spark 1.6. My library dependencies are
just:
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.6.0"
)
This uses Hadoop 2.2.0 as the default Hadoop version, which is not my
preference. I want to change the Hadoop
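spark-core pulls in its default hadoop-client transitively. One way to override it in sbt (a sketch, not an answer from this thread; Hadoop 2.6.0 is a placeholder version you would replace with the one your cluster actually runs) is to exclude Spark's transitive Hadoop dependency and declare your own:

```scala
// build.sbt sketch (hypothetical Hadoop version): replace the Hadoop client
// that spark-core 1.6.0 pulls in by default (2.2.0) with an explicit one.
libraryDependencies ++= Seq(
  ("org.apache.spark" %% "spark-core" % "1.6.0")
    .exclude("org.apache.hadoop", "hadoop-client"),
  "org.apache.hadoop" % "hadoop-client" % "2.6.0"
)
```

Note that the client you declare must stay wire-compatible with the Hadoop servers, or you trade a build problem for a runtime one.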
...@cloudera.com]
Sent: Thursday, February 12, 2015 12:13 AM
To: Akhil Das
Cc: Michael Nazario; user@spark.apache.org
Subject: Re: PySpark 1.2 Hadoop version mismatch
No, mr1 should not be the issue here, and I think that would break
other things. The OP is not using mr1.
client 4 / server 7 means roughly
wrote:
Did you have a look at
http://spark.apache.org/docs/1.2.0/building-spark.html
I think you can simply download the source and build it for your Hadoop
version, e.g.:
mvn -Dhadoop.version=2.0.0-mr1-cdh4.7.0 -DskipTests clean package
Thanks
Best Regards
On Thu, Feb 12, 2015 at 11:45 AM
Hi Spark users,
I seem to be having this consistent error, which I have been trying to
reproduce and narrow down. I've been running a PySpark application on Spark
1.2 reading Avro files from Hadoop, and was consistently seeing the following
error:
py4j.protocol.Py4JJavaError: An
From: Michael Nazario
Sent: Wednesday, February 11, 2015 10:13 PM
To: user@spark.apache.org
Subject: PySpark 1.2 Hadoop version mismatch
Hi Spark users,
I seem to be having this consistent error which I have been trying to reproduce
and narrow down the problem. I've been
Thank you!
Hi,
Where can I find the version of Hadoop my cluster is using? I launched my
ec2 cluster using the spark-ec2 script with the --hadoop-major-version=2
option. However, the folder hadoop-native/lib in the master node only
contains files that end in 1.0.0. Does that mean that I have Hadoop version
Hi Maria,
Having files that end with 1.0.0 means you're on Spark 1.0, not Hadoop 1.0.
You can check your Hadoop version by running $HADOOP_HOME/bin/hadoop
version, where HADOOP_HOME is set to your Hadoop installation. On the
clusters started by the Spark ec2 scripts, this should be
/root
Although you need to compile it differently for different versions of
HDFS / Hadoop, as far as I know Spark continues to work with Hadoop
1.x (and probably older 0.20.x as a result -- your experience is an
existence proof.) And it works with the newest Hadoop 2.4.x, again
with the appropriate