Spark 3.1.1 compile error when building against Hadoop (version 2.6.0-cdh5.13.1)

2021-03-17 Thread jiahong li
Hi everyone, when I compile against Hadoop version 2.6.0-cdh5.13.1, the compile command is ./dev/make-distribution.sh --name 2.6.0-cdh5.13.1 --pip --tgz -Phive -Phive-thriftserver -Pyarn -Dhadoop.version=2.6.0-cdh5.13.1, and there is an error like this: [INFO] --- scala-maven-plugin:4.3.0:compile
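
A minimal sketch of this kind of build, for comparison. Spark 3.x dropped the Hadoop 2.6 build profile (its oldest supported 2.x line is 2.7), which is a likely cause of failures against 2.6.0-cdh5.x; the 2.7.4 version below is an assumption, not the thread's resolution:

    # Sketch: build a runnable Spark 3.1.x distribution against a chosen
    # Hadoop version. Spark 3.x supports Hadoop 2.7+ on the 2.x line,
    # so a 2.7-era version is assumed here instead of 2.6.0-cdh5.13.1.
    ./dev/make-distribution.sh --name custom-hadoop --pip --tgz \
      -Phive -Phive-thriftserver -Pyarn \
      -Dhadoop.version=2.7.4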

Guava not compatible with Hadoop version 2.6.5

2017-07-27 Thread Markus.Breuer
After upgrading from Apache Spark 2.1.1 to 2.2.0, our integration tests fail with an exception: java.lang.IllegalAccessError: tried to access method com.google.common.base.Stopwatch.<init>()V from class org.apache.hadoop.mapred.FileInputFormat at
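
This is the well-known Guava conflict: Hadoop 2.6.x's FileInputFormat constructs a Guava Stopwatch whose constructor is no longer accessible in newer Guava releases. One possible workaround for an sbt build, sketched under the assumption that pinning an older Guava is acceptable in your dependency tree:

    // Sketch (sbt): pin a Guava version whose Stopwatch constructor is still
    // public, so Hadoop 2.6.x's FileInputFormat can call it. The exact version
    // to pin is an assumption; verify against your own classpath.
    dependencyOverrides += "com.google.guava" % "guava" % "14.0.1"

Alternatively, building against a newer Hadoop line that no longer uses Guava's Stopwatch sidesteps the conflict entirely.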

Re: Spark 2.1.1 and Hadoop version 2.2 or 2.7?

2017-06-21 Thread yohann jardin
https://spark.apache.org/docs/2.1.0/building-spark.html#specifying-the-hadoop-version Hadoop v2.2.0 is only the default build version; other versions can still be built. The package you downloaded is prebuilt for Hadoop 2.7, as stated on the download page, so don't worry. Yohann Jardin
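
For reference, the Maven invocation from that page looks roughly like this (2.7.3 is the docs' example version; substitute your own):

    # From the Spark 2.1.x building guide: pick the Hadoop profile and
    # pin an exact version with -Dhadoop.version.
    ./build/mvn -Pyarn -Phadoop-2.7 -Dhadoop.version=2.7.3 -DskipTests clean package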

Spark 2.1.1 and Hadoop version 2.2 or 2.7?

2017-06-20 Thread N B
ords after HDFS node restarted) I have been digging into this issue and have started to suspect a version mismatch between the Hadoop server and client. I decided to look at Spark 2.1.1's pom.xml. It states hadoop.version as 2.2.0. There seems to be some mismatch here that I am not sure if that's the

change hadoop version when importing spark

2016-02-24 Thread YouPeng Yang
Hi, I am developing an application based on Spark 1.6. My lib dependencies are just: libraryDependencies ++= Seq( "org.apache.spark" %% "spark-core" % "1.6.0" ). It uses Hadoop 2.2.0 as the default Hadoop version, which is not my preference. I want to change the hadoop
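
One common way to do this in sbt, sketched below: exclude Spark's transitive Hadoop client and declare your own. The hadoop-client coordinates are the standard ones, but the 2.6.0 version is purely illustrative.

    // Sketch (sbt): swap Spark's transitive Hadoop client for a chosen version.
    // "2.6.0" is illustrative; use the version your cluster actually runs.
    libraryDependencies ++= Seq(
      ("org.apache.spark" %% "spark-core" % "1.6.0")
        .exclude("org.apache.hadoop", "hadoop-client"),
      "org.apache.hadoop" % "hadoop-client" % "2.6.0"
    )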

RE: PySpark 1.2 Hadoop version mismatch

2015-02-12 Thread Michael Nazario
(quoting a reply from ...@cloudera.com, Thursday, February 12, 2015) No, mr1 should not be the issue here, and I think that would break other things. The OP is not using mr1. client 4 / server 7 means roughly

Re: PySpark 1.2 Hadoop version mismatch

2015-02-12 Thread Sean Owen
(quoting Akhil Das) Did you have a look at http://spark.apache.org/docs/1.2.0/building-spark.html? I think you can simply download the source and build for your Hadoop version as: mvn -Dhadoop.version=2.0.0-mr1-cdh4.7.0 -DskipTests clean package. Thanks, Best Regards. On Thu, Feb 12, 2015 at 11:45 AM

PySpark 1.2 Hadoop version mismatch

2015-02-11 Thread Michael Nazario
Hi Spark users, I seem to be having this consistent error, which I have been trying to reproduce and narrow down. I've been running a PySpark application on Spark 1.2 reading Avro files from Hadoop. I was consistently seeing the following error: py4j.protocol.Py4JJavaError: An

Re: PySpark 1.2 Hadoop version mismatch

2015-02-11 Thread Akhil Das
Did you have a look at http://spark.apache.org/docs/1.2.0/building-spark.html? I think you can simply download the source and build for your Hadoop version as: mvn -Dhadoop.version=2.0.0-mr1-cdh4.7.0 -DskipTests clean package. Thanks, Best Regards. On Thu, Feb 12, 2015 at 11:45 AM, Michael

Re: hadoop version

2014-07-23 Thread mrm
Thank you!

hadoop version

2014-07-22 Thread mrm
Hi, where can I find the version of Hadoop my cluster is using? I launched my EC2 cluster using the spark-ec2 script with the --hadoop-major-version=2 option. However, the folder hadoop-native/lib on the master node only contains files that end in 1.0.0. Does that mean that I have Hadoop version

Re: hadoop version

2014-07-22 Thread Andrew Or
Hi Maria, having files that end with 1.0.0 means you're on Spark 1.0, not Hadoop 1.0. You can check your Hadoop version by running $HADOOP_HOME/bin/hadoop version, where HADOOP_HOME is set to your installation of Hadoop. On the clusters started by the Spark ec2 scripts, this should be /root
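
Usage, for concreteness (the version string in the first output line is illustrative; the remaining output varies by distribution):

    # Print the version of the Hadoop installation on the cluster.
    $ $HADOOP_HOME/bin/hadoop version
    Hadoop 2.0.0-cdh4.7.0
    ...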

Re: Stable Hadoop version supported?

2014-05-16 Thread Sean Owen
Although you need to compile it differently for different versions of HDFS / Hadoop, as far as I know Spark continues to work with Hadoop 1.x (and probably older 0.20.x as a result -- your experience is an existence proof). And it works with the newest Hadoop 2.4.x, again with the appropriate
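
In that era the documented pattern was a Hadoop profile plus an explicit version, roughly as below (profile names are release-specific; check the building guide for your Spark version):

    # Sketch: Spark 1.x-era build against Hadoop 2.4 with YARN support.
    mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package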