I installed the custom Spark in standalone mode as usual. The master and slaves
started successfully.
However, I got an error when I ran a job. It seems to me from the error message
that some library was compiled against hadoop1, but my Spark was compiled
against hadoop2.
15/01/08 23:27:36 INFO
I ran this with CDH 5.2 without a problem (sorry, I don't have 5.3
readily available at the moment):
$ HBASE='/opt/cloudera/parcels/CDH/lib/hbase/*'
$ spark-submit --driver-class-path $HBASE --conf
spark.executor.extraClassPath=$HBASE --master yarn --class
org.apache.spark.examples.HBaseTest
I ran the stock Spark shipped in CDH 5.3.0 but got the same error. Has anyone
tried to run Spark in CDH 5.3.0 using its newAPIHadoopRDD?
command:
spark-submit --master spark://master:7077 --jars
/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/spark-examples-1.2.0-cdh5.3.0-hadoop2.5.0-cdh5.3.0.jar
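For context, this is roughly the shape of the newAPIHadoopRDD call in question --
a minimal sketch along the lines of the HBaseTest example, not code from this
thread; the table name "test_table" and app name are placeholders:

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.spark.{SparkConf, SparkContext}

object HBaseReadSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("HBaseReadSketch"))

    // TableInputFormat reads the table name from the Hadoop Configuration.
    val conf = HBaseConfiguration.create()
    conf.set(TableInputFormat.INPUT_TABLE, "test_table")

    // TableInputFormat, ImmutableBytesWritable and Result all come from the
    // HBase jars, which is why those jars must be on both the driver and
    // executor classpaths.
    val rdd = sc.newAPIHadoopRDD(conf, classOf[TableInputFormat],
      classOf[ImmutableBytesWritable], classOf[Result])

    println("row count: " + rdd.count())
    sc.stop()
  }
}

If the HBase jars on the classpath were built against hadoop1 while Spark was
built against hadoop2, this newAPIHadoopRDD call is exactly where the mismatch
would surface.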
On Thu, Jan 8, 2015 at 3:33 PM, freedafeng freedaf...@yahoo.com wrote:
I installed the custom Spark in standalone mode as usual. The master and slaves
started successfully.
However, I got an error when I ran a job. It seems to me from the error message
that some library was compiled against hadoop1,
Could anyone share your experience on how to do this?
I have created a cluster and installed CDH 5.3.0 on it with basically Core +
HBase, but Cloudera installed and configured Spark from its parcels
anyway. I'd like to install our custom Spark on this cluster to use its
Hadoop and HBase.
Disclaimer: CDH questions are better handled at cdh-us...@cloudera.org.
But the question I'd like to ask is: why do you need your own Spark
build? What's wrong with CDH's Spark that it doesn't work for you?
On Thu, Jan 8, 2015 at 3:01 PM, freedafeng freedaf...@yahoo.com wrote:
Could anyone come