You can try adding the following to your Spark scripts:

In bin/compute-classpath.sh, append the hadoop-lzo JAR from MapReduce and export the native library paths:

CLASSPATH=$CLASSPATH:$HADOOP_HOME/share/hadoop/mapreduce/lib/hadoop-lzo.jar
export JAVA_LIBRARY_PATH=$JAVA_LIBRARY_PATH:$HADOOP_HOME/lib/native/
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HADOOP_HOME/lib/native/
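Before editing the scripts, it may help to confirm the JAR and native GPL compression library are actually where those entries point. This is just a hypothetical sanity check, assuming $HADOOP_HOME points at your Hadoop 2.2.0 install and hadoop-lzo is already installed there:

```shell
# Sanity check: verify the hadoop-lzo JAR and native library exist at
# the locations the CLASSPATH / LD_LIBRARY_PATH entries above expect.
check() {
  if ls "$1" >/dev/null 2>&1; then
    echo "found: $1"
  else
    echo "missing: $1"
  fi
}
check "$HADOOP_HOME/share/hadoop/mapreduce/lib/hadoop-lzo"*.jar
check "$HADOOP_HOME/lib/native/libgplcompression"*
```

If either line prints "missing", fix the install (or adjust the paths) first; otherwise the classpath edits below won't help.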


In bin/spark-class, before JAVA_OPTS is set, add the following:

SPARK_LIBRARY_PATH=$SPARK_LIBRARY_PATH:$HADOOP_HOME/lib/native/

This fixed my problem.




--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-0-9-0-incubation-Apache-Hadoop-2-2-0-YARN-encounter-Compression-codec-com-hadoop-compression-ld-tp2793p3226.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
