Re: Spark on Yarn: java.lang.IllegalArgumentException: Invalid rule

2015-02-03 Thread maven

The version I'm using was already pre-built for Hadoop 2.3. 



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-on-Yarn-java-lang-IllegalArgumentException-Invalid-rule-tp21382p21485.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Re: Spark on Yarn: java.lang.IllegalArgumentException: Invalid rule

2015-01-27 Thread maven
Thanks, Siddardha. I did, but got the same error. Kerberos is enabled on my
cluster, and I may be missing a configuration step somewhere. 
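
For reference, the rule quoted in the stack trace further down this thread ends in a trailing /L (Hadoop's "lowercase the result" flag), which only newer builds of the hadoop-auth library know how to parse. If that flag is the trigger, one workaround sketch is to point Spark at a copy of the Hadoop config with the flag dropped from hadoop.security.auth_to_local; the fragment below is illustrative, not taken from the poster's actual cluster config:

```xml
<!-- core-site.xml (illustrative): the same mapping rule, without the /L flag -->
<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[2:$1@$0](.*@XXXCOMPANY.COM)s/(.*)@XXXCOMPANY.COM/$1/
    DEFAULT
  </value>
</property>
```

Note the principal is then no longer lowercased. The other direction is to make the self-built Spark use a hadoop-auth that understands /L, e.g. by building against the exact Hadoop version (or vendor build) the cluster actually runs; both are untested suggestions, not confirmed fixes from this thread.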






Spark on Yarn: java.lang.IllegalArgumentException: Invalid rule

2015-01-26 Thread maven
All, 

I recently tried to build Spark 1.2 on my enterprise server (which runs Hadoop
2.3 with YARN). Here are the steps I followed for the build: 

$ mvn -Pyarn -Phadoop-2.3 -Dhadoop.version=2.3.0 -DskipTests clean package 
$ export SPARK_HOME=/path/to/spark/folder 
$ export HADOOP_CONF_DIR=/etc/hadoop/conf 

However, when I try to work with this installation either locally or on
YARN, I get the following error: 

Exception in thread "main" java.lang.ExceptionInInitializerError 
        at org.apache.spark.util.Utils$.getSparkOrYarnConfig(Utils.scala:1784) 
        at org.apache.spark.storage.BlockManager.<init>(BlockManager.scala:105) 
        at org.apache.spark.storage.BlockManager.<init>(BlockManager.scala:180) 
        at org.apache.spark.SparkEnv$.create(SparkEnv.scala:292) 
        at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:159) 
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:232) 
        at water.MyDriver$.main(MyDriver.scala:19) 
        at water.MyDriver.main(MyDriver.scala) 
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
        at java.lang.reflect.Method.invoke(Method.java:606) 
        at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:360) 
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75) 
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) 
Caused by: org.apache.spark.SparkException: Unable to load YARN support 
        at org.apache.spark.deploy.SparkHadoopUtil$.liftedTree1$1(SparkHadoopUtil.scala:199) 
        at org.apache.spark.deploy.SparkHadoopUtil$.<init>(SparkHadoopUtil.scala:194) 
        at org.apache.spark.deploy.SparkHadoopUtil$.<clinit>(SparkHadoopUtil.scala) 
        ... 15 more 
Caused by: java.lang.IllegalArgumentException: Invalid rule: L 
RULE:[2:$1@$0](.*@XXXCOMPANY.COM)s/(.*)@XXXCOMPANY.COM/$1/L 
DEFAULT 
        at org.apache.hadoop.security.authentication.util.KerberosName.parseRules(KerberosName.java:321) 
        at org.apache.hadoop.security.authentication.util.KerberosName.setRules(KerberosName.java:386) 
        at org.apache.hadoop.security.HadoopKerberosName.setConfiguration(HadoopKerberosName.java:75) 
        at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:247) 
        at org.apache.hadoop.security.UserGroupInformation.setConfiguration(UserGroupInformation.java:283) 
        at org.apache.spark.deploy.SparkHadoopUtil.<init>(SparkHadoopUtil.scala:43) 
        at org.apache.spark.deploy.yarn.YarnSparkHadoopUtil.<init>(YarnSparkHadoopUtil.scala:45) 
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) 
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526) 
        at java.lang.Class.newInstance(Class.java:374) 
        at org.apache.spark.deploy.SparkHadoopUtil$.liftedTree1$1(SparkHadoopUtil.scala:196) 
        ... 17 more 

I noticed that when I unset HADOOP_CONF_DIR, I can work in local mode without
any errors. I can also work with the pre-installed Spark 1.0, both locally and
on YARN, without any issues. It looks like I may be missing a configuration
step somewhere. Any thoughts on what may be causing this? 
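
A plausible reading of the error: the trailing /L flag on the auth_to_local rule is understood by the Hadoop libraries that the pre-installed Spark 1.0 runs against, but not by the hadoop-auth that the fresh 2.3.0 build bundles, so the rule parser stops at the leftover "L" and rejects the whole ruleset. The toy sketch below (simplified regexes, not Hadoop's actual ones) shows how a grammar without a flag suffix rejects exactly this rule:

```python
import re

# Simplified rule grammars, loosely modeled on Hadoop's KerberosName
# parser; NOT the actual Hadoop regexes.
OLD_RULE = re.compile(                      # no flag suffix supported
    r"\s*((DEFAULT)|(RULE:\[(\d*):([^\]]*)\](\(([^)]*)\))?"
    r"(s/([^/]*)/([^/]*)/(g)?)?))")
NEW_RULE = re.compile(OLD_RULE.pattern + r"/?(L)?")  # optional /L flag

rule = "RULE:[2:$1@$0](.*@XXXCOMPANY.COM)s/(.*)@XXXCOMPANY.COM/$1/L"

def parses(pattern, text):
    """A rule is valid only if the grammar consumes the entire string."""
    m = pattern.match(text)
    return m is not None and m.end() == len(text)

print(parses(OLD_RULE, rule))  # False -- the trailing "L" is left over
print(parses(NEW_RULE, rule))  # True  -- the flag is part of the grammar
```

Since core-site.xml is shared cluster-wide and the cluster itself is healthy, the rule is presumably fine for the cluster's own Hadoop; if this reading is right, it is the client-side parser in the new build that is too old for it.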

NR






Re: java.lang.ExceptionInInitializerError/Unable to load YARN support

2015-01-14 Thread maven
All,

I'm still facing this issue. Any thoughts on how I can fix this?

NR






java.lang.ExceptionInInitializerError/Unable to load YARN support

2014-12-18 Thread maven
All, 

I just built Spark 1.2 on my enterprise server (which runs Hadoop 2.3 with
YARN). Here are the steps I followed for the build: 

$ mvn -Pyarn -Phadoop-2.3 -Dhadoop.version=2.3.0 -DskipTests clean package 
$ export SPARK_HOME=/path/to/spark/folder 
$ export HADOOP_CONF_DIR=/etc/hadoop/conf 

However, when I try to work with this installation either locally or on
YARN, I get the following error: 

Exception in thread "main" java.lang.ExceptionInInitializerError 
        at org.apache.spark.util.Utils$.getSparkOrYarnConfig(Utils.scala:1784) 
        at org.apache.spark.storage.BlockManager.<init>(BlockManager.scala:105) 
        at org.apache.spark.storage.BlockManager.<init>(BlockManager.scala:180) 
        at org.apache.spark.SparkEnv$.create(SparkEnv.scala:292) 
        at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:159) 
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:232) 
        at water.MyDriver$.main(MyDriver.scala:19) 
        at water.MyDriver.main(MyDriver.scala) 
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
        at java.lang.reflect.Method.invoke(Method.java:606) 
        at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:360) 
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75) 
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) 
Caused by: org.apache.spark.SparkException: Unable to load YARN support 
        at org.apache.spark.deploy.SparkHadoopUtil$.liftedTree1$1(SparkHadoopUtil.scala:199) 
        at org.apache.spark.deploy.SparkHadoopUtil$.<init>(SparkHadoopUtil.scala:194) 
        at org.apache.spark.deploy.SparkHadoopUtil$.<clinit>(SparkHadoopUtil.scala) 
        ... 15 more 
Caused by: java.lang.IllegalArgumentException: Invalid rule: L 
RULE:[2:$1@$0](.*@XXXCOMPANY.COM)s/(.*)@XXXCOMPANY.COM/$1/L 
DEFAULT 
        at org.apache.hadoop.security.authentication.util.KerberosName.parseRules(KerberosName.java:321) 
        at org.apache.hadoop.security.authentication.util.KerberosName.setRules(KerberosName.java:386) 
        at org.apache.hadoop.security.HadoopKerberosName.setConfiguration(HadoopKerberosName.java:75) 
        at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:247) 
        at org.apache.hadoop.security.UserGroupInformation.setConfiguration(UserGroupInformation.java:283) 
        at org.apache.spark.deploy.SparkHadoopUtil.<init>(SparkHadoopUtil.scala:43) 
        at org.apache.spark.deploy.yarn.YarnSparkHadoopUtil.<init>(YarnSparkHadoopUtil.scala:45) 
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) 
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526) 
        at java.lang.Class.newInstance(Class.java:374) 
        at org.apache.spark.deploy.SparkHadoopUtil$.liftedTree1$1(SparkHadoopUtil.scala:196) 
        ... 17 more 

I noticed that when I unset HADOOP_CONF_DIR, I can work in local mode without
any errors. I can also work with the pre-installed Spark 1.0, both locally and
on YARN, without any issues. It looks like I may be missing a configuration
step somewhere. Any thoughts on what may be causing this? 
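
Given that the pre-installed Spark 1.0 works, a useful check is whether the self-built assembly jar shades in its own (older) hadoop-auth classes, since those would take precedence over the cluster's patched Hadoop jars. A sketch, with a hypothetical assembly path:

```python
import os
import zipfile

# Hypothetical path; adjust to wherever your build put the assembly jar.
ASSEMBLY = "assembly/target/scala-2.10/spark-assembly-1.2.0-hadoop2.3.0.jar"

def bundled_kerberos_classes(jar_path):
    """Return the hadoop-auth KerberosName class entries shaded into a jar."""
    with zipfile.ZipFile(jar_path) as jar:
        return [name for name in jar.namelist()
                if "security/authentication/util/KerberosName" in name]

if os.path.exists(ASSEMBLY):
    # A non-empty list means the assembly carries its own rule parser,
    # which would then be the one that chokes on the /L flag.
    print(bundled_kerberos_classes(ASSEMBLY))
```

This is only a diagnostic sketch, not a fix; it narrows down whether the failing parser comes from the build or from the cluster's classpath.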

NR






Re: java.lang.ExceptionInInitializerError/Unable to load YARN support

2014-12-06 Thread maven
I noticed that when I unset HADOOP_CONF_DIR, I'm able to work in local mode
without any errors. 


