[ https://issues.apache.org/jira/browse/SPARK-7909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563609#comment-14563609 ]
Matthew Goodman commented on SPARK-7909:
----------------------------------------
There are 11 folders in /root/spark/work/app-20150528200603-0000/, all containing
the same traceback (below), differing only in the timestamp of the error:
{code:title=Spark worker Traceback|borderStyle=solid}
15/05/28 20:06:04 INFO executor.CoarseGrainedExecutorBackend: Registered signal handlers for [TERM, HUP, INT]
Exception in thread "main" java.lang.ExceptionInInitializerError
    at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:146)
    at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:245)
    at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
Caused by: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:131)
    at org.apache.hadoop.security.Groups.<init>(Groups.java:55)
    at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:182)
    at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:235)
    at org.apache.hadoop.security.UserGroupInformation.setConfiguration(UserGroupInformation.java:249)
    at org.apache.spark.deploy.SparkHadoopUtil.<init>(SparkHadoopUtil.scala:50)
    at org.apache.spark.deploy.SparkHadoopUtil$.<init>(SparkHadoopUtil.scala:353)
    at org.apache.spark.deploy.SparkHadoopUtil$.<clinit>(SparkHadoopUtil.scala)
    ... 3 more
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:129)
    ... 10 more
Caused by: java.lang.UnsatisfiedLinkError: org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative()V
    at org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative(Native Method)
    at org.apache.hadoop.security.JniBasedUnixGroupsMapping.<clinit>(JniBasedUnixGroupsMapping.java:49)
    at org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.<init>(JniBasedUnixGroupsMappingWithFallback.java:38)
    ... 15 more
{code}
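For what it's worth, an UnsatisfiedLinkError on JniBasedUnixGroupsMapping.anchorNative() usually means the JVM loaded a libhadoop.so built against a different Hadoop version than the jars on the classpath, so the native method table doesn't line up. A sketch of the checks I can run on a worker (the hadoop binary path is an assumption about the spark-ec2 AMI layout, and checknative requires a Hadoop 2.x build):
{code:title=Native library checks (sketch)|borderStyle=solid}
# Report which native libraries Hadoop can actually load (Hadoop 2.x only).
/root/ephemeral-hdfs/bin/hadoop checknative -a

# Locate every libhadoop.so the executor might be resolving.
find / -name 'libhadoop.so*' 2>/dev/null

# Possible workaround if the native library is mismatched: switch to the
# pure-Java group mapping by setting, in core-site.xml,
#   hadoop.security.group.mapping = org.apache.hadoop.security.ShellBasedUnixGroupsMapping
{code}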
My launch script is as follows:
{code:title=Spark Launch Call|borderStyle=solid}
bash spark-ec2 \
  --spark-version=ab62d73ddb973c25de043e8e9ade7800adf244e8 \
  --spark-ec2-git-repo=https://github.com/3scan/spark-ec2 \
  --spark-ec2-git-branch=branch-1.4 \
  --key-pair=blahblahblah \
  --identity-file=blahblahblah.pem \
  --region us-west-2 \
  --user-data /home/meawoppl/repos/3scan-analysis/spark/linux-bootstrap.sh \
  login test-cluster
{code}
I am going to try a prebuilt Spark next. I suspect this is related to the
compiled/checked-out version that I am running, but I'm not sure.
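For reference, --spark-version also accepts a released version string, in which case spark-ec2 fetches a prebuilt binary instead of building the given git hash. A sketch of the launch I plan to try (the release number is just an example, other flags unchanged, and it uses launch rather than login to bring up a fresh cluster):
{code:title=Prebuilt launch variant (sketch)|borderStyle=solid}
bash spark-ec2 \
  --spark-version=1.3.1 \
  --spark-ec2-git-repo=https://github.com/3scan/spark-ec2 \
  --spark-ec2-git-branch=branch-1.4 \
  --key-pair=blahblahblah \
  --identity-file=blahblahblah.pem \
  --region us-west-2 \
  --user-data /home/meawoppl/repos/3scan-analysis/spark/linux-bootstrap.sh \
  launch test-cluster
{code}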
> spark-ec2 and associated tools not py3 ready
> --------------------------------------------
>
> Key: SPARK-7909
> URL: https://issues.apache.org/jira/browse/SPARK-7909
> Project: Spark
> Issue Type: Improvement
> Components: EC2
> Environment: ec2 python3
> Reporter: Matthew Goodman
>
> At present no permutation of tools supports Python 3 on both the launching
> machine and the running cluster. There are a couple of problems involved:
> - There is no prebuilt Spark binary with Python 3 support.
> - spark-ec2/spark/init.sh contains inline py3-unfriendly print statements.
> - Config files for cluster processes don't seem to reach all nodes in a
> working format.
> I have fixes for some of this, but debugging the config and running context
> remains elusive to me.