[
https://issues.apache.org/jira/browse/SPARK-7344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14546093#comment-14546093
]
Daniel Mahler commented on SPARK-7344:
--------------------------------------
The problem occurs even when I launch with `spark-ec2 --hadoop-major-version=2`; that flag only makes Spark use Hadoop 2.0.0.
I do not know of a way to make spark-ec2 launch a Spark cluster with a more
recent Hadoop version. Giving spark-ec2 a way to launch clusters with more
recent versions of Hadoop may well be the solution to this problem.
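For reference, a launch invocation with that flag looks roughly like the sketch below. The cluster name, key pair, and identity file are placeholders, not values from this report; the flag accepts at least the values 1 and 2, and as noted above, 2 still resolves to Hadoop 2.0.0.

```shell
# Hypothetical spark-ec2 launch; key pair, identity file, and cluster
# name below are placeholders. Even with --hadoop-major-version=2 the
# cluster comes up on Hadoop 2.0.0, not a newer 2.x release.
./ec2/spark-ec2 \
  --key-pair=my-keypair \
  --identity-file=my-keypair.pem \
  --hadoop-major-version=2 \
  launch my-cluster
```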
> Spark hangs reading and writing to the same S3 bucket
> -----------------------------------------------------
>
> Key: SPARK-7344
> URL: https://issues.apache.org/jira/browse/SPARK-7344
> Project: Spark
> Issue Type: Bug
> Components: EC2
> Affects Versions: 1.2.0, 1.2.1, 1.2.2, 1.3.0, 1.3.1
> Environment: AWS EC2
> Reporter: Daniel Mahler
>
> The following code will hang if the `outprefix` is in an S3 bucket:
> val copy1 = "s3n://mybucket/copy1"
> val copy2 = "s3n://mybucket/copy2"
> val txt1 = sc.textFile(inpath)
> txt1.count
> txt1.saveAsTextFile(copy1)
> val txt2 = sc.textFile(copy1 + "/part-*")
> txt2.count
> txt2.saveAsTextFile(copy2) // <- HANGS HERE
> val txt3 = sc.textFile(copy2 + "/part-*")
> txt3.count
> The problem goes away if copy1 and copy2 are in distinct S3 buckets, or when
> using HDFS instead of S3.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)