I understand there is an API incompatibility between Hadoop 2.6/2.7 and newer versions of the AWS SDK: the signature of com.amazonaws.services.s3.transfer.TransferManagerConfiguration.setMultipartUploadThreshold changed (from an int to a long parameter), so hadoop-aws built against the older SDK fails at runtime. The JIRA (https://issues.apache.org/jira/browse/HADOOP-12420) indicates this will be fixed in Hadoop 2.8.
My question is: what are people doing today to access S3? I am unable to find an older JAR of the AWS SDK to test with.

Thanks,
Ashish
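For context, one common workaround at the time was to pin the AWS SDK to the version hadoop-aws 2.7.x was compiled against (aws-java-sdk 1.7.4, which is published on Maven Central), rather than letting a newer SDK onto the classpath. A sketch, assuming Spark 1.5.1 with the --packages mechanism; the exact artifact versions are illustrative:

```shell
# Pull hadoop-aws together with the matching aws-java-sdk 1.7.4 from Maven Central,
# so TransferManagerConfiguration.setMultipartUploadThreshold(int) resolves correctly.
spark-shell --packages org.apache.hadoop:hadoop-aws:2.7.1,com.amazonaws:aws-java-sdk:1.7.4

# Then inside the shell, use the s3a filesystem, e.g.:
#   sc.textFile("s3a://my-bucket/path/to/data")   # bucket/path are placeholders
```

The key point is version alignment: mixing hadoop-aws 2.6/2.7 with aws-java-sdk 1.8+ on the same classpath is what triggers the NoSuchMethodError described in HADOOP-12420.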