> spark.sparkContext.hadoopConfiguration.set("spark.hadoop.fs.s3a.impl",
>   "org.apache.hadoop.fs.s3a.S3AFileSystem")
This is some superstition which seems to get carried through Stack Overflow
articles. You do not need to declare the implementation class for s3a://
any more than you have to for hdfs:// or file:// URLs: Hadoop maps the URL
scheme to the filesystem client for you.
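For a local S3-compatible store, the settings that actually matter are the
endpoint, the credentials, and (usually) path-style access. A minimal sketch,
assuming a hypothetical endpoint at http://localhost:9000, placeholder
credentials, and a hypothetical bucket name:

import org.apache.spark.sql.SparkSession

val spark = SparkSession
  .builder()
  .master("local[*]")
  .appName("S3Loaders")
  .getOrCreate()

// Keys set directly on hadoopConfiguration take no "spark.hadoop." prefix.
val hc = spark.sparkContext.hadoopConfiguration
hc.set("fs.s3a.endpoint", "http://localhost:9000")  // assumed local endpoint
hc.set("fs.s3a.access.key", "ACCESS_KEY")           // placeholder
hc.set("fs.s3a.secret.key", "SECRET_KEY")           // placeholder
hc.set("fs.s3a.path.style.access", "true")          // most local S3 stores need this
// Note: no fs.s3a.impl setting; the s3a:// scheme already resolves to S3AFileSystem.

val df = spark.read.text("s3a://my-bucket/some-prefix/") // hypothetical bucket/path
df.show()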
Hello,
I have a local S3 service that is writable and readable using the AWS SDK APIs.
I created the Spark session and then set the Hadoop configuration as
follows:
// Create Spark Session
val spark = SparkSession
  .builder()
  .master("local[*]")
  .appName("S3Loaders")