Re: [DISCUSS] writing structured streaming dataframe to custom S3 buckets?

2019-11-08 Thread Steve Loughran
> spark.sparkContext.hadoopConfiguration.set("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")

This is some superstition which seems to get carried through Stack Overflow articles. You do not need to declare the implementation class for s3a:// any more than you have to do for
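For context on the point being made here: Hadoop's core-default.xml already maps the s3a:// scheme to org.apache.hadoop.fs.s3a.S3AFileSystem, so setting fs.s3a.impl is redundant. The quoted call also mixes two conventions: the "spark.hadoop." prefix belongs on Spark properties, not on keys set directly on the Hadoop Configuration. A minimal sketch of the two equivalent (and here both unnecessary) forms, assuming an existing SparkSession named spark:

```scala
// 1) Setting a key directly on the Hadoop Configuration: no "spark.hadoop." prefix.
//    (Redundant for fs.s3a.impl: it is already the default binding for s3a://.)
spark.sparkContext.hadoopConfiguration
  .set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")

// 2) Setting the same option as a Spark property, e.g. at session build time:
//    here the "spark.hadoop." prefix IS required, and Spark copies the key
//    (minus the prefix) into the Hadoop Configuration for you.
// SparkSession.builder()
//   .config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
```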

[DISCUSS] writing structured streaming dataframe to custom S3 buckets?

2019-10-29 Thread Aniruddha P Tekade
Hello, I have a local S3 service that is writable and readable using the AWS SDK APIs. I created the Spark session and then set the Hadoop configurations as follows:

    // Create Spark Session
    val spark = SparkSession
      .builder()
      .master("local[*]")
      .appName("S3Loaders")
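For a local or custom S3-compatible service, the usual approach is to point the s3a connector at the service's endpoint and credentials via "spark.hadoop."-prefixed properties. A hedged sketch of what that session setup could look like; the endpoint URL, credentials, and bucket/path names below are placeholders, not values from this thread:

```scala
import org.apache.spark.sql.SparkSession

// Sketch: point s3a at a local S3-compatible endpoint (values are assumptions).
val spark = SparkSession
  .builder()
  .master("local[*]")
  .appName("S3Loaders")
  .config("spark.hadoop.fs.s3a.endpoint", "http://localhost:9000")   // placeholder endpoint
  .config("spark.hadoop.fs.s3a.access.key", "ACCESS_KEY")            // placeholder credential
  .config("spark.hadoop.fs.s3a.secret.key", "SECRET_KEY")            // placeholder credential
  .config("spark.hadoop.fs.s3a.path.style.access", "true")           // many non-AWS stores need path-style requests
  .getOrCreate()

// A structured streaming write to such a bucket would then use an s3a:// path
// (checkpointLocation is required for streaming sinks):
// df.writeStream
//   .format("parquet")
//   .option("checkpointLocation", "s3a://my-bucket/checkpoints")  // placeholder path
//   .option("path", "s3a://my-bucket/output")                     // placeholder path
//   .start()
```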