It seemed like I was not able to connect to sts.amazonaws.com. Fixed that error. Now the Spark write to S3 is able to create the folder structure on S3, but the final file write fails with the big error below:

org.apache.spark.SparkException: Job aborted.
at org.apache.spark.sql.execution.datasources.FileFormatWriter
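For context, the pod's credentials come from its web-identity token; a minimal sketch of pointing s3a at that token directly instead of the full default chain (assumes Hadoop 3.x s3a bindings and AWS SDK v1 on the classpath; everything else elided):

```
# Sketch: use the web-identity token credentials (IRSA) rather than
# DefaultCredentialsProviderChain. Assumes Hadoop 3.x s3a bindings and
# AWS SDK v1 >= 1.11.704 on the classpath.
spark-submit \
  --conf spark.hadoop.fs.s3a.aws.credentials.provider=com.amazonaws.auth.WebIdentityTokenCredentialsProvider \
  <rest of the submit arguments>
```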
Hi,
I tried doing what Vladimir suggested, but no luck there either. My guess is that it has something to do with securityContext.fsGroup. I am trying to pass the YAML file path along with the spark-submit command. My YAML file content is:
```
apiVersion: v1
kind: Pod
spec:
  securityContext:
    fsGroup: ...
```
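Roughly, I pass it like this (a sketch; these are the Spark 3.x pod template properties, and the path is a placeholder):

```
# Sketch: hand the same template to driver and executors (Spark 3.x).
# The template path is a placeholder for wherever the file lives.
spark-submit \
  --conf spark.kubernetes.driver.podTemplateFile=/path/to/pod-template.yaml \
  --conf spark.kubernetes.executor.podTemplateFile=/path/to/pod-template.yaml \
  <rest of the submit arguments>
```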
Hi,
the fsGroup setting should match the ID Spark is running as. When building from source, that ID is 185, and you can use "docker inspect <image>" to double-check.
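Concretely, something like this (the image name is a placeholder):

```
# Print the user the image is configured to run as; images built
# from the Spark project's Dockerfile default to UID 185.
docker inspect --format '{{.Config.User}}' <your-spark-image>
```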
On 9 Feb 2021, at 19:46, Rishabh Jain wrote:
Hi,
We are trying to access S3 from a Spark job running on an EKS cluster pod. I have a service account that has an IAM role attached with full S3 permissions. We are using DefaultCredentialsProviderChain, but we are still getting 403 Forbidden from S3.
Is there anything wrong with our approach?
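For reference, a minimal sketch of that wiring; the namespace, service account name, and role ARN here are placeholders:

```
# Sketch: attach the IAM role to the service account (IRSA); every
# name and the ARN below are placeholders for your own values.
kubectl annotate serviceaccount spark -n spark-jobs \
  eks.amazonaws.com/role-arn=arn:aws:iam::123456789012:role/spark-s3-access

# Run the driver pod under that service account.
spark-submit \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  <rest of the submit arguments>
```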