lucasberlang commented on issue #7185:
URL: https://github.com/apache/hudi/issues/7185#issuecomment-1314962293

   Yes, I have the S3 credentials in the core-site.xml and in the 
flink-conf.yaml on both the jobmanager and the taskmanager:
   ```xml
     <property>
       <name>fs.s3.awsAccessKeyId</name>
       <value>xxxxxxxx</value>
     </property>
     <property>
       <name>fs.s3.awsSecretAccessKey</name>
       <value>xxxxxxxx</value>
     </property>
     <property>
       <name>fs.s3.impl</name>
       <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
     </property>
   ```
   
   ```yaml
     flink-conf.yaml: |+
       jobmanager.rpc.address: flink-jobmanager-session
       taskmanager.numberOfTaskSlots: 2
       blob.server.port: 6124
       jobmanager.rpc.port: 6123
       taskmanager.rpc.port: 6122
       queryable-state.proxy.ports: 6125
       jobmanager.memory.process.size: 1600m
       taskmanager.memory.process.size: 1728m
       parallelism.default: 2
       execution.checkpointing.interval: 60s   
       metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
       metrics.reporters: prom
       metrics.reporter.prom.port: 9249
       s3.access-key: xxxxxxxx
       s3.secret-key: xxxxxxxx
   ```
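
   As a quick sanity check (this is just an illustrative sketch, not part of my setup), the `core-site.xml` properties can be parsed to confirm they are well-formed and that the expected names are present — Hadoop silently ignores malformed `<property>` blocks:

   ```python
   import xml.etree.ElementTree as ET

   # Inline copy of the relevant core-site.xml fragment (values redacted).
   CORE_SITE = """<configuration>
     <property>
       <name>fs.s3.awsAccessKeyId</name>
       <value>xxxxxxxx</value>
     </property>
     <property>
       <name>fs.s3.awsSecretAccessKey</name>
       <value>xxxxxxxx</value>
     </property>
     <property>
       <name>fs.s3.impl</name>
       <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
     </property>
   </configuration>"""

   def hadoop_props(xml_text):
       """Parse a Hadoop *-site.xml string into a {name: value} dict."""
       root = ET.fromstring(xml_text)
       return {p.findtext("name"): p.findtext("value")
               for p in root.iter("property")}

   props = hadoop_props(CORE_SITE)
   print(props["fs.s3.impl"])  # org.apache.hadoop.fs.s3a.S3AFileSystem
   ```

   The properties parse cleanly, so the file itself looks structurally fine.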
   
   I don't think that is the problem, since I can write to S3 in JSON format 
without any issue. 
   Any other ideas?

