Re: Could not flush and close the file system output stream to s3a, is this fixed?

2017-12-14 Thread Bowen Li
Hi, the problem reported in FLINK-7590 only happened once on our end. And, as you can see from its comments, we suspected it was caused by the AWS SDK or Hadoop's s3a implementation, which we have no control over. Flink 1.4.0 has its own S3 implementations; I haven't tried them yet. On Thu, Dec
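
For context on the "own S3 implementations" mentioned above: Flink 1.4.0 ships two S3 filesystem connectors (flink-s3-fs-hadoop and flink-s3-fs-presto, found in the distribution's opt/ directory), and the documented setup is to copy one of those jars into lib/ and point checkpoint paths at an s3:// URI instead of going through Hadoop's s3a. The sketch below is illustrative only; it assumes Flink 1.4.0 with one of the bundled jars on the classpath, and the bucket, interval, and job name are placeholders, not values taken from this thread.

    // Minimal sketch, assuming Flink 1.4.0 with a bundled S3 filesystem jar
    // (e.g. opt/flink-s3-fs-presto-1.4.0.jar copied into lib/). Credentials are
    // expected to come from flink-conf.yaml or the environment (omitted here).
    import org.apache.flink.runtime.state.filesystem.FsStateBackend;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class S3CheckpointExample {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();
            env.enableCheckpointing(60_000); // checkpoint every 60s (placeholder)
            // With a bundled S3 filesystem on the classpath, s3:// paths bypass
            // the Hadoop s3a implementation discussed in this thread.
            env.setStateBackend(new FsStateBackend("s3://my-bucket/flink/checkpoints"));
            // ... sources, operators, and sinks omitted ...
            env.execute("s3 checkpoint example");
        }
    }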

Re: Could not flush and close the file system output stream to s3a, is this fixed?

2017-12-14 Thread Stephan Ewen
@Hao Can you provide a better-formatted stack trace? It is very hard to read as it is... On Thu, Dec 14, 2017 at 11:05 AM, Fabian Hueske wrote: > Bowen Li (in CC) closed the issue but there is no fix (or at least it is > not linked in the JIRA). > Maybe it was resolved in

Re: Could not flush and close the file system output stream to s3a, is this fixed?

2017-12-14 Thread Fabian Hueske
Bowen Li (in CC) closed the issue, but there is no fix (or at least it is not linked in the JIRA). Maybe it was resolved in another issue or can be resolved differently. @Bowen, can you comment on how to fix this problem? Will it work in Flink 1.4.0? Thank you, Fabian 2017-12-13 5:28 GMT+01:00

Could not flush and close the file system output stream to s3a, is this fixed?

2017-12-12 Thread Hao Sun
https://issues.apache.org/jira/browse/FLINK-7590 I have a similar situation with Flink 1.3.2 on K8S: 2017-12-13 00:57:12,403 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: KafkaSource(maxwell.tickets) -> MaxwellFilter->Maxwell(maxwell.tickets) ->
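
For context, the exception tracked in FLINK-7590 is thrown while writing checkpoint/output streams through Hadoop's s3a filesystem, which is the usual way to reach S3 on Flink 1.3.x. The fragment below is a hypothetical sketch of that kind of setup, not the poster's actual job: the bucket path and interval are placeholders, and it assumes hadoop-aws plus the AWS SDK are on the classpath with credentials configured on the Hadoop side.

    // Hypothetical Flink 1.3.x job checkpointing to S3 via Hadoop's s3a scheme,
    // the code path where "Could not flush and close the file system output
    // stream" surfaces. Bucket path and interval are placeholders.
    import org.apache.flink.runtime.state.filesystem.FsStateBackend;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class S3aCheckpointExample {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();
            env.enableCheckpointing(60_000);
            // s3a:// is resolved by Hadoop's S3AFileSystem (hadoop-aws + AWS SDK),
            // the layer the thread above suspects for the flush/close failure.
            env.setStateBackend(new FsStateBackend("s3a://my-bucket/flink/checkpoints"));
            env.execute("s3a checkpoint example");
        }
    }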