We are getting this exception occasionally after a job runs for a month or
more.
The files don't seem to be corrupt based on our testing, so we're not sure
what this error means. Task resources and network connectivity seem fine.
How would you debug this?
rage-for-state-backend-td28362.html
>
> [2]
> https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html
> On 13/01/2021 19:20, Billy Bain wrote:
>
> I'm trying to use readTextFile() to access files in S3. I have verified
> the s3 key and secret are clean and the s3 path is
I sent this a little prematurely. Will the streaming process find new
directories under the parent?
If the input path is
s3://foo.bar/
and directories are added daily, should I expect that the newly added
directories+files will get processed?
Thanks!
Wayne
On 2021/01/21 23:20:41, Billy
I have a Streaming process where new directories are added daily in S3.
s3://foo/bar/2021-01-18/data.gz
s3://foo/bar/2021-01-19/data.gz
s3://foo/bar/2021-01-20/data.gz
If I start the process, it won't pick up anything other than the
directories visible when the process was started.
The textInput
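One way to pick up directories created after the job starts is the readFile()
variant with PROCESS_CONTINUOUSLY, which re-scans the path on an interval
(readTextFile() scans it only once). A minimal sketch; the bucket path and
the 60-second interval are illustrative:

```java
import org.apache.flink.api.java.io.TextInputFormat;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.FileProcessingMode;

public class ContinuousReadSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        TextInputFormat format = new TextInputFormat(new Path("s3://foo/bar/"));
        // Descend into dated subdirectories like s3://foo/bar/2021-01-20/
        format.setNestedFileEnumeration(true);

        // PROCESS_CONTINUOUSLY re-scans the path every 60s, so directories
        // added after startup should be discovered and read.
        env.readFile(format, "s3://foo/bar/",
                     FileProcessingMode.PROCESS_CONTINUOUSLY, 60_000)
           .print();

        env.execute("continuous-read-sketch");
    }
}
```

One caveat with this mode: a file whose modification time changes is
reprocessed in full, so appending to already-ingested files can produce
duplicates.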
I'm trying to use readTextFile() to access files in S3. I have verified the
s3 key and secret are clean and the s3 path is similar to
com.somepath/somefile. (the names changed to protect the guilty)
Any idea what I'm missing?
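One thing worth checking: readTextFile() needs a fully qualified URI
including the s3:// scheme, and an S3 filesystem plugin (flink-s3-fs-hadoop
or flink-s3-fs-presto) on the plugins path; a bare "com.somepath/somefile"
is resolved against the default filesystem. A sketch with placeholder names:

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class S3ReadSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Note the explicit s3:// scheme; bucket and key are placeholders.
        env.readTextFile("s3://some-bucket/some/key").print();

        env.execute("s3-read-sketch");
    }
}
```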
2021-01-13 12:12:43,836 DEBUG
org.apache.flink.streaming.api.functions.
android.sinkTo(sink);
env.execute("zMarket Android");
}
}
On Tue, Jan 5, 2021 at 5:59 AM Arvid Heise wrote:
> Hi Billy,
>
> the exception is happening on the output side. Input side looks fine.
> Could you maybe post more information about the sink?
>
I am trying to implement a class that will work similarly to AvroFileFormat.
This tar archive has a very specific format: it contains only one file, and
that file is line-delimited JSON.
I get this exception, but all the data is written to the temporary files. I
have checked that my code isn't clo
We have an input file that is tarred and compressed to 12 GB; it is about
50 GB uncompressed.
With readTextFile(), I see Flink decompress the file, but it doesn't seem
to handle the untar step. It's just a single file inside the archive. (We
don't control the input format.)
foo.tar.gz 12gb
foo.tar 50gb
then un
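Flink's built-in decompression handles the .gz layer, but there is no
built-in untar step, so one workaround is to unwrap the single tar entry
yourself before handing the bytes to your parser. A JDK-only sketch,
assuming the archive holds exactly one regular file as described above
(tar header checksums are not validated here):

```java
import java.io.DataInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

public class SingleEntryTar {

    /**
     * Returns a stream over the first (assumed only) entry of a tar stream,
     * e.g. one already wrapped in a GZIPInputStream for a .tar.gz file.
     */
    public static InputStream firstEntry(InputStream tar) throws IOException {
        // Every tar entry starts with a 512-byte header block.
        byte[] header = new byte[512];
        new DataInputStream(tar).readFully(header);

        // The entry size is an octal ASCII string at offset 124, 12 bytes
        // wide; trim() strips the trailing NUL/space padding. A long is
        // used so multi-GB entries don't overflow.
        long size = Long.parseLong(
                new String(header, 124, 12, "US-ASCII").trim(), 8);

        final long[] remaining = {size};
        // Bound the underlying stream to the entry size so callers stop
        // before the tar padding and trailing zero blocks.
        return new FilterInputStream(tar) {
            @Override public int read() throws IOException {
                if (remaining[0] <= 0) return -1;
                int b = super.read();
                if (b >= 0) remaining[0]--;
                return b;
            }
            @Override public int read(byte[] buf, int off, int len)
                    throws IOException {
                if (remaining[0] <= 0) return -1;
                int n = super.read(buf, off,
                        (int) Math.min(len, remaining[0]));
                if (n > 0) remaining[0] -= n;
                return n;
            }
        };
    }
}
```

This keeps everything streaming, so the 50 GB entry never has to fit in
memory; for anything beyond this single-entry case, a real tar library
(e.g. Apache Commons Compress) is the safer choice.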
Of course I found it shortly after submitting my query.
compile group: 'org.apache.flink', name: 'flink-connector-files', version:
'1.12.0'
On 2020/12/24 15:57:20, Billy Bain wrote:
> I can't seem to find the org.apache.flink.connector.file.sink
I can't seem to find the org.apache.flink.connector.file.sink.FileSink
class.
I can find the StreamingFileSink, but not FileSink referenced here:
https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/connectors/file_sink.html
Am I missing a dependency?
compile group: 'org.apache.f
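For reference, a minimal sketch of wiring up FileSink once
flink-connector-files (1.12.0) is on the classpath; the output path and
input elements are illustrative:

```java
import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.connector.file.sink.FileSink;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FileSinkSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // FileSink is the unified-Sink-API successor to StreamingFileSink.
        FileSink<String> sink = FileSink
                .forRowFormat(new Path("/tmp/out"),
                              new SimpleStringEncoder<String>("UTF-8"))
                .build();

        // sinkTo() (not addSink()) pairs with the new Sink API.
        env.fromElements("a", "b", "c").sinkTo(sink);

        env.execute("filesink-sketch");
    }
}
```

Note that in streaming mode FileSink only commits (finalizes) files on
checkpoints, so enable checkpointing to see completed output files.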
I am new to Flink and am trying to process a file and write it out
formatted as JSON.
This is a much simplified version.
public class AndroidReader {
public static void main(String[] args) throws Exception {
final StreamExecutionEnvironment env =
StreamExecutionEnvironment.getExecutionEnvironment();