[https://issues.apache.org/jira/browse/FLINK-18023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17130463#comment-17130463]
Jingsong Lee commented on FLINK-18023:
--------------------------------------
Batch read & write (a SQL sketch follows the list):
# Write multiple times, then read back.
# Write with overwrite ({{INSERT OVERWRITE}}).
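A minimal SQL sketch of the two batch scenarios; the table name, path, and CSV format here are placeholder choices, not the only ones to test:
{code:sql}
-- Batch filesystem table (path and format are placeholders).
CREATE TABLE batch_sink (
  id INT,
  v STRING
) WITH (
  'connector' = 'filesystem',
  'path' = 'file:///tmp/batch_sink',
  'format' = 'csv'
);

-- 1) Write multiple times, then read back: expect all inserted rows.
INSERT INTO batch_sink VALUES (1, 'a');
INSERT INTO batch_sink VALUES (2, 'b');
SELECT * FROM batch_sink;

-- 2) INSERT OVERWRITE (batch mode only) replaces the previous
--    contents, so only (3, 'c') should remain afterwards.
INSERT OVERWRITE batch_sink VALUES (3, 'c');
SELECT * FROM batch_sink;
{code}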
Streaming read: covered by FLINK-18237.
Streaming sink test:
# Create a {{datagen}} table as the source. It generates 5 records per second and 1000 records in total (DDL sketched after this list).
# Insert the source table into a partitioned filesystem table. Use {{process-time}} as the commit trigger (sketched after this list). Verify partitions are committed as the job progresses. Verify a success file is written for each partition. Verify the number of records after the job finishes.
# Insert the source table into a partitioned filesystem table. Use {{partition-time}} as the commit trigger, with the timestamp extracted from a single partition column (sketched after this list). Verify a success file is written for each partition. Verify the number of records after the job finishes.
# Insert the source table into a non-partitioned filesystem table. Verify the number of records after the job finishes.
# Insert the source table into a partitioned filesystem table. Use {{process-time}} as the commit trigger. Kill one TaskManager (TM) during job execution. Verify the number of records after the job finishes.
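For step 1, a possible {{datagen}} DDL; in 1.11 the total row count is bounded by giving one field a {{sequence}} with start/end bounds (table and field names are placeholders):
{code:sql}
-- Bounded datagen source: 5 rows/second, exactly 1000 rows via the
-- sequence bounds on id. The computed column and watermark are only
-- needed for the partition-time scenario in step 3.
CREATE TABLE datagen_source (
  id INT,
  ts AS LOCALTIMESTAMP,
  WATERMARK FOR ts AS ts
) WITH (
  'connector' = 'datagen',
  'rows-per-second' = '5',
  'fields.id.kind' = 'sequence',
  'fields.id.start' = '1',
  'fields.id.end' = '1000'
);
{code}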
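For step 2, a possible {{process-time}} sink. Checkpointing must be enabled, since the streaming filesystem sink only makes files visible on checkpoints; minute-granularity partitions are an assumption here so that several partitions commit during the roughly 200-second run:
{code:sql}
-- Partitioned sink committed on processing time; the success-file
-- policy writes a _SUCCESS file per committed partition.
CREATE TABLE fs_proctime_sink (
  id INT,
  dt STRING,
  hm STRING
) PARTITIONED BY (dt, hm) WITH (
  'connector' = 'filesystem',
  'path' = 'file:///tmp/fs_proctime_sink',
  'format' = 'csv',
  'sink.partition-commit.trigger' = 'process-time',
  'sink.partition-commit.delay' = '0s',
  'sink.partition-commit.policy.kind' = 'success-file'
);

INSERT INTO fs_proctime_sink
SELECT id, DATE_FORMAT(ts, 'yyyy-MM-dd'), DATE_FORMAT(ts, 'HH-mm')
FROM datagen_source;
{code}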
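For step 3, a possible {{partition-time}} variant with a single partition column. The default extractor resolves {{partition.time-extractor.timestamp-pattern}} against the partition value, and the result must be in 'yyyy-MM-dd HH:mm:ss' form; the minute granularity is again an assumption:
{code:sql}
-- Single minute-granularity partition column; the pattern pads it to a
-- full timestamp. A partition commits once the watermark passes its
-- extracted time plus the configured delay.
CREATE TABLE fs_ptime_sink (
  id INT,
  dt STRING
) PARTITIONED BY (dt) WITH (
  'connector' = 'filesystem',
  'path' = 'file:///tmp/fs_ptime_sink',
  'format' = 'csv',
  'partition.time-extractor.timestamp-pattern' = '$dt:00',
  'sink.partition-commit.trigger' = 'partition-time',
  'sink.partition-commit.delay' = '0s',
  'sink.partition-commit.policy.kind' = 'success-file'
);

INSERT INTO fs_ptime_sink
SELECT id, DATE_FORMAT(ts, 'yyyy-MM-dd HH:mm')
FROM datagen_source;
{code}
Steps 4 and 5 reuse the same pieces: step 4 drops the PARTITIONED BY clause, and step 5 re-runs step 2 while killing one TaskManager process mid-job.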
> E2E tests manually for new filesystem connector
> -----------------------------------------------
>
> Key: FLINK-18023
> URL: https://issues.apache.org/jira/browse/FLINK-18023
> Project: Flink
> Issue Type: Sub-task
> Components: Connectors / FileSystem, Tests
> Affects Versions: 1.11.0
> Reporter: Danny Chen
> Assignee: Jingsong Lee
> Priority: Blocker
> Fix For: 1.11.0
>
>
> - test all supported formats
> - test compatibility with Hive
> - test streaming sink