[
https://issues.apache.org/jira/browse/HUDI-184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16981964#comment-16981964
]
Vinoth Chandar commented on HUDI-184:
-------------------------------------
>> Yes, it seems the ingestion and compaction steps are independent of each
>> other? We just let them exist in the same Spark job? If so, it's also not a
>> problem in Flink.
Yes, they are independent, and compaction can run concurrently while ingestion
is running. In Spark, we can run two Spark jobs (i.e., the jobs you see in the
Jobs tab of the Spark UI) in parallel within the same physical set of
executors. Can Flink allow us to do this?
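The pattern described above amounts to triggering two independent actions from separate driver threads and letting the scheduler run both jobs on the same executors. A minimal sketch of that scheduling pattern, using plain Python threads with hypothetical stand-in functions (`run_ingestion`, `run_compaction`) in place of real Spark actions:

```python
from concurrent.futures import ThreadPoolExecutor
import time

# Hypothetical stand-ins for the two independent operations. In a real Spark
# driver, each function would trigger its own action (e.g., a write), and
# Spark's scheduler would run both resulting jobs on the shared executors.
def run_ingestion():
    time.sleep(0.2)  # simulate ingestion work
    return "ingestion done"

def run_compaction():
    time.sleep(0.2)  # simulate compaction work
    return "compaction done"

# Submitting both from separate driver threads lets them proceed concurrently
# instead of one waiting for the other to finish.
start = time.monotonic()
with ThreadPoolExecutor(max_workers=2) as pool:
    ingest = pool.submit(run_ingestion)
    compact = pool.submit(run_compaction)
    results = [ingest.result(), compact.result()]
elapsed = time.monotonic() - start  # ~0.2s, not ~0.4s, if truly concurrent
```

In an actual Spark driver the same idea applies: each thread calls its own action, and Spark's fair scheduler (`spark.scheduler.mode=FAIR`) can be enabled so the two concurrent jobs share executor resources rather than queueing FIFO.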
> Integrate Hudi with Apache Flink
> --------------------------------
>
> Key: HUDI-184
> URL: https://issues.apache.org/jira/browse/HUDI-184
> Project: Apache Hudi (incubating)
> Issue Type: New Feature
> Components: Write Client
> Reporter: vinoyang
> Assignee: vinoyang
> Priority: Major
>
> Apache Flink is a popular stream processing engine.
> Integrating Hudi with Flink would be valuable work.
> The discussion mailing thread is here:
> [https://lists.apache.org/api/source.lua/1533de2d4cd4243fa9e8f8bf057ffd02f2ac0bec7c7539d8f72166ea@%3Cdev.hudi.apache.org%3E]
--
This message was sent by Atlassian Jira
(v8.3.4#803005)