Flink SQL reads and writes Hudi and synchronizes Hive tables via the Hudi
HMS catalog. If a Hive database contains both Parquet tables and Hudi
tables, two different Flink catalogs have to be registered, which causes
problems and is not very friendly for data analysts. Spark does not have
this problem: with the spark_catalog you can access both Hudi and Parquet
tables. I am not sure whether this should be solved in Hudi or in Flink?
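For context, this is a minimal sketch of the two catalog registrations that
are currently needed in Flink SQL (the catalog names and paths below are
just placeholder examples, based on the Hudi and Flink documentation):

    -- Hudi HMS catalog, only sees the Hudi tables
    CREATE CATALOG hudi_catalog WITH (
      'type' = 'hudi',
      'mode' = 'hms',
      'hive.conf.dir' = '/etc/hive/conf'   -- directory containing hive-site.xml (example path)
    );

    -- Separate Hive catalog for the plain Parquet tables in the same database
    CREATE CATALOG hive_catalog WITH (
      'type' = 'hive',
      'hive-conf-dir' = '/etc/hive/conf'   -- example path
    );

So to query both table types in the same Hive database, the analyst has to
switch between hudi_catalog and hive_catalog, instead of using a single
catalog as in Spark.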
