JoshuaZhuCN commented on issue #7322: URL: https://github.com/apache/hudi/issues/7322#issuecomment-1350311469
> > @alexeykudinkin I don't understand what "write into the table by its id" means. Do you mean just using SQL like `insert into`/`update`/`delete from db.table` to write data?
>
> Correct. You can do the same from Spark DS.

> @alexeykudinkin I think the query engine should not restrict how the data being queried was written. Even for tables created via Spark SQL, the query engine should be able to read new data regardless of whether it was written through the Spark DataSource, Spark SQL, the Java client, Flink SQL, or the Flink Streaming API, without requiring users to perform extra operations for each write path when using the query engine.
>
> @alexeykudinkin At present, the problem I encounter is not only that data written through the Spark DataSource cannot be read afterwards, but also that data written by Flink with Hive sync enabled cannot be read through Spark SQL. In other words, a Spark SQL query cannot immediately read new data unless that data was written via SQL. Therefore, I think this is a problem that needs to be solved.
>
> Interesting. Can you please create another issue specifically for this one, as this hardly could be related?

I'll verify it again.
