wangxianghu commented on a change in pull request #1727:
URL: https://github.com/apache/hudi/pull/1727#discussion_r439159813



##########
File path: 
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/client/AbstractHoodieClient.java
##########
@@ -19,52 +19,53 @@
 package org.apache.hudi.client;
 
 import org.apache.hadoop.conf.Configuration;
-import org.apache.hudi.client.embedded.EmbeddedTimelineService;
-import org.apache.hudi.client.utils.ClientUtils;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hudi.client.embedded.AbstractEmbeddedTimelineService;
+import org.apache.hudi.client.util.ClientUtils;
+import org.apache.hudi.common.HoodieEngineContext;
 import org.apache.hudi.common.fs.FSUtils;
 import org.apache.hudi.common.table.HoodieTableMetaClient;
 import org.apache.hudi.common.util.Option;
 import org.apache.hudi.config.HoodieWriteConfig;
+import org.slf4j.Logger;

Review comment:
       Hi @vinothchandar, thanks for the feedback!
   Yes, HoodieEngineContext is thin; it holds only engine-agnostic state. Spark-specific 
logic goes into HoodieSparkEngineContext and Flink-specific logic into 
HoodieFlinkEngineContext, both of which extend HoodieEngineContext.
   
   Since this PR is already huge, we don't want to make too many changes, so we made no 
API or functionality changes to the Spark RDD client; we only abstracted it. BTW, I have 
already verified this approach on the Flink engine (replacing RDD with List), and it is doable.
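   To make the shape of the abstraction concrete, here is a minimal, hedged sketch of the pattern described above. The class names `HoodieEngineContext`, `HoodieSparkEngineContext`, and `HoodieFlinkEngineContext` come from this discussion; the `map` method and its signature are hypothetical, purely to illustrate how an engine-agnostic hook can be backed by an RDD on Spark or a plain `List` on Flink:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

// Thin base context: holds only engine-agnostic state and declares
// engine-agnostic hooks. (The `map` hook here is hypothetical.)
abstract class HoodieEngineContext {
  public abstract <I, O> List<O> map(List<I> data, Function<I, O> func);
}

// Flink-flavoured context: backs the hook with a plain List
// (the "replace RDD with List" idea mentioned above).
class HoodieFlinkEngineContext extends HoodieEngineContext {
  @Override
  public <I, O> List<O> map(List<I> data, Function<I, O> func) {
    return data.stream().map(func).collect(Collectors.toList());
  }
}
```

A Spark-flavoured subclass would implement the same hook by delegating to `JavaSparkContext`/RDD operations instead, which is what lets the common client code stay unchanged.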
   
   I'll roll the logging back to log4j.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

