vinothchandar commented on a change in pull request #2651:
URL: https://github.com/apache/hudi/pull/2651#discussion_r601170320



##########
File path: hudi-common/src/main/java/org/apache/hudi/common/table/HoodieTableConfig.java
##########
@@ -57,6 +58,7 @@
   public static final String HOODIE_TABLE_TYPE_PROP_NAME = "hoodie.table.type";
   public static final String HOODIE_TABLE_VERSION_PROP_NAME = "hoodie.table.version";
   public static final String HOODIE_TABLE_PRECOMBINE_FIELD = "hoodie.table.precombine.field";
+  public static final String HOODIE_TABLE_PARTITION_COLUMNS = "hoodie.table.partition.columns";

Review comment:
       I think we should persist the key generator class, and not the partition columns themselves? Let me think over this more.
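A minimal sketch of what persisting the key generator class could look like, using plain `java.util.Properties`. The property name `hoodie.table.keygenerator.class` is an assumption for illustration only, not a key confirmed by this PR:

```java
import java.util.Properties;

public class KeyGenPersistSketch {
  // Hypothetical property key; the real name would live in HoodieTableConfig.
  static final String KEYGEN_CLASS_PROP = "hoodie.table.keygenerator.class";

  public static void main(String[] args) {
    Properties props = new Properties();
    // Persist the key generator class rather than the raw partition columns,
    // so the partition-path derivation logic travels with the table config.
    props.setProperty(KEYGEN_CLASS_PROP, "org.apache.hudi.keygen.SimpleKeyGenerator");
    System.out.println(props.getProperty(KEYGEN_CLASS_PROP));
  }
}
```

The upside of this approach is that key-generator-driven partition paths (timestamps, composite keys) stay reproducible without re-specifying write options on every read.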

##########
File path: hudi-common/src/main/java/org/apache/hudi/common/table/HoodieTableConfig.java
##########
@@ -193,6 +195,14 @@ public String getPreCombineField() {
     return props.getProperty(HOODIE_TABLE_PRECOMBINE_FIELD);
   }
 
+  public Option<String[]> getPartitionColumns() {

Review comment:
       why not use an empty array to signify no partition columns?
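A self-contained sketch of the empty-array alternative the comment suggests, using plain `java.util.Properties`; the helper and its behavior are illustrative, not the Hudi API:

```java
import java.util.Properties;

public class PartitionColumnsSketch {
  static final String PARTITION_COLUMNS_PROP = "hoodie.table.partition.columns";

  // Return an empty array when the property is absent, instead of wrapping
  // the result in an Option: callers can iterate uniformly without unwrapping.
  static String[] getPartitionColumns(Properties props) {
    String value = props.getProperty(PARTITION_COLUMNS_PROP);
    return (value == null || value.isEmpty()) ? new String[0] : value.split(",");
  }

  public static void main(String[] args) {
    Properties props = new Properties();
    System.out.println(getPartitionColumns(props).length);  // property absent

    props.setProperty(PARTITION_COLUMNS_PROP, "year,month");
    System.out.println(getPartitionColumns(props).length);
  }
}
```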

##########
File path: hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/hudi/DefaultSource.scala
##########
@@ -79,39 +81,52 @@ class DefaultSource extends RelationProvider
     val allPaths = path.map(p => Seq(p)).getOrElse(Seq()) ++ readPaths
 
     val fs = FSUtils.getFs(allPaths.head, sqlContext.sparkContext.hadoopConfiguration)
-    val globPaths = HoodieSparkUtils.checkAndGlobPathIfNecessary(allPaths, fs)
-
-    val tablePath = DataSourceUtils.getTablePath(fs, globPaths.toArray)
+    // Use the HoodieFileIndex only if 'path' is specified and contains no "*",
+    // and READ_PATHS_OPT_KEY is not specified.
+    // Otherwise, fall back to the original way of reading the Hudi table.
+    val useHoodieFileIndex = path.isDefined && !path.get.contains("*") &&

Review comment:
       Can we guard this feature itself with a data source option as well? i.e.
   
   `val useHoodieFileIndex = hoodieFileIndexEnabled && path.isDefined && !path.get.contains("*") && ..`
   

##########
File path: hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/hudi/HoodieSparkSqlWriter.scala
##########
@@ -112,12 +112,15 @@ private[hudi] object HoodieSparkSqlWriter {
         val archiveLogFolder = parameters.getOrElse(
           HoodieTableConfig.HOODIE_ARCHIVELOG_FOLDER_PROP_NAME, "archived")
 
+        val partitionColumns = parameters.getOrElse(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY, null)

Review comment:
       The partition path can also be generated from a key generator, right? We need to think this case through more.

##########
File path: hudi-common/src/main/java/org/apache/hudi/common/fs/FSUtils.java
##########
@@ -276,6 +276,13 @@ public static void processFiles(FileSystem fs, String basePathStr, Function<File
     }
   }
 
+  public static FileStatus[] getFilesInPartition(HoodieEngineContext engineContext, HoodieMetadataConfig metadataConfig,
+                                                 String basePathStr, Path partitionPath) throws IOException {
+    HoodieTableMetadata tableMetadata = HoodieTableMetadata.create(engineContext, metadataConfig, basePathStr,

Review comment:
       Needs to be closed; otherwise it can leak connections.
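A minimal, self-contained sketch of the fix this comment asks for: scoping the metadata handle with try-with-resources so it is always released. `TableMetadataStub` stands in for `HoodieTableMetadata`, assuming the real class exposes a `close()` (i.e. is `AutoCloseable`):

```java
public class CloseMetadataSketch {
  // Stand-in for HoodieTableMetadata; assumes the real class is AutoCloseable.
  static class TableMetadataStub implements AutoCloseable {
    boolean closed = false;

    String[] getFilesInPartition(String partitionPath) {
      return new String[] {partitionPath + "/file1.parquet"};
    }

    @Override
    public void close() {
      closed = true;  // the real implementation would release connections here
    }
  }

  public static void main(String[] args) {
    TableMetadataStub metadata = new TableMetadataStub();
    // try-with-resources guarantees close() runs even if the listing throws,
    // preventing the connection leak called out in the review.
    try (TableMetadataStub m = metadata) {
      System.out.println(m.getFilesInPartition("2021/03/25")[0]);
    }
    System.out.println(metadata.closed);
  }
}
```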




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

