pengzhiwei2018 commented on a change in pull request #2651: URL: https://github.com/apache/hudi/pull/2651#discussion_r592039402
########## File path: hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/hudi/HoodieFileIndex.scala ##########
@@ -0,0 +1,279 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hudi
+
+import scala.collection.JavaConverters._
+import org.apache.hadoop.fs.{FileStatus, FileSystem, Path}
+import org.apache.hudi.common.fs.FSUtils
+import org.apache.hudi.common.model.HoodieBaseFile
+import org.apache.hudi.common.table.{HoodieTableMetaClient, TableSchemaResolver}
+import org.apache.hudi.common.table.view.HoodieTableFileSystemView
+import org.apache.spark.internal.Logging
+import org.apache.spark.sql.catalyst.{InternalRow, expressions}
+import org.apache.spark.sql.SparkSession
+import org.apache.spark.sql.avro.SchemaConverters
+import org.apache.spark.sql.catalyst.expressions.{AttributeReference, BoundReference, Expression, InterpretedPredicate}
+import org.apache.spark.sql.catalyst.util.{CaseInsensitiveMap, DateTimeUtils}
+import org.apache.spark.sql.execution.datasources.{FileIndex, PartitionDirectory, PartitionUtils}
+import org.apache.spark.sql.internal.SQLConf
+import org.apache.spark.sql.types.StructType
+
+import scala.collection.mutable
+import scala.collection.mutable.ListBuffer
+
+/**
+ * A file index which supports partition pruning for Hudi snapshot and
+ * read-optimized queries.
+ * Main steps to get the file list for a query:
+ * 1. Load all files and partition values from the table path.
+ * 2. Prune the partitions by the partition filter condition.
+ *
+ * Note:
+ * Only when URL_ENCODE_PARTITIONING_OPT_KEY is enabled can we store the partition
+ * columns in hoodie.properties from HoodieSqlWriter when writing the table, so
+ * that queries can benefit from partition pruning.
+ */
+case class HoodieFileIndex(
+    spark: SparkSession,
+    basePath: String,

Review comment:

Hi @umehrot2, IMO supporting reads only from the `basePath` simplifies the reading logic: one table clearly maps to one path. If we want to support multiple paths, we must validate that all the paths belong to the same table, which makes things more complex for the `HoodieFileIndex` mode. And the need for a multi-path query can be met through partition pruning in `HoodieFileIndex`, e.g.

> For ex: Customer may not pass the actual path, but in hoodie.datasource.read.paths pass several partition paths: s3://basepath/partition1, s3://basepath/partition2

We can query the table with a partition filter condition to achieve the same goal, like this:

`spark.read.format("hudi").load("s3://basepath").filter("pt = 'partition1' or pt = 'partition2'")`

This is my opinion, WDYT?
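To make the alternative concrete, here is a minimal, untested sketch of the two read styles. The base path `s3://basepath`, the partition column `pt`, and the partition values are assumptions borrowed from the example above, not code from this PR:

```scala
import org.apache.spark.sql.SparkSession

object PartitionPruneSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hudi-partition-prune-sketch")
      .getOrCreate()

    // Style 1: pass several partition paths explicitly.
    // spark.read.format("hudi")
    //   .option("hoodie.datasource.read.paths",
    //     "s3://basepath/partition1,s3://basepath/partition2")
    //   .load()

    // Style 2: read the single base path and let HoodieFileIndex prune
    // the partitions from the pushed-down filter condition.
    val df = spark.read.format("hudi")
      .load("s3://basepath")
      .filter("pt = 'partition1' or pt = 'partition2'")

    df.show()
  }
}
```

If the filter only touches partition columns, the second style should list and read the same files as the first while keeping the one-table-one-path invariant.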
