Github user sureshthalamati commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19136#discussion_r137453919

--- Diff: sql/core/src/main/java/org/apache/spark/sql/sources/v2/reader/DataSourceV2Reader.java ---
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.sources.v2.reader;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.stream.Collectors;
+
+import org.apache.spark.annotation.Experimental;
+import org.apache.spark.annotation.InterfaceStability;
+import org.apache.spark.sql.Row;
+import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder;
+import org.apache.spark.sql.catalyst.encoders.RowEncoder;
+import org.apache.spark.sql.catalyst.expressions.UnsafeRow;
+import org.apache.spark.sql.types.StructType;
+
+/**
+ * The main interface and minimal requirement for a data source reader. Implementations must
+ * at least provide the full scan logic; users can mix in more interfaces to implement scan
+ * optimizations like column pruning, filter push-down, etc.
+ *
+ * There are mainly two kinds of scan optimizations:
+ *   1. push operators down to the data source, e.g., column pruning, filter push-down, etc.
+ *   2. propagate information up to Spark, e.g., report statistics, report ordering, etc.
+ * Spark first applies all operator push-down optimizations that this data source supports. Then
+ * Spark collects information this data source provides for further optimizations. Finally Spark
+ * issues the scan request and does the actual data reading.
+ */
+public abstract class DataSourceV2Reader {
+
+  /**
+   * Returns the actual schema of this data source reader, which may be different from the
+   * physical schema of the underlying storage, as column pruning or other optimizations may
+   * have been applied.
+   */
+  public abstract StructType readSchema();
+
+  /**
+   * Returns a list of read tasks. Each task is responsible for outputting data for one RDD
+   * partition, which means the number of tasks returned here is the same as the number of RDD
+   * partitions this scan outputs.
+   *
+   * Note that this may not be a full scan if the data source reader mixes in other optimization
+   * interfaces like column pruning, filter push-down, etc. These optimizations are applied
+   * before Spark issues the scan request.
+   */
+  protected abstract List<ReadTask<Row>> createReadTasks();
+
+  /**
+   * Inside Spark, the input rows will be converted to `UnsafeRow`s before processing. To avoid
+   * this conversion, implementations can override this method and output `UnsafeRow`s directly.
+   * Note that this is an experimental and unstable interface, as `UnsafeRow` is not public and
+   * may change in future Spark versions.
+   *
+   * If implementations override this method, `createReadTasks` will never be called, and they
+   * can simply throw an exception in `createReadTasks`.
+   */
+  @Experimental
+  @InterfaceStability.Unstable
+  public List<ReadTask<UnsafeRow>> createUnsafeRowReadTasks() {
--- End diff --

I really like the new API's flexibility to implement the different kinds of support. Considering that `UnsafeRow` is unstable, would it be possible to move `createUnsafeRowReadTasks` to a separate interface? That way a data source could provide two implementations, one with `Row` and another with `UnsafeRow`, and choose between them easily based on the Spark version.
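For illustration, here is a minimal sketch of the kind of separate mix-in this comment suggests. The interface name `SupportsScanUnsafeRow` and the idea that Spark would detect the capability via an `instanceof` check are assumptions for the sketch, not part of this PR:

```java
package org.apache.spark.sql.sources.v2.reader;

import java.util.List;

import org.apache.spark.annotation.Experimental;
import org.apache.spark.annotation.InterfaceStability;
import org.apache.spark.sql.catalyst.expressions.UnsafeRow;

/**
 * Hypothetical mix-in that isolates the unstable UnsafeRow scan path from the
 * stable DataSourceV2Reader base class. A reader that can produce UnsafeRows
 * directly would implement this interface in addition to extending
 * DataSourceV2Reader; readers that only produce Rows would simply not mix it in.
 */
@Experimental
@InterfaceStability.Unstable
public interface SupportsScanUnsafeRow {

  /**
   * Returns read tasks that output UnsafeRows directly, skipping the
   * Row-to-UnsafeRow conversion Spark would otherwise perform.
   */
  List<ReadTask<UnsafeRow>> createUnsafeRowReadTasks();
}
```

Spark's scan planning could then check `reader instanceof SupportsScanUnsafeRow` and fall back to `createReadTasks` otherwise, so the base class would stay free of unstable types.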