Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21145#discussion_r186927701
  
    --- Diff: sql/core/src/main/java/org/apache/spark/sql/sources/v2/reader/DataSourceReader.java ---
    @@ -76,5 +76,5 @@
        * If this method fails (by throwing an exception), the action would fail and no Spark job was
        * submitted.
        */
    -  List<DataReaderFactory<Row>> createDataReaderFactories();
    +  List<InputPartition<Row>> planInputPartitions();
    --- End diff --
    
    In the Hadoop world there is `InputFormat.getSplits`; shall we follow that convention and use `getInputPartitions` here?
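    
    To make the renamed method concrete, here is a minimal sketch of a reader under the new name. Only `planInputPartitions()` itself comes from the diff above; the companion types and methods used here (`InputPartitionReader`, `createPartitionReader()`, `RowFactory`) and the `RangeReader`/`RangePartition` classes are assumptions for illustration, not part of this patch.
    
    ```java
    import java.util.Arrays;
    import java.util.List;
    
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.RowFactory;
    import org.apache.spark.sql.sources.v2.reader.DataSourceReader;
    import org.apache.spark.sql.sources.v2.reader.InputPartition;
    import org.apache.spark.sql.sources.v2.reader.InputPartitionReader;
    import org.apache.spark.sql.types.StructType;
    
    // Hypothetical reader over a fixed integer range, split into two partitions.
    public class RangeReader implements DataSourceReader {
    
      @Override
      public StructType readSchema() {
        return new StructType().add("value", "int");
      }
    
      @Override
      public List<InputPartition<Row>> planInputPartitions() {
        // Analogous to InputFormat.getSplits: decide how the data is split,
        // returning one InputPartition per split. No Spark job runs yet.
        return Arrays.asList(new RangePartition(0, 5), new RangePartition(5, 10));
      }
    
      // One split of the data; serialized and sent to executors.
      static class RangePartition implements InputPartition<Row> {
        private final int start;
        private final int end;
    
        RangePartition(int start, int end) {
          this.start = start;
          this.end = end;
        }
    
        @Override
        public InputPartitionReader<Row> createPartitionReader() {
          return new InputPartitionReader<Row>() {
            private int current = start - 1;
    
            @Override
            public boolean next() {
              current += 1;
              return current < end;
            }
    
            @Override
            public Row get() {
              return RowFactory.create(current);
            }
    
            @Override
            public void close() { }
          };
        }
      }
    }
    ```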


---
