GitHub user cloud-fan opened a pull request: https://github.com/apache/spark/pull/23086
[SPARK-25528][SQL] Data source v2 API refactor (batch read)

## What changes were proposed in this pull request?

This is the first step of the data source v2 API refactor [proposal](https://docs.google.com/document/d/1uUmKCpWLdh9vHxP7AWJ9EgbwB_U6T3EJYNjhISGmiQg/edit?usp=sharing).

It adds the new API for batch read without removing the old APIs, as they are still needed for streaming sources. More concretely, it adds:

1. `TableProvider`, which works like an anonymous catalog
2. `Table`, which represents a structured data set
3. `ScanBuilder` and `Scan`, a logical representation of a data source scan
4. `Batch`, a physical representation of a data source batch scan

## How was this patch tested?

Existing tests.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/cloud-fan/spark refactor-batch

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/23086.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #23086

----

commit f06b5c58b1a890d425abd575fa6f4c40da7c4b3d
Author: Wenchen Fan <wenchen@...>
Date: 2018-11-19T11:05:07Z

    data source v2 API refactor (batch read)

----

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
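The layering the PR describes (provider → table → scan builder → scan → batch) can be sketched with simplified stand-in interfaces. This is a minimal illustration of how the four abstractions compose, not the exact signatures added by the patch; the toy in-memory implementation and all names in `main` are hypothetical.

```java
import java.util.Arrays;
import java.util.List;

// Simplified stand-ins for the abstractions described above.
// The real interfaces in the PR carry more parameters (options, schema, etc.).
interface TableProvider {            // works like an anonymous catalog
    Table getTable(String path);
}

interface Table {                    // represents a structured data set
    String name();
    ScanBuilder newScanBuilder();
}

interface ScanBuilder {              // builder for a logical scan
    Scan build();
}

interface Scan {                     // logical representation of a scan
    Batch toBatch();                 // specialize to the physical batch form
}

interface Batch {                    // physical batch scan: planned partitions
    List<List<String>> planInputPartitions();
}

public class DataSourceV2Sketch {
    // A toy in-memory source showing how the pieces chain together.
    static TableProvider provider = path -> new Table() {
        public String name() { return path; }
        public ScanBuilder newScanBuilder() {
            // ScanBuilder, Scan, and Batch each have a single method here,
            // so lambdas can express the whole chain.
            return () -> () -> () -> Arrays.asList(
                Arrays.asList("row1", "row2"),   // partition 0
                Arrays.asList("row3"));          // partition 1
        }
    };

    public static void main(String[] args) {
        Table table = provider.getTable("demo");
        Batch batch = table.newScanBuilder().build().toBatch();
        // Number of input partitions the physical scan planned.
        System.out.println(batch.planInputPartitions().size());
    }
}
```

The point of the split is that a `Scan` stays a purely logical description, while `toBatch()` is where a batch-capable source commits to a physical plan; streaming sources would specialize the same logical `Scan` differently, which is why the old streaming APIs are left in place for now.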