rdblue commented on a change in pull request #25651: [SPARK-28948][SQL] Support passing all Table metadata in TableProvider
URL: https://github.com/apache/spark/pull/25651#discussion_r328336988
##########
File path:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/csv/CSVDataSourceV2.scala
##########
@@ -35,9 +38,12 @@ class CSVDataSourceV2 extends FileDataSourceV2 {
     CSVTable(tableName, sparkSession, options, paths, None, fallbackFileFormat)
   }

-  override def getTable(options: CaseInsensitiveStringMap, schema: StructType): Table = {
-    val paths = getPaths(options)
+  override def getTable(
+      schema: StructType,
+      partitions: Array[Transform],
+      properties: util.Map[String, String]): Table = {
+    val paths = getPaths(properties)
     val tableName = getTableName(paths)
-    CSVTable(tableName, sparkSession, options, paths, Some(schema), fallbackFileFormat)
+    CSVTable(tableName, sparkSession, properties, paths, Some(schema), fallbackFileFormat)
Review comment:
`partitions` is ignored in all of these implementations. That looks like a
bug to me.
I can understand not wanting to support partitioning that is passed in right
away. In that case, this should get the partitioning of the `CSVTable` and
check it against the incoming partitioning here and throw an exception if it
doesn't match.
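A minimal sketch of the validation suggested above: compare the partitioning the caller passes in with the table's own partitioning, and throw if they disagree. Note this is illustrative only — `Transform` is modeled here as a plain case class stand-in for Spark's `org.apache.spark.sql.connector.expressions.Transform`, and `validatePartitioning` is a hypothetical helper, not an existing Spark API.

```scala
// Stand-in for org.apache.spark.sql.connector.expressions.Transform,
// so the sketch is self-contained.
final case class Transform(name: String, columns: Seq[String])

// Hypothetical helper: fail fast when the incoming partitioning does not
// match what the table itself declares, instead of silently ignoring it.
def validatePartitioning(
    tablePartitioning: Array[Transform],
    incoming: Array[Transform]): Unit = {
  if (!tablePartitioning.sameElements(incoming)) {
    throw new IllegalArgumentException(
      s"Partitioning mismatch: table declares " +
      s"[${tablePartitioning.mkString(", ")}] but caller passed " +
      s"[${incoming.mkString(", ")}]")
  }
}
```

In a `getTable` implementation this would run before constructing the table, so a mismatched `partitions` argument surfaces as an error rather than being dropped.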
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]