rdblue commented on a change in pull request #25651: [SPARK-28948][SQL] Support passing all Table metadata in TableProvider
URL: https://github.com/apache/spark/pull/25651#discussion_r335224715
##########
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/connector/catalog/V1Table.scala
##########
@@ -51,38 +52,36 @@ private[sql] case class V1Table(v1Table: CatalogTable) extends Table {
     }
   }

-  def catalogTable: CatalogTable = v1Table
-
   lazy val options: Map[String, String] = {
-    v1Table.storage.locationUri match {
+    catalogTable.storage.locationUri match {
       case Some(uri) =>
-        v1Table.storage.properties + ("path" -> uri.toString)
+        catalogTable.storage.properties + ("path" -> uri.toString)
       case _ =>
-        v1Table.storage.properties
+        catalogTable.storage.properties
     }
   }

-  override lazy val properties: util.Map[String, String] = v1Table.properties.asJava
+  override lazy val properties: util.Map[String, String] = catalogTable.properties.asJava
Review comment:
V2 has only table properties, so I think the question is whether we want to
mix options into those table properties directly, or whether we want to prefix
them so they can be recovered. I'm in favor of being able to recover them so we
can construct the catalog table as it would be in the v1 path.
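
For illustration, here is a minimal Scala sketch of the prefix-and-recover idea
(the object name and the `option.` prefix are hypothetical, not names from this
PR; the point is only that prefixed options survive a round trip through the
table properties):

```scala
// Sketch: fold options into v2 table properties under a reserved prefix,
// then split them back out when rebuilding a v1 CatalogTable.
object OptionPropertyRoundTrip {
  // Hypothetical prefix; the actual key naming is what this thread is deciding.
  private val OptionPrefix = "option."

  // Mix options into the table properties under the recoverable prefix.
  def mergeOptions(
      properties: Map[String, String],
      options: Map[String, String]): Map[String, String] = {
    properties ++ options.map { case (k, v) => (OptionPrefix + k) -> v }
  }

  // Recover the original options from a merged property map.
  def extractOptions(properties: Map[String, String]): Map[String, String] = {
    properties.collect {
      case (k, v) if k.startsWith(OptionPrefix) =>
        k.stripPrefix(OptionPrefix) -> v
    }
  }
}
```

With that sketch, `extractOptions(mergeOptions(props, opts)) == opts` holds as
long as ordinary table property keys never start with the prefix, which is the
recoverability property argued for above.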