GitHub user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/11293#discussion_r79246548
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala ---
@@ -127,33 +166,30 @@ abstract class Catalog {
* @param name name of the function
* @param className fully qualified class name, e.g. "org.apache.spark.util.MyFunc"
*/
-case class Function(
- name: String,
- className: String
-)
+case class CatalogFunction(name: String, className: String)
/**
* Storage format, used to describe how a partition or a table is stored.
*/
-case class StorageFormat(
- locationUri: String,
- inputFormat: String,
- outputFormat: String,
- serde: String,
- serdeProperties: Map[String, String]
-)
+case class CatalogStorageFormat(
+ locationUri: Option[String],
+ inputFormat: Option[String],
+ outputFormat: Option[String],
+ serde: Option[String],
+ serdeProperties: Map[String, String])
/**
* A column in a table.
*/
-case class Column(
- name: String,
- dataType: String,
- nullable: Boolean,
- comment: String
-)
+case class CatalogColumn(
+ name: String,
+ // This may be null when used to create views. TODO: make this type-safe; this is left
+ // as a string due to issues in converting Hive varchars to and from SparkSQL strings.
--- End diff --
I don't remember the details, but it's something like this: SparkSQL ignores the
varchar length limit but Hive doesn't (or the other way round?), so strings can end
up truncated. If you try to refactor this using `DataType` you'll see what I mean :)
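
To make the problem concrete, here's a minimal sketch of the lossy round trip. This
is not the actual Spark conversion code; `hiveTypeToCatalyst` and `catalystToHiveType`
are made-up names standing in for whatever the Hive client does:

```scala
// Illustration only, assuming Hive varchars collapse to Catalyst's
// StringType (Catalyst has no exact varchar(n) counterpart here).
import org.apache.spark.sql.types.{DataType, IntegerType, StringType}

// Hypothetical conversion from a Hive type string to a Catalyst DataType.
def hiveTypeToCatalyst(hiveType: String): DataType = hiveType match {
  case t if t.startsWith("varchar") => StringType  // length limit dropped here
  case "int"                        => IntegerType
  case other                        => sys.error(s"unhandled type: $other")
}

// Converting back yields the Catalyst name, not the original Hive type.
def catalystToHiveType(dt: DataType): String = dt.simpleString

assert(catalystToHiveType(hiveTypeToCatalyst("varchar(10)")) == "string")
// "varchar(10)" came back as "string": whichever side enforces the length
// limit now disagrees with the other about truncation.
```

Whichever direction the mismatch runs, once the type has become a plain `StringType`
there's no way to reconstruct the original `varchar(10)`, which is why `CatalogColumn`
keeps the raw type string instead of a `DataType`.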