Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/8026#discussion_r42583431
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/sources/interfaces.scala ---
@@ -544,11 +544,35 @@ abstract class HadoopFsRelation private[sql](maybePartitionSpec: Option[Partitio
   }
   private def discoverPartitions(): PartitionSpec = {
-    val typeInference = sqlContext.conf.partitionColumnTypeInferenceEnabled()
     // We use leaf dirs containing data files to discover the schema.
     val leafDirs = fileStatusCache.leafDirToChildrenFiles.keys.toSeq
-    PartitioningUtils.parsePartitions(leafDirs, PartitioningUtils.DEFAULT_PARTITION_NAME,
-      typeInference)
+    userDefinedPartitionColumns match {
+      case Some(schema) =>
+        val spec = PartitioningUtils.parsePartitions(
+          leafDirs, PartitioningUtils.DEFAULT_PARTITION_NAME, false)
+
+        // Without auto inference, all of value in the `row` should be null or in StringType,
+        // we need to cast into the data type that user specified.
+        def castPartitionValueWithGivenSchema(row: InternalRow, schema: StructType)
+          : InternalRow = {
--- End diff --
In order to avoid the weird wrapping here, I think you might be able to
just leave off the `: InternalRow` here, unless you somehow need it to appease
MiMa.
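
For reference, a minimal sketch of what the helper could look like with the explicit return type left to inference, as suggested above. The imports, the `getUTF8String` access, and the Cast/Literal usage are illustrative assumptions, not the PR's actual code:

```scala
// Sketch only: assumes Catalyst's Cast/Literal expressions and that the
// parsed partition values are null or strings when type inference is off.
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.catalyst.expressions.{Cast, Literal}
import org.apache.spark.sql.types.{StringType, StructType}

// Return type omitted; Scala infers InternalRow from InternalRow.fromSeq.
def castPartitionValueWithGivenSchema(row: InternalRow, schema: StructType) = {
  val casted = schema.fields.zipWithIndex.map { case (field, i) =>
    // Each raw value is a (possibly null) string; cast it to the
    // user-specified data type for that partition column.
    Cast(Literal.create(row.getUTF8String(i), StringType), field.dataType).eval()
  }
  InternalRow.fromSeq(casted)
}
```

Whether the inferred return type is acceptable to MiMa on this branch would still need to be checked there.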