cloud-fan commented on a change in pull request #35229:
URL: https://github.com/apache/spark/pull/35229#discussion_r786714900
##########
File path:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
##########
@@ -434,7 +434,7 @@ case class DataSource(
hs.partitionSchema,
"in the partition schema",
equality)
- DataSourceUtils.verifySchema(hs.fileFormat, hs.dataSchema)
+ DataSourceUtils.checkFieldType(hs.fileFormat, hs.dataSchema)
Review comment:
I've read https://issues.apache.org/jira/browse/PARQUET-1809 , and I don't
think it's related to field name restrictions. It improves the filter API so
that it no longer simply splits the string by dot to get the column path.
Instead, the API allows the caller side to pass a `String[]` directly as the
column path. This also suggests that Parquet itself has no restriction on
field names.
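To illustrate the point (a minimal sketch; the method names below are hypothetical and not the real Parquet `FilterApi`): splitting a column path string on dots is lossy, because a top-level field literally named `a.b` and a nested field `b` inside a struct `a` both render as the string `"a.b"`. Passing the path components as a `String[]` keeps the two cases distinct:

```java
import java.util.Arrays;

public class ColumnPathDemo {
    // Old-style lookup: derive the path by splitting the string on dots.
    // This is ambiguous: a field literally named "a.b" is indistinguishable
    // from nested field "b" under struct "a".
    static String[] pathFromString(String col) {
        return col.split("\\.");
    }

    // New-style lookup (the idea behind PARQUET-1809): the caller passes
    // the path components directly, so no string parsing is needed and
    // field names may contain any character, including dots.
    static String[] pathFromParts(String... parts) {
        return parts;
    }

    public static void main(String[] args) {
        // A top-level field whose name contains a dot:
        System.out.println(Arrays.toString(pathFromString("a.b"))); // [a, b] -- wrong depth
        System.out.println(Arrays.toString(pathFromParts("a.b")));  // [a.b]  -- correct
    }
}
```

With the array-based API the ambiguity disappears, which is why the change implies Parquet does not need to forbid special characters in field names.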
Most databases can use any string as a column name by quoting it, so it's
a really weird design choice for a file format to restrict field names.
Anyway, this is out of Spark's control, but I think Spark should not "guess"
a file format's name restrictions and enforce them on Spark's side. "Failing
fast" is not beneficial enough here compared to the risk of being unable to
read valid files and preventing users from using Spark.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]