Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102657558
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala
---
@@ -202,21 +212,41 @@ private[csv] class UnivocityParser(
}
numMalformedRecords += 1
None
- } else if (options.failFast && schema.length != tokens.length) {
+ } else if (options.failFast && dataSchema.length != tokens.length) {
throw new RuntimeException(s"Malformed line in FAILFAST mode: " +
s"${tokens.mkString(options.delimiter.toString)}")
} else {
- val checkedTokens = if (options.permissive && schema.length > tokens.length) {
- tokens ++ new Array[String](schema.length - tokens.length)
- } else if (options.permissive && schema.length < tokens.length) {
- tokens.take(schema.length)
+ val checkedTokens = if (options.permissive) {
+ // If the number of parsed tokens does not match the expected schema length,
+ // normalize it to the expected length: if shorter, pad with extra tokens at
+ // the tail; if longer, drop the extra tokens.
--- End diff --
Should we also put such malformed records (shorter or longer than the schema)
into a corrupt record field?
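For illustration, a minimal standalone sketch of the permissive-mode token
normalization the diff above describes. This is not the actual UnivocityParser
code; the object `PermissiveTokens`, the method `checkTokens`, and the
parameter `schemaLength` (standing in for `dataSchema.length`) are hypothetical
names chosen for this example.

```scala
// Hedged sketch of permissive-mode CSV token normalization.
// Short rows are padded with null tokens at the tail; long rows are truncated.
object PermissiveTokens {
  def checkTokens(tokens: Array[String], schemaLength: Int): Array[String] = {
    if (tokens.length < schemaLength) {
      // Appending an uninitialized Array[String] pads the tail with nulls.
      tokens ++ new Array[String](schemaLength - tokens.length)
    } else {
      // Drop any tokens beyond the expected schema length.
      tokens.take(schemaLength)
    }
  }

  def main(args: Array[String]): Unit = {
    val padded = checkTokens(Array("a", "b"), 4)
    assert(padded.length == 4 && padded(2) == null && padded(3) == null)

    val truncated = checkTokens(Array("a", "b", "c"), 2)
    assert(truncated.sameElements(Array("a", "b")))
  }
}
```

Whether such normalized-but-mismatched rows should also be captured in a
corrupt record column, as the review comment suggests, is exactly the design
question under discussion here.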