Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102660814
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala ---
@@ -202,21 +212,41 @@ private[csv] class UnivocityParser(
           }
           numMalformedRecords += 1
           None
    -    } else if (options.failFast && schema.length != tokens.length) {
    +    } else if (options.failFast && dataSchema.length != tokens.length) {
           throw new RuntimeException(s"Malformed line in FAILFAST mode: " +
             s"${tokens.mkString(options.delimiter.toString)}")
         } else {
    -      val checkedTokens = if (options.permissive && schema.length > tokens.length) {
    -        tokens ++ new Array[String](schema.length - tokens.length)
    -      } else if (options.permissive && schema.length < tokens.length) {
    -        tokens.take(schema.length)
    +      val checkedTokens = if (options.permissive) {
    +        // If the length of the parsed tokens is not equal to the expected one,
    +        // it makes the lengths match: if shorter, it pads extra tokens at the tail;
    +        // if longer, it drops the extra tokens.
--- End diff --
Yup, I agree in a way, but I guess "it is pretty common that CSV is malformed in this way" (as the analysis team at my company put it). Could we leave it as is for now here?
Let me try to raise a separate JIRA after checking R's `read.csv` and other libraries.
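For reference, the padding/truncation behavior described in the quoted diff comment can be sketched as a standalone helper. This is a hedged illustration, not the actual Spark code: `normalizeTokens` and its signature are hypothetical names for the PERMISSIVE-mode branch shown in the diff, which pads a short token row with nulls at the tail and truncates a long one to the expected schema length.

```scala
// Hypothetical sketch of the PERMISSIVE-mode token normalization from the diff:
// make the parsed token row the same length as the expected schema.
def normalizeTokens(tokens: Array[String], expectedLength: Int): Array[String] = {
  if (tokens.length < expectedLength) {
    // Shorter than expected: pad with nulls at the tail
    // (new Array[String](n) is initialized to null elements).
    tokens ++ new Array[String](expectedLength - tokens.length)
  } else {
    // Longer than (or equal to) expected: drop any extra trailing tokens.
    tokens.take(expectedLength)
  }
}
```

The design point under discussion is that this silently accepts rows of the wrong width, which is why the reviewer questioned it and why the reply notes that such malformed CSV is common in practice.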