Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102667789
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala
---
@@ -202,21 +212,41 @@ private[csv] class UnivocityParser(
}
numMalformedRecords += 1
None
-      } else if (options.failFast && schema.length != tokens.length) {
+      } else if (options.failFast && dataSchema.length != tokens.length) {
         throw new RuntimeException(s"Malformed line in FAILFAST mode: " +
           s"${tokens.mkString(options.delimiter.toString)}")
       } else {
-        val checkedTokens = if (options.permissive && schema.length > tokens.length) {
-          tokens ++ new Array[String](schema.length - tokens.length)
-        } else if (options.permissive && schema.length < tokens.length) {
-          tokens.take(schema.length)
+        val checkedTokens = if (options.permissive) {
+          // If the length of the parsed tokens differs from the expected one, make them the
+          // same length: if shorter, pad with null tokens at the tail; if longer, drop the
+          // extra tokens.
+          val lengthSafeTokens = if (dataSchema.length > tokens.length) {
+            tokens ++ new Array[String](dataSchema.length - tokens.length)
+          } else if (dataSchema.length < tokens.length) {
+            tokens.take(dataSchema.length)
+          } else {
+            tokens
+          }
+
+          // If we need to handle corrupt fields, insert an extra token to reserve a slot for
+          // malformed strings when loading the parsed tokens into the resulting `row`.
+          corruptFieldIndex.map { corrFieldIndex =>
+            val (front, back) = lengthSafeTokens.splitAt(corrFieldIndex)
+            front ++ new Array[String](1) ++ back
--- End diff --
We have two options: (1) just revert this part, or (2) modify it to avoid the extra array
allocations, e.g. along the lines of the code below. cc: @HyukjinKwon
This is just an example, and it may be a little hard to read:
```
val parsedTokens = new Array[String](schema.length)
...
// If we need to handle corrupt fields, insert an extra token to reserve a slot for
// malformed strings when loading the parsed tokens into the resulting `row`.
corruptFieldIndex.map { corrFieldIndex =>
  lengthSafeTokens.splitAt(corrFieldIndex) match { case (front, back) =>
    front.zipWithIndex.foreach { case (s, i) =>
      parsedTokens(i) = s
    }
    back.zipWithIndex.foreach { case (s, i) =>
      parsedTokens(schema.length - back.length + i) = s
    }
  }
  parsedTokens
}
```
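For reference, here is a standalone sketch contrasting the two variants: the `splitAt`-and-concatenate approach from the diff, and a version that copies into a single pre-allocated array. The object name, the sample schema lengths, and the helper names (`lengthSafe`, `withSlotConcat`, `withSlotCopy`) are all hypothetical, made up for illustration; the real parser works on its own `dataSchema`/`schema` fields. Both variants should produce the same token array, with a null slot reserved at the corrupt-field index.

```scala
object CorruptFieldTokenExample {
  // Hypothetical sample values standing in for the parser's state.
  val dataSchemaLength = 4                     // number of data columns
  val corruptFieldIndex: Option[Int] = Some(2) // position of the corrupt-record column
  val schemaLength = dataSchemaLength + 1      // data columns plus the corrupt-record column

  // Pad or truncate the raw tokens so their length matches the data schema.
  def lengthSafe(tokens: Array[String]): Array[String] = {
    if (dataSchemaLength > tokens.length) {
      tokens ++ new Array[String](dataSchemaLength - tokens.length)
    } else if (dataSchemaLength < tokens.length) {
      tokens.take(dataSchemaLength)
    } else {
      tokens
    }
  }

  // Variant 1: insert a null slot via splitAt and concatenation, as in the diff
  // (allocates intermediate arrays for front, back, and both concatenations).
  def withSlotConcat(tokens: Array[String]): Array[String] = {
    val safe = lengthSafe(tokens)
    corruptFieldIndex.map { idx =>
      val (front, back) = safe.splitAt(idx)
      front ++ new Array[String](1) ++ back
    }.getOrElse(safe)
  }

  // Variant 2: copy into one pre-allocated array, skipping index `idx`
  // (avoids the intermediate concatenations).
  def withSlotCopy(tokens: Array[String]): Array[String] = {
    val safe = lengthSafe(tokens)
    corruptFieldIndex.map { idx =>
      val parsedTokens = new Array[String](schemaLength)
      System.arraycopy(safe, 0, parsedTokens, 0, idx)
      System.arraycopy(safe, idx, parsedTokens, idx + 1, safe.length - idx)
      parsedTokens
    }.getOrElse(safe)
  }

  def main(args: Array[String]): Unit = {
    val tokens = Array("a", "b", "c") // one token short of the data schema
    val v1 = withSlotConcat(tokens)
    val v2 = withSlotCopy(tokens)
    assert(v1.toSeq == v2.toSeq)
    // Slot reserved at index 2 for the corrupt column; null padding at the tail.
    println(v1.map(t => if (t == null) "<null>" else t).mkString(","))
  }
}
```

Running `main` prints `a,b,<null>,c,<null>`: the short row is padded to the data-schema length, then a null slot is inserted at the corrupt-field index by either variant.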