vladanvasi-db commented on code in PR #47906:
URL: https://github.com/apache/spark/pull/47906#discussion_r1775214133
##########
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/csv/UnivocityParserSuite.scala:
##########
@@ -323,6 +323,41 @@ class UnivocityParserSuite extends SparkFunSuite with SQLHelper {
parameters = Map("fieldName" -> "`i`", "fields" -> ""))
}
+  test("Bad records test in permissive mode") {
+    def checkBadRecord(
+        input: String = "1,a",
+        dataSchema: StructType = StructType.fromDDL("i INTEGER, s STRING, d DOUBLE"),
+        requiredSchema: StructType = StructType.fromDDL("i INTEGER, s STRING"),
+        options: Map[String, String] = Map("mode" -> "PERMISSIVE")): BadRecordException = {
+      val csvOptions = new CSVOptions(options, false, "UTC")
+      val parser = new UnivocityParser(dataSchema, requiredSchema, csvOptions, Seq())
+      intercept[BadRecordException] {
+        parser.parse(input)
+      }
+    }
+
+    // Bad record exception caused by conversion error
Review Comment:
The `PERMISSIVE` mode will "fill the row in a best-effort manner", so if it
throws `BadRecordException`, the `FAILFAST` mode would throw it as well; there
is no need to check that mode separately. On the other hand, `DROPMALFORMED`
mode simply drops the malformed row without throwing, so it is not applicable
when checking for `BadRecordException`.
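To make the reasoning above concrete, here is a minimal, Spark-free sketch of the three parse-mode behaviors. The `ParseMode` objects, `BadRecordException` case class, and `parseLine` function are hypothetical stand-ins, not Spark's actual classes; the point is only that a row failing under `PERMISSIVE` fails identically under `FAILFAST`, while `DROPMALFORMED` discards it silently:

```scala
// Hypothetical sketch of CSV parse-mode semantics (names are illustrative,
// not Spark's real API).
sealed trait ParseMode
case object Permissive extends ParseMode
case object DropMalformed extends ParseMode
case object FailFast extends ParseMode

// Stand-in for Spark's internal bad-record exception.
final case class BadRecordException(record: String) extends Exception(record)

// Parse one CSV line expecting a fixed column count; "1,a" against a
// three-column schema is malformed. Returns Some(tokens) on success,
// None if the row is dropped, or throws on a bad record.
def parseLine(
    line: String,
    expectedColumns: Int,
    mode: ParseMode): Option[Array[String]] = {
  val tokens = line.split(",", -1)
  if (tokens.length == expectedColumns) {
    Some(tokens)
  } else {
    mode match {
      // PERMISSIVE throws internally; a caller catches it and fills the
      // row best-effort with nulls for the failed columns.
      case Permissive    => throw BadRecordException(line)
      // FAILFAST throws on exactly the same records, so testing the
      // PERMISSIVE path covers it.
      case FailFast      => throw BadRecordException(line)
      // DROPMALFORMED never surfaces the exception: the row vanishes.
      case DropMalformed => None
    }
  }
}
```

This is why the test only needs to intercept the exception once: any record that trips `Permissive` trips `FailFast` identically, and `DropMalformed` by construction never produces a `BadRecordException` to intercept.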
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]