cloud-fan commented on a change in pull request #33212:
URL: https://github.com/apache/spark/pull/33212#discussion_r669487760



##########
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JacksonParser.scala
##########
@@ -418,6 +426,19 @@ class JacksonParser(
       }
     }
 
+    // When the input schema is set to `nullable = false`, make sure the field is not null.
+    var index = 0
+    while (badRecordException.isEmpty && !skipRow && index < schema.length) {
+      if (!schema(index).nullable && row.isNullAt(index)) {
+        throw new IllegalSchemaArgumentException(
+          s"the null value found when parsing non-nullable field 
${schema(index).name}.")
+      }
+      if (!checkedIndexSet.contains(index)) {
+        skipRow = structFilters.skipRow(row, index)

Review comment:
       This is not a nullability fix, but a performance improvement: even without 
skipping the rows here, Spark will evaluate the filter again later and drop them 
anyway. Can we exclude this part from this PR? We need to backport the 
nullability fix but not the performance improvement.
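
       For illustration, a minimal sketch of what the nullability check alone 
could look like once the `structFilters.skipRow` call is split out, as suggested 
above. The names used (`schema`, `row`, `badRecordException`, 
`IllegalSchemaArgumentException`) are taken from the diff; the exact shape of 
the final fix is an assumption, not the committed code:

           // Sketch: keep only the nullability check. The structFilters.skipRow
           // call is omitted here, since that is the separate performance improvement.
           var index = 0
           while (badRecordException.isEmpty && index < schema.length) {
             // Reject a null value in any field the schema declares non-nullable.
             if (!schema(index).nullable && row.isNullAt(index)) {
               throw new IllegalSchemaArgumentException(
                 s"the null value found when parsing non-nullable field ${schema(index).name}.")
             }
             index += 1
           }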




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


