viirya commented on a change in pull request #27366:
URL: https://github.com/apache/spark/pull/27366#discussion_r443654085



##########
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JacksonParser.scala
##########
@@ -375,15 +378,19 @@ class JacksonParser(
   private def convertObject(
       parser: JsonParser,
       schema: StructType,
-      fieldConverters: Array[ValueConverter]): InternalRow = {
+      fieldConverters: Array[ValueConverter],
+      structFilters: StructFilters = new NoopFilters()): Option[InternalRow] = {
     val row = new GenericInternalRow(schema.length)
     var badRecordException: Option[Throwable] = None
+    var skipRow = false
 
-    while (nextUntil(parser, JsonToken.END_OBJECT)) {
+    structFilters.reset()
+    while (!skipRow && nextUntil(parser, JsonToken.END_OBJECT)) {
       schema.getFieldIndex(parser.getCurrentName) match {
         case Some(index) =>
           try {
             row.update(index, fieldConverters(index).apply(parser))
+            skipRow = structFilters.skipRow(row, index)
           } catch {
             case e: SparkUpgradeException => throw e
             case NonFatal(e) =>

Review comment:
       If any badRecordException happens, `structFilters.skipRow` is not run 
for that index, so some predicates might be evaluated without all of their 
references set in the row. That is to say, we could fail to filter out some 
rows in this case. I guess that is fine, because we should have a Filter on 
top of the JSON scan operator to catch them. Still, shouldn't we skip 
`structFilters.skipRow` entirely once any badRecordException has happened?
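
       To illustrate, here is a minimal sketch of the guard I have in mind, 
based on the loop in this diff (the bodies of the `NonFatal` and `case None` 
branches are elided above, so what I show for them is my assumption about the 
surrounding code, not something from this PR):

```scala
structFilters.reset()
while (!skipRow && nextUntil(parser, JsonToken.END_OBJECT)) {
  schema.getFieldIndex(parser.getCurrentName) match {
    case Some(index) =>
      try {
        row.update(index, fieldConverters(index).apply(parser))
        // Only evaluate the pushed-down filters while no bad record has been
        // seen; after a badRecordException some references in `row` may be
        // unset, so the predicate results are unreliable anyway.
        if (badRecordException.isEmpty) {
          skipRow = structFilters.skipRow(row, index)
        }
      } catch {
        case e: SparkUpgradeException => throw e
        case NonFatal(e) =>
          // Assumed handling (elided in the diff): remember the first
          // failure and skip the rest of this field's value.
          badRecordException = badRecordException.orElse(Some(e))
          parser.skipChildren()
      }
    case None =>
      // Assumed handling (elided in the diff): the field is not in the
      // schema, so skip its value.
      parser.skipChildren()
  }
}
```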



