MaxGekk opened a new pull request #27239: [SPARK-30530][SQL] Fix filter 
pushdown for bad CSV records
URL: https://github.com/apache/spark/pull/27239
 
 
   ### What changes were proposed in this pull request?
   In the PR, I propose to fix the bug reported in SPARK-30530: the CSV datasource returns invalid records when `parsedSchema` is shorter than the number of tokens returned by the UniVocity parser. In that case, `UnivocityParser.convert()` always throws a `BadRecordException`, regardless of the result of applying pushed-down filters.
   
   For the described case, I propose to save the exception in `badRecordException` and continue value conversion according to `parsedSchema`. If the bad record doesn't pass the filters, `convert()` returns an empty `Seq`; otherwise it throws `badRecordException`.
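   The proposed flow can be sketched as follows. This is a simplified, Spark-free model in Python, not the actual Scala implementation: the names `convert`, `BadRecordException`, and the filter/schema shapes mirror the PR description, while the casting and predicate machinery is hypothetical stand-in code.

   ```python
   class BadRecordException(Exception):
       """Raised for CSV rows that do not match the expected schema."""

   def convert(tokens, parsed_schema, filters):
       """Model of the fixed UnivocityParser.convert() control flow.

       tokens: raw string fields from the parser.
       parsed_schema: list of (column_name, caster) pairs (hypothetical shape).
       filters: predicates over the partially converted row.
       """
       bad_record_exception = None
       if len(tokens) != len(parsed_schema):
           # Before the fix: the exception was thrown immediately here,
           # ignoring the filters. After the fix: remember it and keep
           # converting values according to parsed_schema.
           bad_record_exception = BadRecordException(
               f"expected {len(parsed_schema)} tokens, got {len(tokens)}")

       row = {}
       for (name, cast), token in zip(parsed_schema, tokens):
           try:
               row[name] = cast(token)
           except ValueError:
               row[name] = None  # malformed field becomes null

       if not all(f(row) for f in filters):
           return []  # filtered out: the bad record is never surfaced
       if bad_record_exception is not None:
           raise bad_record_exception
       return [row]
   ```

   With a filter that rejects the row, a malformed record is silently skipped (empty result); with no filters, the saved exception is still thrown, preserving the existing bad-record behavior.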
   
   ### Why are the changes needed?
   It fixes the bug reported in the JIRA ticket.
   
   ### Does this PR introduce any user-facing change?
   No
   
   ### How was this patch tested?
   Added a new test based on the example from the JIRA ticket.
