Github user snir commented on the issue:
https://github.com/apache/spark/pull/16578
There is a bug in this PR that causes a NullPointerException when
filtering a table containing rows with null structs (it happens with all
data sources, not only Parquet).
Reproducing the crash is super simple:
test.json:
`{"a":{"b":1,"c":2}}`
`{}`
In Spark shell:
`val df = spark.read.json("test.json")`
`df.filter("a.b > 1").show()`
Other than that, the change works well for us.
Thanks.